6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
Senior Data Scientist, Product Data & Analytics

Our Vision: The Product Data & Analytics team builds internal analytic partnerships, strengthening focus on the health of the business, portfolio and revenue optimization opportunities, initiative tracking, new product development and go-to-market strategies. We are a hands-on global team providing scalable end-to-end data solutions by working closely with the business. We influence decisions across Mastercard through data-driven insights. We are a team of analytics engineers, data architects, BI developers, data analysts and data scientists, and we fully manage our own data assets and solutions.

Are you excited about data assets and the value they bring to an organization? Are you an evangelist for data-driven decision making? Are you motivated to be part of a global analytics team that builds large-scale analytical capabilities supporting end users across the continents? Are you interested in proactively looking to improve data-driven decisions for a global corporation?

Role
- Develop data-driven, innovative, scalable analytical solutions and identify opportunities to support business and client needs in a quantitative manner, facilitating informed recommendations and decisions.
- Deliver high-quality project solutions and tools within agreed-upon timelines and budget parameters, and conduct post-implementation reviews.
- Contribute to the development of custom analyses and solutions, deriving insights from extracted data to solve critical business questions. Activities include developing predictive models, behavioural segmentation frameworks, profitability analyses, ad hoc reporting, and data visualizations.
- Develop AI/ML capabilities, as needed, on large volumes of data to support analytics and reporting needs across products, markets and services.
- Build end-to-end, reusable, multi-purpose AI models to drive automated insights and recommendations.
- Leverage open- and closed-source technologies to solve business problems.
- Work closely with global and regional teams to architect, develop, and maintain advanced reporting and data visualization capabilities on large volumes of data to support analytics and reporting needs across products, markets, and services.
- Support initiatives in developing predictive models, behavioural segmentation frameworks, profitability analyses, ad hoc reporting, and data visualizations.
- Translate client/stakeholder needs into technical analyses and/or custom solutions in collaboration with internal and external partners; derive insights and present findings and outcomes to clients/stakeholders to solve critical business questions.
- Create repeatable processes to support the development of modelling and reporting.
- Delegate and review work for junior-level colleagues to ensure downstream applications and tools are not compromised or delayed.
- Serve as a mentor for junior-level colleagues and develop talent via ongoing technical training, peer review, etc.

All About You
- 6-8 years of experience in data management, data mining, data analytics, data reporting, data product development and quantitative analysis.
- Advanced SQL skills; ability to write optimized queries for large data sets.
- Experience with platforms/environments such as Cloudera Hadoop, the big data technology stack, SQL Server, the Microsoft BI stack, cloud platforms, Snowflake, and other relevant technologies.
- Experience with data visualization tools (Tableau, Domo, and/or Power BI or similar) is a plus.
- Experience with data validation, quality control and cleansing processes for new and existing data sources.
- Experience with classical and deep machine learning algorithms such as logistic regression, decision trees, clustering (k-means, hierarchical and self-organizing maps), t-SNE, PCA, Bayesian models, time series (ARIMA/ARMA), random forest, GBM, k-NN, SVM, text mining techniques, multilayer perceptrons, neural networks (feedforward, CNN), NLP, etc.
- Experience with deep learning techniques, open-source tools and technologies, statistical tools, and programming environments such as Python and R, and big data platforms such as Hadoop, Hive, Spark, and GPU clusters for deep learning.
- Experience in automating and creating data pipelines via tools such as Alteryx and SSIS; NiFi is a plus.
- Financial institution or payments experience is a plus.

Additional Competencies
- Excellent English, quantitative, technical, and communication (oral/written) skills.
- Ownership of end-to-end project delivery and risk mitigation.
- Virtual team management; ability to manage stakeholders by influence.
- Analytical/problem-solving skills.
- Able to prioritize and perform multiple tasks simultaneously.
- Able to work across varying time zones.
- Strong attention to detail and quality.
- Creativity/innovation.
- Self-motivated; operates with a sense of urgency.
- In-depth technical knowledge, drive, and ability to learn new technologies.
- Must be able to interact with management and internal stakeholders.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard’s security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.

#AI
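As an illustration of the clustering techniques listed in the requirements above, here is a minimal k-means sketch in plain Python (toy 2-D points and hand-picked starting centroids; a real pipeline would use a library such as scikit-learn):

```python
# Minimal k-means sketch: assign points to the nearest centroid,
# then recompute centroids, until assignments stop changing.
def kmeans(points, centroids, max_iter=100):
    for _ in range(max_iter):
        # Assignment step: index of the nearest centroid for each point.
        labels = [min(range(len(centroids)),
                      key=lambda c: (p[0] - centroids[c][0]) ** 2
                                  + (p[1] - centroids[c][1]) ** 2)
                  for p in points]
        # Update step: mean of the points assigned to each centroid.
        new_centroids = []
        for c in range(len(centroids)):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                new_centroids.append((sum(p[0] for p in members) / len(members),
                                      sum(p[1] for p in members) / len(members)))
            else:
                new_centroids.append(centroids[c])
        if new_centroids == centroids:   # converged
            break
        centroids = new_centroids
    return labels, centroids

# Two obvious clusters around (0, 0) and (10, 10).
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
labels, cents = kmeans(pts, [(0, 0), (10, 10)])
print(labels)   # the first three points share one label, the last three the other
```

In practice the starting centroids would be chosen randomly (or via k-means++) and the run repeated, but the assign/update loop above is the core of the algorithm.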
Posted 3 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Title and Summary
Software Engineer II

We are the global technology company behind the world’s fastest payments processing network. We are a vehicle for commerce, a connection to financial systems for the previously excluded, a technology innovation lab, and the home of Priceless®. We ensure every employee has the opportunity to be a part of something bigger and to change lives. We believe as our company grows, so should you. We believe in connecting everyone to endless, priceless possibilities.

The Mastercard Launch program is aimed at early-career talent, to help you develop skills and gain cross-functional work experience. Over a period of 18 months, Launch participants will be assigned to a business unit, learn and develop skills, and gain valuable on-the-job experience.

Mastercard has over 2 billion payment cards issued by 25,000+ banks across 190+ countries and territories, amassing over 10 petabytes of data. Millions of transactions flow to Mastercard in real time, providing an ideal environment to apply and leverage AI at scale. The AI team is responsible for building and deploying innovative AI solutions for all divisions within Mastercard, securing a competitive advantage.
Our objectives include achieving operational efficiency, improving customer experience, and ensuring robust value propositions for our core products (Credit, Debit, Prepaid) and services (recommendation engine, anti-money laundering, fraud risk management, cybersecurity).

Role
- Gather relevant information to define the business problem.
- Think creatively to link AI methodologies to identified business challenges.
- Develop AI/ML applications leveraging the latest industry and academic advancements.
- Work cross-functionally and across borders, drawing on a broader team of colleagues to effectively execute the AI agenda.

All About You
- Demonstrated passion for AI, e.g. competing in sponsored challenges such as Kaggle.
- Previous experience with or exposure to:
- Deep learning techniques, open-source tools and technologies, statistical tools, and programming environments such as Python, R, and SQL.
- Big data platforms such as Hadoop, Hive, Spark, and GPU clusters for deep learning.
- Classical machine learning algorithms such as logistic regression, decision trees, clustering (k-means, hierarchical and self-organizing maps), t-SNE, PCA, Bayesian models, time series (ARIMA/ARMA), and recommender systems (collaborative filtering, FPMC, FISM, Fossil).
- Random forest, GBM, k-NN, SVM, Bayesian models, text mining techniques, multilayer perceptrons, and neural networks (feedforward, CNN); LSTMs and GRUs are a plus.
- Optimization techniques: activity regularization (L1 and L2), Adam, Adagrad, and Adadelta concepts.
- Cost functions in neural nets: contrastive loss, hinge loss, binary cross-entropy, and categorical cross-entropy.
- Applications developed in KRR, NLP, speech, and image processing.
- Deep learning frameworks for production systems such as TensorFlow, Keras (for RPD and neural net architecture evaluation), PyTorch, XGBoost, Caffe, and Theano are a plus.
- Concentration in Computer Science.
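For the optimization techniques and cost functions named above, here is a minimal sketch of the Adam update rule minimising binary cross-entropy for a single logistic weight, in plain Python (toy data; illustrative only, not a production training loop):

```python
from math import exp, log, sqrt

# One logistic weight trained with Adam to minimise binary cross-entropy
# on a tiny, linearly separable dataset of (x, label) pairs.
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

w = 0.0
m = v = 0.0                              # first and second moment estimates
beta1, beta2, lr, eps = 0.9, 0.999, 0.1, 1e-8

for t in range(1, 201):
    # Gradient of mean binary cross-entropy w.r.t. w is mean((p - y) * x).
    g = sum((sigmoid(w * x) - y) * x for x, y in data) / len(data)
    m = beta1 * m + (1 - beta1) * g      # exponential moving averages
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)         # bias correction
    v_hat = v / (1 - beta2 ** t)
    w -= lr * m_hat / (sqrt(v_hat) + eps)

loss = -sum(y * log(sigmoid(w * x)) + (1 - y) * log(1 - sigmoid(w * x))
            for x, y in data) / len(data)
print(w > 0, loss < 0.2)   # prints: True True
```

The same update rule generalises to vectors of weights; frameworks such as TensorFlow and PyTorch provide it as a built-in optimizer.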
Posted 3 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Title and Summary
Manager, Data Scientist

AI Garage is responsible for establishing Mastercard as an AI powerhouse. AI will be leveraged and implemented at scale within Mastercard, providing a foundational competitive advantage for the future. All internal processes, products and services will be enabled by AI, continuously advancing our value proposition, consumer experience, and efficiency.

Opportunity
Join Mastercard's AI Garage in Gurgaon, a newly created strategic business unit executing on identified use cases for product optimization and operational efficiency, securing Mastercard's competitive advantage through all things AI. The AI professional will be responsible for the creative application and execution of AI use cases, working collaboratively with other AI professionals and business stakeholders to effectively drive the AI mandate.
Role
- Ensure all AI solution development is in line with industry standards for data management and privacy compliance, including the collection, use, storage, access, retention, output, reporting, and quality of data at Mastercard.
- Adopt a pragmatic approach to AI; articulate complex technical requirements in a manner that is simple and relevant to stakeholder use cases.
- Gather relevant information to define the business problem, interfacing with global stakeholders.
- Think creatively to link AI methodologies to identified business challenges.
- Identify commonalities among use cases, enabling a microservice approach to scaling AI at Mastercard by building reusable, multi-purpose models.
- Develop AI/ML solutions and applications leveraging the latest industry and academic advancements.
- Leverage open- and closed-source technologies to solve business problems.
- Work cross-functionally and across borders, drawing on a broader team of colleagues to effectively execute the AI agenda.
- Partner with technical teams to implement developed solutions/applications in a production environment.
- Support a learning culture that continuously advances AI capabilities.

All About You
- Experience in the data sciences field with a focus on AI strategy and execution, developing solutions from scratch.
- Demonstrated passion for AI, e.g. competing in sponsored challenges such as Kaggle.
- Previous experience with or exposure to:
- Deep learning techniques, open-source tools and technologies, statistical tools, and programming environments such as Python, R, and SQL.
- Big data platforms such as Hadoop, Hive, Spark, and GPU clusters for deep learning.
- Classical machine learning algorithms such as logistic regression, decision trees, clustering (k-means, hierarchical and self-organizing maps), t-SNE, PCA, Bayesian models, time series (ARIMA/ARMA), and recommender systems (collaborative filtering, FPMC, FISM, Fossil).
- Random forest, GBM, k-NN, SVM, Bayesian models, text mining techniques, multilayer perceptrons, and neural networks (feedforward, CNN); LSTMs and GRUs are a plus.
- Optimization techniques: activity regularization (L1 and L2), Adam, Adagrad, and Adadelta concepts.
- Cost functions in neural nets: contrastive loss, hinge loss, binary cross-entropy, and categorical cross-entropy.
- Applications developed in KRR, NLP, speech, and image processing.
- Deep learning frameworks for production systems such as TensorFlow, Keras (for RPD and neural net architecture evaluation), PyTorch, XGBoost, Caffe, and Theano are a plus.
- Exposure to or experience using collaboration tools such as Confluence (documentation), Bitbucket/Stash (code sharing), shared folders (file sharing), and ALM (project management).
- Knowledge of the payments industry is a plus.
- Experience with the SAFe (Scaled Agile Framework) process is a plus.

Effectiveness
- Effective at managing and validating assumptions with key stakeholders in compressed timeframes, without hampering development momentum.
- Capable of navigating a complex organization in a relentless pursuit of answers and clarity.
- Enthusiasm for data sciences, embracing the creative application of AI techniques to improve an organization's effectiveness.
- Ability to understand technical system architecture and overarching function, along with interdependency elements, and to anticipate challenges for immediate remediation.
- Ability to unpack complex problems into addressable segments and evaluate the AI methods most applicable to each segment.
- Incredible attention to detail and focus, instilling confidence without qualification in developed solutions.

Core Capabilities
- Strong written and oral communication skills.
- Strong project management skills.
- Concentration in Computer Science.
- Some international travel required.

#AI1
Posted 3 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Title and Summary
Senior Data Scientist

We are the global technology company behind the world’s fastest payments processing network. We are a vehicle for commerce, a connection to financial systems for the previously excluded, a technology innovation lab, and the home of Priceless®. We ensure every employee has the opportunity to be a part of something bigger and to change lives. We believe as our company grows, so should you. We believe in connecting everyone to endless, priceless possibilities.

The Mastercard Launch program is aimed at early-career talent, to help you develop skills and gain cross-functional work experience. Over a period of 18 months, Launch participants will be assigned to a business unit, learn and develop skills, and gain valuable on-the-job experience.

Mastercard has over 2 billion payment cards issued by 25,000+ banks across 190+ countries and territories, amassing over 10 petabytes of data. Millions of transactions flow to Mastercard in real time, providing an ideal environment to apply and leverage AI at scale. The AI team is responsible for building and deploying innovative AI solutions for all divisions within Mastercard, securing a competitive advantage.
Our objectives include achieving operational efficiency, improving customer experience, and ensuring robust value propositions for our core products (Credit, Debit, Prepaid) and services (recommendation engine, anti-money laundering, fraud risk management, cybersecurity).

Role
- Gather relevant information to define the business problem.
- Think creatively to link AI methodologies to identified business challenges.
- Develop AI/ML applications leveraging the latest industry and academic advancements.
- Work cross-functionally and across borders, drawing on a broader team of colleagues to effectively execute the AI agenda.

All About You
- Demonstrated passion for AI, e.g. competing in sponsored challenges such as Kaggle.
- Previous experience with or exposure to:
- Deep learning techniques, open-source tools and technologies, statistical tools, and programming environments such as Python, R, and SQL.
- Big data platforms such as Hadoop, Hive, Spark, and GPU clusters for deep learning.
- Classical machine learning algorithms such as logistic regression, decision trees, clustering (k-means, hierarchical and self-organizing maps), t-SNE, PCA, Bayesian models, time series (ARIMA/ARMA), and recommender systems (collaborative filtering, FPMC, FISM, Fossil).
- Random forest, GBM, k-NN, SVM, Bayesian models, text mining techniques, multilayer perceptrons, and neural networks (feedforward, CNN); LSTMs and GRUs are a plus.
- Optimization techniques: activity regularization (L1 and L2), Adam, Adagrad, and Adadelta concepts.
- Cost functions in neural nets: contrastive loss, hinge loss, binary cross-entropy, and categorical cross-entropy.
- Applications developed in KRR, NLP, speech, and image processing.
- Deep learning frameworks for production systems such as TensorFlow, Keras (for RPD and neural net architecture evaluation), PyTorch, XGBoost, Caffe, and Theano are a plus.
- Concentration in Computer Science.
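As a sketch of the collaborative-filtering style of recommender mentioned in the requirements, here is a toy user-based variant in plain Python (made-up ratings and a simple unweighted cosine similarity; production systems typically use matrix factorisation or learned models):

```python
from math import sqrt

# User-based collaborative filtering: score an unseen item for a target
# user as a similarity-weighted average of other users' ratings.
ratings = {
    "alice": {"a": 5, "b": 4, "c": 1},
    "bob":   {"a": 5, "b": 5, "c": 1},
    "carol": {"a": 1, "b": 1, "c": 5},
}

def cosine(u, v):
    # Cosine similarity over the items both users rated.
    shared = set(u) & set(v)
    num = sum(u[i] * v[i] for i in shared)
    den = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

def predict(user, item):
    # Weighted average of the ratings of users who rated the item.
    num = den = 0.0
    for other, r in ratings.items():
        if other != user and item in r:
            s = cosine(ratings[user], r)
            num += s * r[item]
            den += abs(s)
    return num / den if den else 0.0

# "dave" rates like alice and bob, so an item they loved scores high for him.
ratings["dave"] = {"b": 5, "c": 1}
print(predict("dave", "a"))
```

The prediction for item "a" lands well above the scale midpoint because dave's nearest neighbours (alice, bob) both rated it 5.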
Posted 3 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Title and Summary
Senior Data Scientist

AI Garage is responsible for establishing Mastercard as an AI powerhouse. AI will be leveraged and implemented at scale within Mastercard, providing a foundational competitive advantage for the future. All internal processes, products and services will be enabled by AI, continuously advancing our value proposition, consumer experience, and efficiency.

Opportunity
Join Mastercard's AI Garage in Gurgaon, a newly created strategic business unit executing on identified use cases for product optimization and operational efficiency, securing Mastercard's competitive advantage through all things AI. The AI professional will be responsible for the creative application and execution of AI use cases, working collaboratively with other AI professionals and business stakeholders to effectively drive the AI mandate.
Role
- Ensure all AI solution development is in line with industry standards for data management and privacy compliance, including the collection, use, storage, access, retention, output, reporting, and quality of data at Mastercard.
- Adopt a pragmatic approach to AI; articulate complex technical requirements in a manner that is simple and relevant to stakeholder use cases.
- Gather relevant information to define the business problem, interfacing with global stakeholders.
- Think creatively to link AI methodologies to identified business challenges.
- Identify commonalities among use cases, enabling a microservice approach to scaling AI at Mastercard by building reusable, multi-purpose models.
- Develop AI/ML solutions and applications leveraging the latest industry and academic advancements.
- Leverage open- and closed-source technologies to solve business problems.
- Work cross-functionally and across borders, drawing on a broader team of colleagues to effectively execute the AI agenda.
- Partner with technical teams to implement developed solutions/applications in a production environment.
- Support a learning culture that continuously advances AI capabilities.

All About You
- Experience in the data sciences field with a focus on AI strategy and execution, developing solutions from scratch.
- Demonstrated passion for AI, e.g. competing in sponsored challenges such as Kaggle.
- Previous experience with or exposure to:
- Deep learning techniques, open-source tools and technologies, statistical tools, and programming environments such as Python, R, and SQL.
- Big data platforms such as Hadoop, Hive, Spark, and GPU clusters for deep learning.
- Classical machine learning algorithms such as logistic regression, decision trees, clustering (k-means, hierarchical and self-organizing maps), t-SNE, PCA, Bayesian models, time series (ARIMA/ARMA), and recommender systems (collaborative filtering, FPMC, FISM, Fossil).
- Random forest, GBM, k-NN, SVM, Bayesian models, text mining techniques, multilayer perceptrons, and neural networks (feedforward, CNN); LSTMs and GRUs are a plus.
- Optimization techniques: activity regularization (L1 and L2), Adam, Adagrad, and Adadelta concepts.
- Cost functions in neural nets: contrastive loss, hinge loss, binary cross-entropy, and categorical cross-entropy.
- Applications developed in KRR, NLP, speech, and image processing.
- Deep learning frameworks for production systems such as TensorFlow, Keras (for RPD and neural net architecture evaluation), PyTorch, XGBoost, Caffe, and Theano are a plus.
- Exposure to or experience using collaboration tools such as Confluence (documentation), Bitbucket/Stash (code sharing), shared folders (file sharing), and ALM (project management).
- Knowledge of the payments industry is a plus.
- Experience with the SAFe (Scaled Agile Framework) process is a plus.

Effectiveness
- Effective at managing and validating assumptions with key stakeholders in compressed timeframes, without hampering development momentum.
- Capable of navigating a complex organization in a relentless pursuit of answers and clarity.
- Enthusiasm for data sciences, embracing the creative application of AI techniques to improve an organization's effectiveness.
- Ability to understand technical system architecture and overarching function, along with interdependency elements, and to anticipate challenges for immediate remediation.
- Ability to unpack complex problems into addressable segments and evaluate the AI methods most applicable to each segment.
- Incredible attention to detail and focus, instilling confidence without qualification in developed solutions.

Core Capabilities
- Strong written and oral communication skills.
- Strong project management skills.
- Concentration in Computer Science.
- Some international travel required.
Posted 3 days ago
1.0 - 5.0 years
0 Lacs
karnataka
On-site
The ideal candidate's favourite words are learning, data, scale, and agility. You will leverage your strong collaboration skills and ability to extract valuable insights from highly complex data sets to ask the right questions and find the correct answers.

Responsibilities
- Analyze raw data: assess quality, cleanse, and structure it for downstream processing.
- Design accurate and scalable prediction algorithms.
- Perform molecular docking and simulation.
- Integrate biological data.
- Carry out computational biology related work.

Qualifications
- Bachelor's degree or equivalent experience in a quantitative field (bioinformatics, biotechnology, statistics, computer science, engineering, etc.).
- At least 1-2 years of experience in quantitative analytics or data modelling; freshers can also apply.
- Deep understanding of predictive modelling, machine learning, clustering and classification techniques, and algorithms.
- Fluency in a programming language (Python, R, C, C++, Java, SQL).
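As a small illustration of the classification techniques mentioned above, here is a k-nearest-neighbours sketch in plain Python (toy features and labels; illustrative only):

```python
from math import dist  # Euclidean distance, Python 3.8+

# k-nearest-neighbours: classify a query point by majority vote
# among the k closest labelled training points.
def knn_predict(train, query, k=3):
    # train: list of ((feature, feature, ...), label) pairs
    nearest = sorted(train, key=lambda t: dist(t[0], query))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)  # majority vote

train = [((0.0, 0.0), "low"), ((0.1, 0.2), "low"), ((0.3, 0.1), "low"),
         ((5.0, 5.0), "high"), ((5.2, 4.9), "high"), ((4.8, 5.1), "high")]
print(knn_predict(train, (0.2, 0.1)))   # prints: low
print(knn_predict(train, (5.1, 5.0)))   # prints: high
```

With real biological data the features would be normalised first, since k-NN is sensitive to feature scale.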
Posted 3 days ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: Big Data Developer
Location: Chennai
Experience: 7+ years
Work Mode: Work from Office

Key Skills Required
- Google Cloud Platform (GCP): BigQuery (BQ), Dataflow, Dataproc, Cloud Spanner
- Strong knowledge of distributed systems, data processing frameworks, and big data architecture
- Proficiency in programming languages such as Python, Java, or Scala

Roles and Responsibilities
BigQuery (BQ):
- Design and develop scalable data warehouses using BigQuery.
- Optimize SQL queries for performance and cost-efficiency in BigQuery.
- Implement data partitioning and clustering strategies.

Dataflow:
- Build and maintain batch and streaming data pipelines using Apache Beam on GCP Dataflow.
- Ensure data transformation, enrichment, and cleansing as per business needs.
- Monitor and troubleshoot pipeline performance issues.

Dataproc:
- Develop and manage Spark and Hadoop jobs on GCP Dataproc.
- Perform ETL/ELT operations using PySpark, Hive, or other tools.
- Automate and orchestrate jobs for scheduled data workflows.

Cloud Spanner:
- Design and manage globally distributed, scalable transactional databases using Cloud Spanner.
- Optimize schema and query design for performance and reliability.
- Implement high availability and disaster recovery strategies.

General Responsibilities:
- Collaborate with data architects, analysts, and business stakeholders to understand data requirements.
- Implement data quality and data governance best practices.
- Ensure security and compliance with GCP data handling standards.
- Participate in code reviews, CI/CD deployments, and Agile development cycles.
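The partitioning and clustering strategies mentioned above can be expressed directly in BigQuery DDL. Here is an illustrative statement (the dataset, table, and column names are made up for the example), built and printed from Python:

```python
# Illustrative BigQuery DDL: a table partitioned by day and clustered by
# merchant, so queries filtering on event_ts / merchant_id prune data and
# scan fewer bytes. Names here are hypothetical, not from the posting.
ddl = """
CREATE TABLE analytics.transactions (
  txn_id STRING,
  merchant_id STRING,
  amount NUMERIC,
  event_ts TIMESTAMP
)
PARTITION BY DATE(event_ts)
CLUSTER BY merchant_id
OPTIONS (partition_expiration_days = 365)
""".strip()
print(ddl)
```

A query with `WHERE DATE(event_ts) = '2024-01-01' AND merchant_id = '…'` then touches only one partition, and clustering narrows the scan further within it.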
Posted 3 days ago
3.0 - 9.0 years
0 Lacs
karnataka
On-site
The DBA Lead with MySQL expertise is responsible for designing, implementing, and maintaining critical MySQL database systems. You will provide strong technical leadership, mentoring a team of DBAs and ensuring the integrity, availability, and security of the organization's data. Collaboration with development teams to optimize database performance and ensure operational efficiency is a key aspect of this role.

Database Architecture & Design: design and implement scalable, highly available MySQL database solutions; collaborate with application development teams to understand data requirements and ensure optimal database performance; develop and enforce database standards, procedures, and best practices.

Database Administration & Maintenance: install, configure, upgrade, and patch MySQL databases; monitor and address database performance issues proactively; implement backup and recovery procedures; ensure data integrity and security.

Performance Tuning & Optimization: identify and resolve performance bottlenecks; optimize queries and indexes for enhanced efficiency; continuously monitor and fine-tune database systems for optimal performance.

Troubleshooting & Support: diagnose and resolve complex database-related issues; provide technical support to development teams; participate in an on-call rotation for critical database issues.

Team Leadership & Mentoring: offer technical guidance and mentorship to the DBA team, delegate tasks effectively, and foster a collaborative team environment.

Required Skills & Qualifications
- Technical expertise: extensive experience in MySQL database administration, including installation, configuration, and performance tuning; strong knowledge of SQL and database design principles; experience with replication, clustering, and high availability solutions; familiarity with Linux/Unix operating systems; proficiency in scripting languages (e.g., Bash, Python) for automation.
- Leadership & communication: proven leadership and mentoring experience in a technical setting; excellent communication and collaboration skills; strong analytical and problem-solving abilities.

Education & Experience
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 9+ years of experience in MySQL database administration.
- 3+ years of experience in a team lead or supervisory role.
- MySQL certifications are a plus.

Benefits include a competitive salary and benefits package, the opportunity to work with cutting-edge technologies, a collaborative and supportive work environment, and potential for professional growth and advancement.
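The query- and index-tuning workflow described above can be sketched generically. Since a self-contained snippet can't assume a running MySQL server, this example uses Python's built-in SQLite as a stand-in to show the general pattern (MySQL's `EXPLAIN` output differs in detail, but the read-plan/add-index/re-check loop is the same):

```python
import sqlite3

# Index-tuning sketch: inspect the query plan, add an index on the
# filtered column, and confirm the plan now uses the index.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 50, i * 1.5) for i in range(1000)])

def plan(sql):
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(r[-1] for r in rows)  # last column is the plan detail text

query = "SELECT total FROM orders WHERE customer_id = 7"
before = plan(query)            # full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)             # indexed lookup
print("USING INDEX" in after)   # prints: True
```

On MySQL the equivalent check is `EXPLAIN SELECT …` before and after `CREATE INDEX`, watching the access type change from a full scan to a `ref` lookup.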
Posted 3 days ago
8.0 - 15.0 years
0 Lacs
navi mumbai, maharashtra
On-site
As the Manager - Data Science at Holcim, you will play a crucial role in the Group's Global Advanced Analytics CoE by enabling our Businesses for Insights-Driven Operations and Decision-Making through the utilization of cutting-edge Analytics tools and techniques. Your primary responsibility will involve working closely with Business / Domain Subject Matter Experts to understand pain points and opportunities, and develop analytical models to identify patterns and predict outcomes of key business processes. You will be required to identify the most suitable modeling techniques and apply Machine Learning and Deep Learning Algorithms to create self-correcting models and algorithms. It will be essential to collaborate with Product Development teams to industrialize AI / ML models and conduct rapid and iterative prototyping of minimum viable solutions. Additionally, you will test hypotheses on raw datasets to derive meaningful insights and identify new opportunity areas. Your role will encompass all aspects of data including data acquisition, data exploration, feature engineering, building and optimizing models, and deploying Gen AI Solutions. You will be involved in designing full stack ML solutions in a distributed computing environment such as AWS and GCP, with the possibility of developing ML solutions for deployment on the edge or mobile devices if required. To be successful in this role, you should possess a total experience of 12-15 years with at least 8 years of relevant Analytics experience. Industry Experience and knowledge, particularly in Manufacturing/Operations Functions within the Building Material Industry, Manufacturing, Process, or Pharma sectors, is preferred. Hands-on experience in statistical and data science techniques, as well as developing and deploying Gen AI Solutions, will be critical for this position. 
In terms of technical skills, you should have over 8 years of hands-on experience in advanced Machine Learning & Deep Learning techniques and algorithms, such as Decision Trees, Random Forests, SVMs, Regression, Clustering, Neural Networks, CNNs, RNNs, LSTMs, and Transformers. Proficiency in statistical computer languages like Python and PySpark to manipulate data and draw insights from large datasets is essential. Experience with Cloud platforms like AWS, DL frameworks such as TensorFlow, Keras, or PyTorch, and familiarity with business intelligence tools and data frameworks will be advantageous. In addition to technical skills, leadership and soft skills are equally important for this role. You should lead by example on values and culture, be open-minded, collaborative, and an effective team player. Working in a multicultural and diverse team, dealing with ambiguity, and communicating openly and effectively with various stakeholders are key aspects of this position. Being driven for success, aspiring to a culture of service excellence, and always prioritizing customer satisfaction, people, and business are qualities that will set you up for success in this role. If you are motivated by the opportunity to make a significant impact through data-driven decisions and innovative solutions, we invite you to build your future with us at Holcim by applying for this role.
Posted 3 days ago
0 years
0 Lacs
Delhi, India
On-site
Selected Intern's Day-to-day Responsibilities Include Collect and extract data from primary and secondary sources using automated tools, databases, APIs, or web scraping techniques. Assist in cleaning, preprocessing, and organizing data for analysis and reporting purposes. Support the identification of patterns, trends, and anomalies in structured and unstructured datasets. Work with tools such as Python, Excel, SQL, or R to process large volumes of data. Collaborate with analysts to build and maintain data dashboards, visualizations, and reports. Assist in applying data mining techniques such as classification, clustering, regression, and association rule mining. Help maintain documentation and data integrity standards to ensure accurate results. Conduct competitor or market intelligence research using data scraping and public databases (if applicable). Stay updated on data privacy regulations and ensure ethical handling of sensitive data. About Company: LogicizeIP is devoted to helping individual entrepreneurs and inventors, institutions, universities, and companies secure their IP rights for their ideas, inventions, and designs at an affordable cost worldwide. We started our journey with the mission of making IP services easier for new and existing users to protect their creations. We aim to make experiencing IP services as convenient & hassle-free as possible. We ensure that your idea always belongs to you and is completely protected. Our vision is to encourage inventiveness and shield it.
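Of the data mining techniques listed above, clustering is the easiest to sketch end to end. A minimal k-means implementation in plain Python (no external libraries); the toy points and initial centroids are invented for the example:

```python
# Minimal k-means sketch: assignment step + centroid-update step,
# repeated for a fixed number of iterations. Toy data only.
def kmeans(points, centroids, iters=10):
    for _ in range(iters):
        # assignment: each point goes to its nearest centroid
        # (squared Euclidean distance)
        clusters = [[] for _ in centroids]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        # update: each centroid moves to the mean of its assigned points
        centroids = [
            tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else c
            for cl, c in zip(clusters, centroids)
        ]
    return centroids, clusters

# two visually separated toy groups
pts = [(1, 1), (1.5, 2), (1, 0.5), (8, 8), (9, 9), (8, 9.5)]
cents, groups = kmeans(pts, centroids=[(0, 0), (10, 10)])
```

In practice an intern would reach for scikit-learn's `KMeans` rather than hand-rolling this, but the two steps above are exactly what that library iterates.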
Posted 3 days ago
2.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description: We're seeking a skilled and motivated ML/AI Engineer to join our team and drive end-to-end AI development for cutting-edge healthcare prediction models. As part of our AI delivery team you will be working on designing, developing, and deploying ML models to solve complex business challenges. Key Responsibilities: Design, develop, and deploy scalable machine learning models and AI solutions. Collaborate with engineers and product managers to understand business requirements and translate them into technical solutions. Analyse and preprocess large datasets for training and testing ML models. Experiment with different ML algorithms and techniques to improve model performance. Skills & Qualifications: Experience: B.Tech with 2-5 years of relevant experience in Machine Learning, AI, or Data Science roles, OR M.Tech / M.Stat. with 2-4 years of relevant experience in Machine Learning, AI, or Data Science roles. Education: Bachelor's/Master's degree in Computer Science, Statistics, or any other relevant engineering/AI course from a Tier 1 or Tier 2 college. Technical Skills: Proficiency in programming languages such as Python & R. Strong knowledge of classical machine learning algorithms with hands-on experience in the following: Supervised (classification models, regression models); Unsupervised (clustering algorithms, autoencoders); Ensemble models (stacking, bagging, boosting techniques, Random Forest, XGBoost). Experience in data preprocessing, feature engineering, and handling large-scale datasets. Model evaluation techniques like accuracy, precision, recall, F1 score, AUC-ROC for classification, and MAE, MSE, RMSE, R-squared for regression. Explainable AI (XAI) techniques such as SHAP values, LIME, feature importance from decision trees, and partial dependence plots. Experience with ML frameworks like TensorFlow, PyTorch, Scikit-learn, or Keras. 
Building & deploying model APIs using frameworks like Flask, FastAPI, Django, TensorFlow Serving, etc. Knowledge of cloud platforms like Azure (preferred), AWS, GCP, and experience deploying models in such environments. Nice to Have: Object-oriented programming. Familiarity with NLP or time series analysis. Exposure to deep learning models (RNN, LSTM) and working with GPUs.
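The classification metrics the posting lists (accuracy, precision, recall, F1) reduce to simple counts over the confusion matrix. A from-scratch sketch on made-up labels:

```python
# Classification metrics computed from scratch on toy binary labels.
# Positive class is 1.
def classification_metrics(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

m = classification_metrics([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 1, 1])
```

Libraries like scikit-learn ship these as `precision_score`, `recall_score`, etc.; the arithmetic is identical.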
Posted 3 days ago
5.0 - 12.0 years
0 Lacs
karnataka
On-site
You will be joining a leading AI-driven Global Supply Chain Solutions Software Product Company, recognized as one of Glassdoor's Best Places to Work. As a Data Science Consultant in our Supply Chain Management team, your primary responsibility will be to utilize data-driven insights for optimizing supply chain processes, enhancing decision-making, and achieving business results. Collaborating with cross-functional teams, you will employ advanced data analytics, machine learning, and optimization techniques to improve supply chain efficiency, forecast demand, and address complex operational challenges. Your key responsibilities will include: - Conducting Data Analysis & Modeling: Analyze large, complex datasets related to demand forecasting and supply planning to extract actionable insights. - Implementing Machine Learning Techniques: Apply statistical and machine learning methods to forecast demand, involving data ingestion, visualization, feature engineering, model configuration, fine-tuning, and clear presentation of results. - Utilizing Supply Chain Optimization Algorithms: Deploy optimization algorithms to enhance supply chain efficiency, reduce costs, and improve service levels. - Providing Consulting & Advisory Support: Serve as a subject matter expert in supply chain analytics, offering guidance on best practices and strategies for effective supply chain management. - Driving Process Improvement: Identify inefficiencies within the supply chain and propose data-driven strategies for continuous enhancement. We are looking for candidates with the following qualifications: - Educational Background: Bachelor's or Master's degree in Data Science, Computer Science, Industrial Engineering, Operations Research, Supply Chain Management, or a related field. - Experience: 5-12 years of experience in data science or Supply Chain Operations Analytics, with a focus on forecasting projects utilizing statistical/ML models and data visualization tools. 
- Technical Skills: Proficiency in data science tools and programming languages such as Python, R, SQL, along with experience in supply chain management platforms/processes. Strong applied statistics skills are required. - Machine Learning Expertise: Hands-on experience with machine learning techniques/algorithms (e.g., regression, classification, clustering) and optimization models. - Industry Knowledge: Familiarity with supply chain processes like demand forecasting, inventory management, procurement, logistics, and distribution. - Analytical Thinking: Strong problem-solving abilities to analyze complex data sets and translate insights into actionable recommendations. - Personal Competencies: Ability to work independently or as part of a team in a consulting environment, attention to detail in managing large data sets, and effective communication skills for presenting findings to non-technical stakeholders. If you resonate with our core values and are passionate about fostering diversity, inclusion, value, and equity, we invite you to be a part of our team at Blue Yonder. Explore our Diversity Report and join us in celebrating the differences that make us stronger together.
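The simplest statistical model behind the demand-forecasting work described above is an ordinary-least-squares trend fit. A toy sketch in plain Python; the monthly demand figures are invented:

```python
# Toy demand forecast: fit a straight line y = slope * t + intercept
# by ordinary least squares, then extrapolate one period ahead.
def fit_trend(y):
    n = len(y)
    xs = range(n)
    mx, my = sum(xs) / n, sum(y) / n
    slope = (sum((x - mx) * (v - my) for x, v in zip(xs, y))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

demand = [100, 110, 120, 130, 140, 150]   # invented units sold per month
slope, intercept = fit_trend(demand)
next_month = slope * len(demand) + intercept
```

Real forecasting projects layer seasonality, exogenous drivers, and ML models on top, but they are benchmarked against exactly this kind of naive trend baseline.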
Posted 3 days ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
Would being part of a digital transformation excite you? Are you passionate about infrastructure security? Join our digital transformation team. We operate at the heart of the digital transformation of our business. Our team is responsible for the cybersecurity, architecture, and data protection for our global organization. We advise on the design and validation of all systems, infrastructure, technologies, and data protection. Partner with the best. As a Staff Infrastructure Architect, you will support the design and execution of our infrastructure security roadmap. Collaborating with global teams and customers, you will help architect solutions that enable our business to grow in an agile and secure manner. You will be responsible for supporting and improving our tools/process for continuous deployment management, supporting solution Infra Architect to deploy the application and infra to customer private/public cloud, debugging the Docker images/containers, Kubernetes clusters issues, building monitoring tools around Kubernetes/AKS clusters, and developing process tools to track the customer releases and create update plans. You will also be responsible for developing processes to ensure the patching/updates take place without affecting the operation SLA, meeting availability SLA working with Infra and application team responsible for 24x7, profiling deployment process and identifying bottlenecks, demonstrating expertise in writing scripts to automate tasks, implementing Continuous Integration/Deployment build principles, providing expertise in the quality engineering, test planning, and testing methodology for developed code/images/containers, and helping businesses develop an overall strategy for deploying code. 
To be successful in this role, you will need a Bachelor's education in Computer Science, IT, or Engineering, at least 4+ years of production experience providing hands-on technical expertise to design, deploy, secure, and optimize Cloud services, hands-on experience with containerization technologies (Docker, Kubernetes) is a must (minimum 2 years), experience with creating, maintaining, and deploying automated build tools for a minimum of 2 years, in-depth knowledge of Clustering, Load Balancing, High Availability, and Disaster Recovery, Auto Scaling, Infrastructure-as-Code (IaC) using Terraform/CloudFormation, good knowledge of Application & Infrastructure Monitoring Tools like Prometheus, Grafana, Kibana, New Relic, Nagios, hands-on experience of CI/CD tools like Jenkins, understanding of standard networking concepts such as DNS, DHCP, subnets, Server Load Balancing, Firewalls, knowledge of Web-based application development, strong knowledge of Unix/Linux and/or Windows operating systems, experience with common scripting languages (Bash, Perl, Python, Ruby), and the ability to assess code, build it, and run applications locally on their own. Additionally, you should have experience with creating and maintaining automated build tools, facilitating and coaching software engineering team sessions on requirements estimation and alternative approaches to team sizing and estimation, publishing guidance and documentation to promote adoption of design, proposing design solutions based on research and synthesis, creating general design principles that capture the vision and critical concerns for a program, and demonstrating mastery of the intricacies of interactions and dynamics in Agile teams. We recognize that everyone is different and that the way in which people want to work and deliver at their best is different for everyone too. 
In this role, we can offer flexible working patterns, including working remotely from home or any other work location and flexibility in your work schedule to help fit in around life. Talk to us about your desired flexible working options when you apply. Our people are at the heart of what we do at Baker Hughes. We know we are better when all of our people are developed, engaged, and able to bring their whole authentic selves to work. We invest in the health and well-being of our workforce, train and reward talent, and develop leaders at all levels to bring out the best in each other. About Us: We are an energy technology company that provides solutions to energy and industrial customers worldwide. Built on a century of experience and conducting business in over 120 countries, our innovative technologies and services are taking energy forward, making it safer, cleaner, and more efficient for people and the planet. Join Us: Are you seeking an opportunity to make a real difference in a company that values innovation and progress? Join us and become part of a team of people who will challenge and inspire you! Let's come together and take energy forward.
Posted 4 days ago
3.0 - 7.0 years
0 Lacs
madurai, tamil nadu
On-site
As a DBA Expert, you will be required to have strong knowledge of CouchDB architecture, including replication and clustering. Your expertise should include experience with JSON, MapReduce functions, and query optimization for efficient data retrieval. You should be skilled in CouchDB performance tuning and effective indexing strategies to ensure optimal database functionality. Additionally, familiarity with monitoring tools like Prometheus, Grafana, etc., for real-time performance tracking is essential. Your responsibilities will also include understanding backup strategies, including full backups, incremental backups, replication-based backups, automated scheduling, storage policies, and restoration processes. You should be capable of troubleshooting database locks, conflicts, and indexing delays to prevent performance bottlenecks and ensure smooth operations. The ideal candidate for this role should possess strong experience with Apache CouchDB and a solid understanding of NoSQL database concepts, document stores, and JSON-based structures. Experience with CouchDB replication, clustering, and eventual consistency models is a must. Knowledge of data backup/recovery and security best practices is also crucial for this position. In addition to technical skills, you should have excellent problem-solving and communication skills to effectively collaborate with team members and stakeholders. Your ability to analyze complex database issues and propose effective solutions will be key to success in this role.
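The MapReduce views mentioned above live in CouchDB design documents: JSON documents holding a JavaScript map function (and optionally a built-in reduce). A sketch that assembles one in Python; the database and field names (`orders`, `customer_id`, `total`) are hypothetical:

```python
import json

# Sketch of a CouchDB design document defining a MapReduce view.
# Field names are invented; in a real deployment this JSON would be
# PUT to http://<host>:5984/<db>/_design/orders.
design_doc = {
    "_id": "_design/orders",
    "views": {
        "total_by_customer": {
            # map emits one (key, value) row per document
            "map": "function (doc) { emit(doc.customer_id, doc.total); }",
            # built-in reduce sums emitted values per key
            "reduce": "_sum",
        }
    },
    "language": "javascript",
}
payload = json.dumps(design_doc)
```

Querying the view with `group=true` would then return total spend per customer, which is the kind of aggregation a CouchDB DBA is expected to design and tune.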
Posted 4 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
This role has been designed as ‘Hybrid’ with an expectation that you will work on average 2 days per week from an HPE office. Who We Are Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today’s complex world. Our culture thrives on finding new and better ways to accelerate what’s next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career our culture will embrace you. Open up opportunities with HPE. Job Description What you’ll do: BS/MS in Computer Science or Data Science, Electrical Engineering, Statistics, Applied Math or equivalent fields with strong mathematical background General understanding of machine learning techniques and algorithms, including clustering, anomaly detection, optimization, Neural network, Graph ML, etc Experience building data science-driven solutions including data collection, feature selection, model training, post-deployment validation Strong hands-on coding skills (preferably in Python) processing large-scale data set and developing machine learning models Familiar with one or more machine learning or statistical modeling tools such as Numpy, ScikitLearn, MLlib, Tensorflow Works well in a team setting and is self-driven Desired Experience What you need to bring: Experience with some/equivalent: AWS, Flink, Spark, Kafka, Elastic Search, Kubeflow Knowledge with NLP technology Demonstrable problem-solving ability Conceptual understanding of system design concepts Responsibilities Collaborate with team to understand feature, work with domain experts to identify relevant “signals” during feature engineering, deliver generic 
and performant ML solutions Keep up to date with newest technology trends Communicate results and ideas to key decision makers Implement new statistical or other mathematical methodologies as needed for specific models or analysis Optimize joint development efforts through appropriate database use and project design Additional Skills Cloud Architectures, Cross Domain Knowledge, Design Thinking, Development Fundamentals, DevOps, Distributed Computing, Microservices Fluency, Full Stack Development, Security-First Mindset, Solutions Design, Testing & Automation, User Experience (UX) What We Can Offer You Health & Wellbeing We strive to provide our team members and their loved ones with a comprehensive suite of benefits that supports their physical, financial and emotional wellbeing. Personal & Professional Development We also invest in your career because the better you are, the better we all are. We have specific programs catered to helping you reach any career goals you have — whether you want to become a knowledge expert in your field or apply your skills to another division. Unconditional Inclusion We are unconditionally inclusive in the way we work and celebrate individual uniqueness. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. Let's Stay Connected Follow @HPECareers on Instagram to see the latest on people, culture and tech at HPE. #india #networking Job Engineering Job Level TCP_02 HPE is an Equal Employment Opportunity/ Veterans/Disabled/LGBT employer. We do not discriminate on the basis of race, gender, or any other protected category, and all decisions we make are made on the basis of qualifications, merit, and business need. Our goal is to be one global team that is representative of our customers, in an inclusive environment where we can continue to innovate and grow together. Please click here: Equal Employment Opportunity. 
Hewlett Packard Enterprise is EEO Protected Veteran/ Individual with Disabilities. HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.
Posted 4 days ago
7.0 - 11.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Skill required: Delivery - Advanced Analytics Designation: I&F Decision Sci Practitioner Specialist Qualifications: Master of Engineering/Master's in Business Economics Years of Experience: 7 to 11 years About Accenture Accenture is a global professional services company with leading capabilities in digital, cloud and security. Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song— all powered by the world’s largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. We embrace the power of change to create value and shared success for our clients, people, shareholders, partners and communities. Visit us at www.accenture.com What would you do? Data & AI You will be a core member of Accenture Operations global Data & AI group, an energetic, strategic, high-visibility and high-impact team, to innovate and transform the Accenture Operations business using machine learning, advanced analytics to support data-driven decisioning. What are we looking for? Extensive experience in leading Data Science and Advanced Analytics delivery teams Strong statistical programming experience – Python; working knowledge of cloud-native platforms like AWS SageMaker, Azure, or GCP is preferred Experience working with large data sets and big data tools like AWS, SQL, PySpark, etc. 
Solid knowledge of at least two of the following – Supervised and Unsupervised Learning, Classification, Regression, Clustering, Neural Networks, Ensemble Modelling (random forest, boosted tree, etc) Experience in working with Pricing models is a plus Experience in at least one of these business domains: Energy, CPG, Retail, Marketing Analytics, Customer Analytics, Digital Marketing, eCommerce, Health, Supply Chain Extensive experience in client engagement and business development Ability to work in a global collaborative team environment Quick learner, able to independently deliver results. Qualifications: Master's / Ph.D. in Computer Science, Engineering, Statistics, Mathematics, Economics or related disciplines. Roles and Responsibilities: Building data science models to uncover deeper insights, predict future outcomes, and optimize business processes for clients. Utilizing advanced statistical and machine learning techniques to develop models that can assist in decision-making and strategic planning. Refining and improving data science models based on feedback, new data, and evolving business needs. Data Scientists in Operations follow multiple approaches for project execution from adapting existing assets to Operations use cases, exploring third-party and open-source solutions for speed to execution and for specific use cases to engaging in fundamental research to develop novel solutions. Data Scientists are expected to collaborate with other data scientists, subject matter experts, sales, and delivery teams from Accenture locations around the globe to deliver strategic advanced machine learning / data-AI solutions from design to deployment.
Posted 4 days ago
3.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Project Role : Software Development Engineer Project Role Description : Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work. Must have skills : Microsoft Power Business Intelligence (BI) Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Software Development Engineer, you will analyze, design, code, and test multiple components of application code across one or more clients. You will perform maintenance, enhancements, and/or development work. Your typical day will involve analyzing requirements, designing software solutions, writing code, and conducting testing to ensure the quality of the application. You will collaborate with team members and actively participate in discussions to provide solutions to work-related problems. Your role will require you to work independently and become a subject matter expert in your field. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute in providing solutions to work related problems. - Analyze requirements and design software solutions. - Write code to implement software solutions. - Conduct testing to ensure the quality of the application. - Collaborate with team members to provide solutions to work-related problems. - Participate in team discussions and contribute actively. - Provide maintenance, enhancements, and/or development work. - Stay updated with the latest industry trends and technologies. - Assist in troubleshooting and resolving software defects. - Document software designs, code, and test cases. Professional & Technical Skills: - Must To Have Skills: Proficiency in Microsoft Power Business Intelligence (BI). - Good To Have Skills: Experience with ASP.NET MVC. 
- Strong understanding of statistical analysis and machine learning algorithms. - Experience with data visualization tools such as Tableau or Power BI. - Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms. - Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity. Additional Information: - The candidate should have a minimum of 3 years of experience in Microsoft Power Business Intelligence (BI). - This position is based at our Mumbai office. - A 15 years full-time education is required.
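The data-munging steps the posting names (cleaning, transformation, normalization) can be sketched in a few lines of plain Python; the input values here are made up:

```python
# Minimal data-munging sketch: drop missing entries, then min-max
# normalize the survivors to the [0, 1] range. Toy input only.
def clean_and_normalize(values):
    cleaned = [v for v in values if v is not None]   # cleaning: drop missing
    lo, hi = min(cleaned), max(cleaned)
    if hi == lo:
        # constant column: nothing to scale, map everything to 0
        return [0.0 for _ in cleaned]
    return [(v - lo) / (hi - lo) for v in cleaned]   # normalization

normalized = clean_and_normalize([10, None, 20, 40, None, 30])
```

In a BI pipeline the same operations would typically run in Power Query or pandas, but the arithmetic is unchanged.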
Posted 4 days ago
8.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Description: Senior Full Stack Developer Position: Senior Full Stack Developer Location: Gurugram Relevant Experience Required: 8+ years Employment Type: Full-time About The Role We are looking for a Senior Full Stack Developer who can build end-to-end web applications with strong expertise in both front-end and back-end development. The role involves working with Django, Node.js, React, and modern database systems (SQL, NoSQL, and Vector Databases), while leveraging real-time data streaming, AI-powered integrations, and cloud-native deployments. The ideal candidate is a hands-on technologist with a passion for modern UI/UX, scalability, and performance optimization. Key Responsibilities Front-End Development Build responsive and user-friendly interfaces using HTML5, CSS3, JavaScript, and React. Implement modern UI frameworks such as Next.js, Tailwind CSS, Bootstrap, or Material-UI. Create interactive charts and dashboards with D3.js, Recharts, Highcharts, or Plotly. Ensure cross-browser compatibility and optimize for performance and accessibility. Collaborate with designers to translate wireframes and prototypes into functional components. Back-End Development Develop RESTful & GraphQL APIs with Django/DRF and Node.js/Express. Design and implement microservices & event-driven architectures. Optimize server performance and ensure secure API integrations. Database & Data Management Work with structured (PostgreSQL, MySQL) and unstructured databases (MongoDB, Cassandra, DynamoDB). Integrate and manage Vector Databases (Pinecone, Milvus, Weaviate, Chroma) for AI-powered search and recommendations. Implement sharding, clustering, caching, and replication strategies for scalability. Manage both transactional and analytical workloads efficiently. Real-Time Processing & Visualization Implement real-time data streaming with Apache Kafka, Pulsar, or Redis Streams. Build live features (e.g., notifications, chat, analytics) using WebSockets & Server-Sent Events (SSE). 
Visualize large-scale data in real time for dashboards and BI applications. DevOps & Deployment Deploy applications on cloud platforms (AWS, Azure, GCP). Use Docker, Kubernetes, Helm, and Terraform for scalable deployments. Maintain CI/CD pipelines with GitHub Actions, Jenkins, or GitLab CI. Monitor, log, and ensure high availability with Prometheus, Grafana, ELK/EFK stack. Good To Have AI & Advanced Capabilities Integrate state-of-the-art AI/ML models for personalization, recommendations, and semantic search. Implement Retrieval-Augmented Generation (RAG) pipelines with embeddings. Work on multimodal data processing (text, image, and video). Preferred Skills & Qualifications Core Stack Front-End: HTML5, CSS3, JavaScript, TypeScript, React, Next.js, Tailwind CSS/Bootstrap/Material-UI Back-End: Python (Django/DRF), Node.js/Express Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB, Vector Databases (Pinecone, Milvus, Weaviate, Chroma) APIs: REST, GraphQL, gRPC State-of-the-Art & Advanced Tools Streaming: Apache Kafka, Apache Pulsar, Redis Streams Visualization: D3.js, Highcharts, Plotly, Deck.gl Deployment: Docker, Kubernetes, Helm, Terraform, ArgoCD Cloud: AWS Lambda, Azure Functions, Google Cloud Run Monitoring: Prometheus, Grafana, OpenTelemetry
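Of the scalability strategies the posting lists (sharding, clustering, caching, replication), sharding is often implemented with consistent hashing so that adding or removing a database node remaps only a small fraction of keys. A stdlib-only sketch; node names and virtual-node count are arbitrary:

```python
import hashlib
from bisect import bisect

# Consistent-hash ring sketch for shard routing. Each physical node is
# placed on the ring many times ("virtual nodes") to smooth the key
# distribution; a key routes to the first node clockwise from its hash.
class HashRing:
    def __init__(self, nodes, vnodes=100):
        self.ring = sorted(
            (self._hash(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, key):
        i = bisect(self.keys, self._hash(key)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["db-a", "db-b", "db-c"])
shard = ring.node_for("user:42")  # routing is deterministic per key
```

This is the routing idea behind partitioned stores like Cassandra and DynamoDB mentioned in the stack above, reduced to its core.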
Posted 4 days ago
6.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Description: Senior MLOps Engineer Position: Senior MLOps Engineer Location: Gurugram Relevant Experience Required: 6+ years Employment Type: Full-time About The Role We are seeking a Senior MLOps Engineer with deep expertise in Machine Learning Operations, Data Engineering, and Cloud-Native Deployments . This role requires building and maintaining scalable ML pipelines , ensuring robust data integration and orchestration , and enabling real-time and batch AI systems in production. The ideal candidate will be skilled in state-of-the-art MLOps tools , data clustering , big data frameworks , and DevOps best practices , ensuring high reliability, performance, and security for enterprise AI workloads. Key Responsibilities MLOps & Machine Learning Deployment Design, implement, and maintain end-to-end ML pipelines from experimentation to production. Automate model training, evaluation, versioning, deployment, and monitoring using MLOps frameworks. Implement CI/CD pipelines for ML models (GitHub Actions, GitLab CI, Jenkins, ArgoCD). Monitor ML systems in production for drift detection, bias, performance degradation, and anomaly detection. Integrate feature stores (Feast, Tecton, Vertex AI Feature Store) for standardized model inputs. Data Engineering & Integration Design and implement data ingestion pipelines for structured, semi-structured, and unstructured data. Handle batch and streaming pipelines with Apache Kafka, Apache Spark, Apache Flink, Airflow, or Dagster. Build ETL/ELT pipelines for data preprocessing, cleaning, and transformation. Implement data clustering, partitioning, and sharding strategies for high availability and scalability. Work with data warehouses (Snowflake, BigQuery, Redshift) and data lakes (Delta Lake, Lakehouse architectures). Ensure data lineage, governance, and compliance with modern tools (DataHub, Amundsen, Great Expectations). 
Cloud & Infrastructure
- Deploy ML workloads on AWS, Azure, or GCP using Kubernetes (K8s) and serverless computing (AWS Lambda, GCP Cloud Run).
- Manage containerized ML environments with Docker, Helm, Kubeflow, MLflow, Metaflow.
- Optimize for cost, latency, and scalability across distributed environments.
- Implement infrastructure as code (IaC) with Terraform or Pulumi.

Real-Time ML & Advanced Capabilities
- Build low-latency, real-time inference pipelines using gRPC, Triton Inference Server, or Ray Serve.
- Integrate vector databases (Pinecone, Milvus, Weaviate, Chroma) for AI-powered semantic search.
- Enable retrieval-augmented generation (RAG) pipelines for LLMs.
- Optimize ML serving with GPU/TPU acceleration and ONNX/TensorRT model optimization.

Security, Monitoring & Observability
- Implement robust access control, encryption, and compliance with SOC 2/GDPR/ISO 27001.
- Monitor system health with Prometheus, Grafana, ELK/EFK, and OpenTelemetry.
- Ensure zero-downtime deployments with blue-green/canary release strategies.
- Manage audit trails and explainability for ML models.

Preferred Skills & Qualifications

Core Technical Skills
- Programming: Python (Pandas, PySpark, FastAPI), SQL, Bash; familiarity with Go or Scala a plus.
- MLOps Frameworks: MLflow, Kubeflow, Metaflow, TFX, BentoML, DVC.
- Data Engineering Tools: Apache Spark, Flink, Kafka, Airflow, Dagster, dbt.
- Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB.
- Vector Databases: Pinecone, Weaviate, Milvus, Chroma.
- Visualization: Plotly Dash, Superset, Grafana.

Tech Stack
- Orchestration: Kubernetes, Helm, Argo Workflows, Prefect.
- Infrastructure as Code: Terraform, Pulumi, Ansible.
- Cloud Platforms: AWS (SageMaker, S3, EKS), GCP (Vertex AI, BigQuery, GKE), Azure (ML Studio, AKS).
- Model Optimization: ONNX, TensorRT, Hugging Face Optimum.
- Streaming & Real-Time ML: Kafka, Flink, Ray, Redis Streams.
- Monitoring & Logging: Prometheus, Grafana, ELK, OpenTelemetry.
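The production drift monitoring this posting asks for usually compares live feature distributions against a training-time reference. A minimal sketch using a two-sample Kolmogorov-Smirnov test (the library choice, threshold, and function name are illustrative assumptions; dedicated tools such as Evidently or whylogs wrap the same idea):

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference, live, alpha=0.05):
    """Flag distribution drift between a training-time reference sample
    and a live production sample using a two-sample KS test."""
    stat, p_value = ks_2samp(reference, live)
    return {"statistic": float(stat),
            "p_value": float(p_value),
            "drift": bool(p_value < alpha)}

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5000)  # feature values seen at training time
shifted = rng.normal(0.5, 1.0, 5000)    # production values after a mean shift

print(detect_drift(reference, shifted))  # "drift": True for a shift this large
```

In a real pipeline, a check like this would run per feature on a schedule (e.g., via Airflow) and page the team or trigger retraining when drift is flagged.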
Posted 4 days ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Position Overview:
Here at ShyftLabs, we are looking for an experienced Data Scientist who can drive performance improvements and cost efficiency in our product through a deep understanding of ML and infrastructure systems, and who can provide data-driven insights and scientific solutions. ShyftLabs is a growing data product company founded in early 2020 that works primarily with Fortune 500 companies. We deliver digital solutions built to accelerate business growth across industries by focusing on creating value through innovation.

Job Responsibilities:
- Data Analysis and Research: Analyze large datasets with queries and scripts, extract valuable signals from noise, and produce actionable insights into how we can improve a complex ML and bidding system.
- Simulation and Modelling: Validate and quantify the efficiency and performance gains from hypotheses through rigorous simulation and modelling.
- Experimentation and Causal Inference: Develop a robust experiment design and metric framework, and provide reliable, unbiased insights for product and business decision-making.

Basic Qualifications:
- Master's degree in a quantitative discipline or equivalent
- 3+ years of professional experience
- Distinctive problem-solving skills: good at articulating product questions, pulling data from large datasets, and using statistics to arrive at a recommendation
- Excellent verbal and written communication skills, with the ability to present information and analysis results effectively
- Ability to build positive relationships within ShyftLabs and with our stakeholders, and to work effectively with cross-functional partners in a global company
- Statistics: strong knowledge of and experience with experimental design, hypothesis testing, and statistical analysis techniques such as regression and linear models
- Machine Learning: deep understanding of ML algorithms (e.g., deep learning, random forests, gradient-boosted trees, k-means clustering) and their development, validation, and evaluation
- Programming: experience with Python, R, or another scripting language, and with a database language (e.g., SQL) or data-manipulation library (e.g., Pandas)

We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
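The experimentation work described above often reduces to a two-sample comparison between a control and a treatment arm. A minimal sketch on simulated data (the metric, effect size, sample sizes, and significance threshold are all illustrative assumptions, not ShyftLabs specifics):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
# Simulated per-user metric for each experiment arm
control = rng.normal(loc=0.100, scale=0.05, size=10_000)
treatment = rng.normal(loc=0.102, scale=0.05, size=10_000)

# Welch's t-test: does the treatment shift the mean of the metric?
stat, p_value = ttest_ind(treatment, control, equal_var=False)
lift = treatment.mean() - control.mean()
print(f"lift={lift:.4f}, p={p_value:.4f}, significant={p_value < 0.05}")
```

A production experimentation framework layers randomization checks, multiple-testing corrections, and variance-reduction techniques on top of this basic comparison.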
Posted 4 days ago
0 years
0 Lacs
Telangana, India
Remote
Location: UK / Europe (Remote or Hybrid)

We are looking for a highly analytical and technically skilled Optimization Specialist to build and scale mathematical models and process optimization. Candidates must have a strong grounding in operations research, simulation, and/or machine learning.

Responsibilities:
- Design and implement optimization models (e.g., LP, MIP, MILP) for real-world scheduling, routing, and planning scenarios.
- Build simulation models to evaluate performance under uncertainty (e.g., disruptions, variable demand).
- Collaborate with data scientists, software engineers, and business analysts to refine problem definitions and translate them into quantitative models.
- Use AI/ML algorithms (forecasting, clustering, classification) to drive predictive optimization workflows.
- Conduct what-if analyses and sensitivity testing to support decision-making.
- Support research and pilot initiatives involving Quantum-Inspired Optimization (QUBO, hybrid models).
- Present findings and model performance to stakeholders with clear, concise visualizations and documentation.

Requirements:
- Strong knowledge of Linear Programming (LP), Mixed Integer Programming (MIP/MILP), and Constraint Programming; experience formulating and solving large-scale combinatorial problems.
- Hands-on experience with optimization libraries such as Pyomo, PuLP, or Google OR-Tools, and solvers such as CPLEX, Gurobi, GLPK, or CBC.
- Programming proficiency in Python, R, or MATLAB.
- Simulation expertise in SimPy, AnyLogic, Arena, or equivalent tools for discrete-event or agent-based simulation.
- Applied machine learning knowledge: forecasting models, clustering, and model evaluation metrics (e.g., MAPE, RMSE).
- Clear communication and documentation skills to present models and insights to non-technical audiences.
- Educational background: PhD in Operations Research, Industrial Engineering, Applied Mathematics, Computer Science, or a related field.
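The LP formulation work this role centres on looks like the following in PuLP (one of the libraries the posting names); the two-product production-planning model is a toy illustration, with made-up coefficients:

```python
from pulp import LpMaximize, LpProblem, LpVariable, value, PULP_CBC_CMD

# Tiny production-planning LP: choose quantities of two products
# to maximize profit under two shared-resource constraints.
prob = LpProblem("production_plan", LpMaximize)
x = LpVariable("product_a", lowBound=0)
y = LpVariable("product_b", lowBound=0)

prob += 3 * x + 2 * y, "profit"          # objective: maximize profit
prob += x + y <= 4, "machine_hours"      # resource constraint 1
prob += x + 3 * y <= 6, "labour_hours"   # resource constraint 2

prob.solve(PULP_CBC_CMD(msg=0))          # CBC is PuLP's bundled solver
print(value(x), value(y), value(prob.objective))  # optimum: 4.0 0.0 12.0
```

Real scheduling and routing models differ mainly in scale (thousands of variables, integrality constraints), which is where the commercial solvers listed above come in.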
Posted 4 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Senior IT Architect Engineer (VMware)

This is a hands-on role for a proactive and adaptable Senior IT Architect Engineer, blending daily support with project delivery. You'll resolve complex issues, drive architectural initiatives, and champion best practices with a focus on ownership and independent problem-solving.

What You'll Do:
- Provide Tier 2/3 support for core infrastructure (Windows, VMware, Linux).
- Own technical issues from incident to resolution.
- Deliver solutions aligned with architectural designs and project needs.
- Recommend and implement infrastructure improvements.
- Support backup, recovery, and disaster recovery operations.
- Collaborate on new infrastructure and cloud platform rollouts.
- Document systems, processes, and configurations.
- Explore and implement emerging technologies.
- Take initiative and contribute to a culture of mentorship and shared learning.

Technologies You'll Use:
- Virtualization: VMware vSphere, ESXi clustering, vCenter, NSX; AWS, GCP, Azure.
- Operating Systems: Windows Server, Linux (RHEL/Ubuntu).
- Identity & Access: Active Directory, Okta, Azure AD.
- Backup & Recovery: Veeam (configuration, policy, restore, DR).
- Scripting & Automation: PowerShell, Bash, Puppet.

What You Need:
- 5+ years in IT infrastructure engineering.
- Strong hands-on expertise in VMware vSphere, ESXi clustering, vCenter, NSX, and Linux.
- Solid troubleshooting across Windows, Linux, VMware, and automated environments.
- Good working knowledge of Active Directory, DNS, and Group Policy.
- Proficiency with Veeam (backup policy design, recovery, DR).
- Strong scripting/automation experience (PowerShell, Bash, or Puppet).
- Detail-oriented with strong documentation skills.
- Self-motivated, independent, and effective as both a mentor and learner.
Posted 4 days ago
4.0 - 7.0 years
7 - 11 Lacs
Pune
Work from Office
- Good experience in Oracle database migration, management, architecture discovery, patching/upgrades, and handling very large databases.
- PostgreSQL database migration, patching, and management.
- Strong knowledge of installing/managing RAC, cloud DB management and migration, clustering, configuring log shipping and mirroring, and PostgreSQL HA.
- Comfortable setting up and managing all backup and recovery strategies.
- Good experience in performance tuning and troubleshooting.
- Excellent communication skills and a customer-support-focused attitude.
- Knowledge of multiple database technologies and cloud databases is a plus.
- Strong knowledge of ITIL practices and ITSM tools.
Posted 4 days ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen’s Mission of Serving Patients

At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas—Oncology, Inflammation, General Medicine, and Rare Disease—we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Data Science Engineer

What You Will Do
Let’s do this. Let’s change the world. In this vital role, we are seeking a highly skilled Machine Learning Engineer with a strong MLOps background to join our team. You will play a pivotal role in building and scaling our machine learning models from development to production. Your expertise in both machine learning and operations will be essential in creating efficient and reliable ML pipelines.

Roles & Responsibilities:
- Collaborate with data scientists to develop, train, and evaluate machine learning models.
- Build and maintain MLOps pipelines, including data ingestion, feature engineering, model training, deployment, and monitoring.
- Leverage cloud platforms (AWS, GCP, Azure) for ML model development, training, and deployment.
- Implement DevOps/MLOps best practices to automate ML workflows and improve efficiency.
- Develop and implement monitoring systems to track model performance and identify issues.
- Conduct A/B testing and experimentation to optimize model performance.
- Work closely with data scientists, engineers, and product teams to deliver ML solutions.
- Stay updated with the latest trends and advancements.

What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree / Bachelor's degree and 5 to 9 years of experience in [Job Code’s Discipline and/or Sub-Discipline]

Functional Skills:

Must-Have Skills:
- Solid foundation in machine learning algorithms and techniques
- Experience with MLOps practices and tools (e.g., MLflow, Kubeflow, Airflow) and DevOps tools (e.g., Docker, Kubernetes, CI/CD)
- Proficiency in Python and relevant ML libraries (e.g., TensorFlow, PyTorch, Scikit-learn)
- Outstanding analytical and problem-solving skills; ability to learn quickly; good communication and interpersonal skills

Good-to-Have Skills:
- Experience with big data technologies (e.g., Spark, Hadoop) and performance tuning in query and data processing
- Experience with data engineering and pipeline development
- Experience with statistical techniques and hypothesis testing, including regression analysis, clustering, and classification
- Knowledge of NLP techniques for text analysis and sentiment analysis
- Experience analyzing time-series data for forecasting and trend analysis

What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team.
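The develop-train-evaluate loop this role supports is commonly packaged as a single pipeline object so that preprocessing travels with the model from training to deployment. A minimal scikit-learn sketch on synthetic data (the dataset, model choice, and metric are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a real training set
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Scaling + model as one pipeline: the same transformation is
# applied identically at training time and at inference time
model = make_pipeline(StandardScaler(), RandomForestClassifier(random_state=0))

# 5-fold cross-validated accuracy as the evaluation step
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"accuracy: {scores.mean():.3f} (+/- {scores.std():.3f})")
```

In an MLOps setting, a run like this would typically be tracked (e.g., with MLflow) and the fitted pipeline serialized as the deployable artifact.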
careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 4 days ago
6.0 - 8.0 years
3 - 7 Lacs
Bhiwani
Work from Office
We are looking for a skilled Senior Analyst to join our team at eClerx Services Ltd., with 6-8 years of experience in the IT Services & Consulting industry. The ideal candidate will have a strong background in analysis and problem-solving, with excellent communication skills. Roles and Responsibility Conduct thorough analysis of complex data sets to identify trends and patterns. Develop and implement effective analytical processes to drive business growth. Collaborate with cross-functional teams to provide insights and recommendations. Design and maintain databases and systems to support business intelligence initiatives. Develop and deliver reports and presentations to stakeholders. Stay up-to-date with industry trends and emerging technologies. Job Requirements Strong understanding of analytical principles and methodologies. Excellent communication and interpersonal skills. Ability to work in a fast-paced environment and meet deadlines. Proficiency in analytical tools and software. Strong problem-solving and critical thinking skills. Ability to collaborate effectively with others.
Posted 4 days ago