4.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
Analyst, Inclusive Innovation & Analytics, Center for Inclusive Growth

The Center for Inclusive Growth is the social impact hub at Mastercard. The organization seeks to ensure that the benefits of an expanding economy accrue to all segments of society. Through actionable research, impact data science, programmatic grants, stakeholder engagement and global partnerships, the Center advances equitable and sustainable economic growth and financial inclusion around the world. The Center's work is at the heart of Mastercard's objective to be a force for good in the world.

Reporting to the Vice President, Inclusive Innovation & Analytics, the Analyst will 1) create and/or scale data, data science, and AI solutions, methodologies, products, and tools to advance inclusive growth and the field of impact data science, 2) work on the execution and implementation of key priorities to advance external and internal data for social strategies, and 3) manage operations to ensure operational excellence across the Inclusive Innovation & Analytics team.

Key Responsibilities

Data Analysis & Insight Generation
- Design, develop, and scale data science and AI solutions, tools, and methodologies to support inclusive growth and impact data science.
- Analyze structured and unstructured datasets to uncover trends, patterns, and actionable insights related to economic inclusion, public policy, and social equity.
- Translate analytical findings into compelling visualizations and dashboards that inform policy, program design, and strategic decision-making.
- Create dashboards, reports, and visualizations that communicate findings to both technical and non-technical audiences.
- Provide data-driven support for convenings involving philanthropy, government, private sector, and civil society partners.

Data Integration & Operationalization
- Assist in building and maintaining data pipelines for ingesting and processing diverse data sources (e.g., open data, text, survey data).
- Ensure data quality, consistency, and compliance with privacy and ethical standards.
- Collaborate with data engineers and AI developers to support backend infrastructure and model deployment.

Team Operations
- Manage team operations, meeting agendas, project management, and strategic follow-ups to ensure alignment with organizational goals.
- Lead internal reporting processes, including the preparation of dashboards, performance metrics, and impact reports.
- Support team budgeting, financial tracking, and process optimization.
- Support grantees and grants management as needed.
- Develop briefs, talking points, and presentation materials for leadership and external engagements.
- Translate strategic objectives into actionable data initiatives and track progress against milestones.
- Coordinate key activities and priorities in the portfolio, working across teams at the Center and the business as applicable to facilitate collaboration and information sharing.
- Support the revamp of the Measurement, Evaluation, and Learning frameworks and workstreams at the Center.
- Provide administrative support as needed.
- Manage ad-hoc projects and events organization.

Qualifications
- Bachelor's degree in Data Science, Statistics, Computer Science, Public Policy, or a related field.
- 2–4 years of experience in data analysis, preferably in a mission-driven or interdisciplinary setting.
- Strong proficiency in Python and SQL; experience with data visualization tools (e.g., Tableau, Power BI, Looker, Plotly, Seaborn, D3.js).
- Familiarity with unstructured data processing and machine learning concepts.
- Excellent communication skills and ability to work across technical and non-technical teams.

Technical Skills & Tools

Data Wrangling & Processing
- Data cleaning, transformation, and normalization techniques
- Pandas, NumPy, Dask, Polars
- Regular expressions, JSON/XML parsing, web scraping (e.g., BeautifulSoup, Scrapy)

Machine Learning & Modeling
- Scikit-learn, XGBoost, LightGBM
- Proficiency in supervised/unsupervised learning, clustering, classification, regression
- Familiarity with LLM workflows and tools like Hugging Face Transformers and LangChain (a plus)

Visualization & Reporting
- Power BI, Tableau, Looker
- Python libraries: Matplotlib, Seaborn, Plotly, Altair
- Dashboarding tools: Streamlit, Dash
- Storytelling with data and stakeholder-ready reporting

Cloud & Collaboration Tools
- Google Cloud Platform (BigQuery, Vertex AI), Microsoft Azure
- Git/GitHub, Jupyter Notebooks, VS Code
- Experience with APIs and data integration tools (e.g., Airflow, dbt)

Ideal Candidate
You are a curious and collaborative analyst who believes in the power of data to drive social change. You're excited to work with cutting-edge tools while staying grounded in the real-world needs of communities and stakeholders.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard's security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.
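To illustrate the day-to-day analysis this role describes, here is a minimal Python sketch that summarizes a hypothetical financial-inclusion survey with Pandas and exports a chart with Matplotlib. The file name and column names are assumptions for the example, not details from the posting.

```python
# Minimal sketch: summarize a hypothetical financial-inclusion survey and plot it.
# The file name and column names below are illustrative assumptions.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("inclusion_survey.csv")  # assumed columns: region, has_bank_account, income_bracket

# Share of respondents with a bank account, by region
summary = (
    df.groupby("region")["has_bank_account"]
      .mean()
      .sort_values(ascending=False)
)

summary.plot(kind="bar", ylabel="Share with bank account", title="Account ownership by region")
plt.tight_layout()
plt.savefig("account_ownership_by_region.png")  # export for a report or dashboard
```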
Posted 4 days ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
Senior Analyst, Big Data Analytics & Engineering

Overview
Job Title: Sr. Analyst, Data Engineering, Value Quantification Team (Based in Pune, India)

About Mastercard
Mastercard is a global technology leader in the payments industry, committed to powering an inclusive, digital economy that benefits everyone, everywhere. By leveraging secure data, cutting-edge technology, and innovative solutions, we empower individuals, financial institutions, governments, and businesses to achieve their potential. Our culture is driven by our Decency Quotient (DQ), ensuring inclusivity, respect, and integrity guide everything we do. Operating across 210+ countries and territories, Mastercard is dedicated to building a sustainable world with priceless opportunities for all.

Position Overview
This is a techno-functional position that combines strong technical skills with a deep understanding of business needs and requirements, and calls for 5-7 years of experience. The role focuses on developing and maintaining advanced data engineering solutions for pre-sales value quantification within the Services business unit. As a Sr. Analyst, you will be responsible for creating and optimizing data pipelines, managing large datasets, and ensuring the integrity and accessibility of data to support Mastercard's internal teams in quantifying the value of services, enhancing customer engagement, and driving business outcomes. The role requires close collaboration across teams to ensure data solutions meet business needs and deliver measurable impact.

Role Responsibilities
- Data Engineering & Pipeline Development: Develop and maintain robust data pipelines to support the value quantification process. Utilize tools such as Apache NiFi, Azure Data Factory, Pentaho, Talend, SSIS, and Alteryx to ensure efficient data integration and transformation.
- Data Management and Analysis: Manage and analyze large datasets using SQL, Hadoop, and other database management systems. Perform data extraction, transformation, and loading (ETL) to support value quantification efforts.
- Advanced Analytics Integration: Use advanced analytics techniques, including machine learning algorithms, to enhance data processing and generate actionable insights. Leverage programming languages such as Python (Pandas, NumPy, PySpark) and Impala for data analysis and model development.
- Business Intelligence and Reporting: Utilize business intelligence platforms such as Tableau and Power BI to create insightful dashboards and reports that communicate the value of services. Generate actionable insights from data to inform strategic decisions and provide clear, data-backed recommendations.
- Cross-Functional Collaboration & Stakeholder Engagement: Collaborate with Sales, Marketing, Consulting, Product, and other internal teams to understand business needs and ensure successful data solution development and deployment.
Communicate insights and data value through compelling presentations and dashboards to senior leadership and internal teams, ensuring tool adoption and usage.

All About You
- Data Engineering Expertise: Proficiency in data engineering tools and techniques to develop and maintain data pipelines. Experience with data integration tools such as Apache NiFi, Azure Data Factory, Pentaho, Talend, SSIS, and Alteryx.
- Advanced SQL Skills: Strong skills in SQL for querying and managing large datasets. Experience with database management systems and data warehousing solutions.
- Programming Proficiency: Knowledge of programming languages such as Python (Pandas, NumPy, PySpark) and Impala for data analysis and model development.
- Business Intelligence and Reporting: Experience in creating insightful dashboards and reports using business intelligence platforms such as Tableau and Power BI.
- Statistical Analysis: Ability to perform statistical analysis to identify trends, correlations, and insights that support strategic decision-making.
- Cross-Functional Collaboration: Strong collaboration skills to work effectively with Sales, Marketing, Consulting, Product, and other internal teams to understand business needs and ensure successful data solution development and deployment.
- Communication and Presentation: Excellent communication skills to convey insights and data value through compelling presentations and dashboards to senior leadership and internal teams.
- Execution Focus: A results-driven mindset with the ability to balance strategic vision with tactical execution, ensuring that data solutions are delivered on time and create measurable business value.

Education
Bachelor's degree in Data Science, Computer Science, Business Analytics, Economics, Finance, or a related field. Advanced degrees or certifications in analytics, data science, AI/ML, or an MBA are preferred.

Why Us?
At Mastercard, you'll have the opportunity to shape the future of internal operations by leading the development of data engineering solutions that empower teams across the organization. Join us to make a meaningful impact, drive business outcomes, and help Mastercard's internal teams create better customer engagement strategies through innovative value-based ROI narratives.

Location: Gurgaon/Pune, India
Employment Type: Full-Time

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard's security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.
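As a rough illustration of the ETL work described above, the following Pandas/SQLAlchemy sketch extracts a CSV, aggregates it, and loads the result into a SQL table. The file, table, and column names are invented for the example, and SQLite stands in for a real warehouse connection.

```python
# Minimal ETL sketch: extract a CSV, apply a transformation, load into a SQL table.
# File, table, and column names are illustrative assumptions.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("sqlite:///value_quant.db")  # stand-in for a warehouse connection

# Extract
raw = pd.read_csv("service_usage.csv")  # assumed columns: customer_id, service, revenue, date

# Transform: clean types and aggregate revenue per customer and service
raw["date"] = pd.to_datetime(raw["date"])
agg = (
    raw.dropna(subset=["customer_id", "revenue"])
       .groupby(["customer_id", "service"], as_index=False)["revenue"]
       .sum()
)

# Load
agg.to_sql("service_value", engine, if_exists="replace", index=False)
```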
Posted 4 days ago
2.0 years
0 Lacs
Kochi, Kerala, India
On-site
Job Description

About the Role
As a Software Developer you will be responsible for transforming the design and the product vision into working products and will directly influence how users interact with the system. You will design, develop, ship, and maintain features with guidance from more experienced engineers. You will work hard to stay on top of fast-changing technology and invest a lot of energy in creating a clean interface with a great user experience.

Responsibilities
- Complete individual tasks without much guidance.
- Deliver good-quality code, tests, and documentation.
- Actively participate in code reviews and scrum activities.
- React well to constructive feedback.
- Unblock yourself, on your own or with the help of another engineer.
- Work closely with product managers and other stakeholders to gather requirements and translate them into technical specifications.
- Continuously learn and stay updated with emerging technologies and industry trends.
- Contribute to the overall improvement of the team's processes and practices.

Added responsibilities, before you level up to the next role:
- You work on projects delivering whole features (Full Stack), or you are a subject matter expert in your respective area (Frontend or Backend).
- You own the technical specification and collaborate with other teams to deliver. You are mostly independent.
- You help your colleagues through code reviews or by providing constructive feedback on technical decisions.

Key Skills
- 2+ years of experience as a front-end developer building modern JavaScript-based web applications.
- Experience as a developer on a cross-functional agile team preferred.
- Bachelor's degree in Computer Science, MIS, or Engineering.

Technical Skills
- Skilled in using frontend frameworks like React, Angular, and others.
- Skills in developing with Python (Django, NumPy, Pandas, SciPy, etc.) are an added advantage.
- Expert in HTML, CSS/LESS/SCSS, JavaScript, and responsive design.
- Proficient in native or web-to-native mobile development tools and frameworks.
- Strong understanding of RESTful APIs and their practical application.
- Ability to write effective unit, integration, and end-user automation tests.
- Experience with Python is an added advantage.

Mindset and Attributes
- Strong communication skills with the ability to communicate complex technical concepts and align the organization on decisions.
- Sound problem-solving skills with the ability to quickly process complex information and present it clearly and simply.
- Uses team collaboration to create innovative solutions efficiently.

Additional Information
Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
Posted 4 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Company
Our client is a trusted global innovator of IT and business services. We help clients transform through consulting, industry solutions, business process services, digital & IT modernization and managed services. Our client enables them, as well as society, to move confidently into the digital future. We are committed to our clients' long-term success and combine global reach with local client attention to serve them in over 50 countries around the globe.

Job Title: CUDA Developer
Location: Pan India
Experience: 5+ years
Job Type: Contract to hire
Notice Period: Immediate joiner

Mandatory Skills
● Hands-on experience as a CUDA/C++ and Python developer.
● 5+ years of overall work experience, with at least 3 years of relevant experience in Python and 2+ years in CUDA/C++.
● Strong hands-on experience with Python, especially in scientific computing using PyTorch and NumPy.
● Solid understanding of CUDA programming concepts and C++ fundamentals.
● Demonstrated ability to analyze CUDA kernels and accurately reproduce them in Python.
● Familiarity with GPU computation, parallelism, and performance-aware coding practices.
● Strong debugging skills and attention to numerical consistency when porting logic across languages.
● Experience evaluating AI-generated code or participating in LLM tuning is a plus.

Responsibilities
● Write clean, high-quality, high-performance, maintainable code.
● Develop and support software including applications, database integration, interfaces, and new functionality enhancements.
● Coordinate cross-functionally to ensure the project meets business objectives and compliance standards.
● Support testing and deployment of new products and features.
● Participate in code reviews.

Qualifications
Bachelor's degree in Computer Science (or related field)
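Since the role centers on reproducing CUDA kernels in Python and checking numerical consistency, here is a minimal sketch of that workflow using the standard SAXPY kernel as the example. The kernel and data are illustrative, not client code.

```python
# Minimal sketch: reproduce a simple CUDA kernel in Python and check numerical consistency.
# Reference kernel (SAXPY: y = a*x + y), shown for comparison:
#
# __global__ void saxpy(int n, float a, const float *x, float *y) {
#     int i = blockIdx.x * blockDim.x + threadIdx.x;
#     if (i < n) y[i] = a * x[i] + y[i];
# }
import numpy as np

def saxpy_numpy(a: float, x: np.ndarray, y: np.ndarray) -> np.ndarray:
    # Vectorized equivalent of the per-thread body above
    return a * x + y

def saxpy_reference(a: float, x: np.ndarray, y: np.ndarray) -> np.ndarray:
    # Element-by-element loop mirroring the kernel's indexing, used as a consistency check
    out = y.copy()
    for i in range(x.shape[0]):
        out[i] = a * x[i] + out[i]
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal(1024, dtype=np.float32)
y = rng.standard_normal(1024, dtype=np.float32)

assert np.allclose(saxpy_numpy(2.0, x, y), saxpy_reference(2.0, x, y), rtol=1e-6)
```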
Posted 4 days ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
You have over 5 years of experience in building production-grade Neural Network models using Computer Vision or Natural Language Processing techniques. You possess a strong understanding of various machine learning techniques and algorithms, including k-NN, Naive Bayes, SVM, Decision Forests, and Neural Networks. Experience with Deep Learning frameworks like TensorFlow, PyTorch, and MxNet is part of your skillset. Proficiency in common data science toolkits such as R, Sklearn, NumPy, MatLab, and MLib is highly desirable. Additionally, you have solid applied statistics skills encompassing distributions, statistical testing, and regression. Your expertise extends to using query languages like SQL, Hive, Pig, and NoSQL databases. In this role, you will collaborate with Product Managers, Architects, and Engineering Leadership to conceptualize, strategize, and develop new products focused on AI/ML initiatives. You will be responsible for developing, driving, and executing the long-term vision and strategy for the Data Science team by engaging with multiple teams and stakeholders across the organization. Your tasks will involve architecting, designing, and implementing large-scale machine learning systems. Specifically, you will develop Neural Network models for information extraction from mortgage documents using Computer Vision and NLP techniques. Ad-hoc analysis and clear presentation of results to various audiences and key stakeholders will be part of your routine. You will also design experiments, test hypotheses, and build models while conducting advanced data analysis and highly complex algorithm designs. Applying advanced statistical, predictive, and machine learning modeling techniques to enhance multiple real-time decision systems will be a key aspect of your role. You will collaborate with development teams to deploy models in the production environment to support ML-driven product features. Furthermore, you will define business-specific performance metrics to assess model effectiveness and continuously monitor and enhance these metrics over time for models in a production environment. To qualify for this position, you should hold an M.S. in mathematics, statistics, computer science, or a related field, with a Ph.D. degree being preferred. You must have over 5 years of relevant quantitative and qualitative research and analytics experience. Excellent communication skills and the ability to effectively convey complex topics to a diverse audience are essential attributes for this role.,
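As an illustration of the kind of document-classification step such information-extraction work might start from, here is a minimal scikit-learn sketch. The snippets and labels are invented toy data, not mortgage documents, and the model choice is an arbitrary baseline.

```python
# Minimal sketch: classify short document snippets with scikit-learn on a toy labeled set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "Borrower name: Jane Doe, loan amount $250,000",
    "Property address: 42 Elm Street, appraisal attached",
    "Monthly escrow payment schedule for 2024",
    "Appraised value of the property is $310,000",
]
labels = ["borrower_info", "property_info", "payment_info", "property_info"]

# TF-IDF features feeding a linear SVM baseline
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)

print(model.predict(["Loan amount requested: $180,000 by borrower John Smith"]))
```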
Posted 4 days ago
4.0 - 8.0 years
0 Lacs
ahmedabad, gujarat
On-site
As a Python AI/ML Developer with 4-5 years of experience, your main responsibility will be to design, develop, and maintain Python applications while ensuring code quality, efficiency, and scalability. You will collaborate with cross-functional teams to understand project requirements and deliver solutions that align with business objectives. Implementing AI/ML algorithms and models to solve complex problems and extract valuable insights from data will be a key part of your role. Additionally, you will be developing and maintaining RESTful APIs to integrate Python applications with other systems. It is essential to stay updated with the latest trends and technologies in Python development and AI/ML. To excel in this role, you should have a strong proficiency in Python programming, including object-oriented programming and design patterns. Experience with popular Python libraries and frameworks such as NumPy, Pandas, Scikit-learn, TensorFlow, and PyTorch is required. Knowledge of AI/ML concepts, algorithms, and techniques, including supervised and unsupervised learning, is crucial. Experience working with data pipelines and ETL processes is beneficial, and hands-on experience with chatbot applications is necessary. Excellent problem-solving and analytical skills are essential, along with the ability to work independently and as part of a team. Strong communication and documentation skills are also important. Preferred qualifications include experience with cloud platforms such as AWS, GCP, or Azure, knowledge of natural language processing (NLP) or computer vision, experience with machine learning deployment and operationalization, and contributions to open-source Python projects. Stay updated with the latest advancements in technology to enhance your skills and contribute effectively to the development of innovative solutions.,
Posted 4 days ago
3.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY GDS – Data and Analytics (D&A) –Senior - Palantir Job Overview Big Data Developer/Senior Data Engineer with 3 to 6+ years of experience who would display strong analytical, problem-solving, programming, Business KPIs understanding and communication skills. They should be self-learner, detail-oriented team members who can consistently meet deadlines and possess the ability to work independently as needed. He/she must be able to multi-task and demonstrate the ability to work with a diverse work group of stakeholders for healthcare/Life Science/Pharmaceutical domains. Responsibilities And Duties Technical - Design, develop, and maintain data models, integrations, and workflows within Palantir Foundry. Detailed understanding and Hands-on knowledge of Palantir Solutions (e.g., Usecare, DTI, Code Repository, Pipeline Builder etc.) Analysing data within Palantir to extract insights for easy interpretation and Exploratory Data Analysis (e.g., Contour). Querying and Programming Skills: Utilizing programming languages query or scripts (e.g., Python, SQL) to interact with the data and perform analyses. Understanding relational data structures and data modelling to optimize data storage and retrieval based on OLAP engine principles. Distributed Frameworks with Automation using Spark APIs (e.g., PySpark, Spark SQL, RDD/DF) to automate processes and workflows within Palantir with external libraries (e.g., Pandas, NumPy etc.). API Integration: Integrating Palantir with other systems and applications using APIs for seamless data flow. Understanding of integration analysis, specification, and solution design based on different scenarios (e.g., Batch/Realtime Flow, Incremental Load etc.). Optimize data pipelines and finetune Foundry configurations to enhance system performance and efficiency. Unit Testing, Issues Identification, Debugging & Trouble shooting, End user documentation. Strong experience on Data Warehousing, Data Engineering, and Data Modelling problem statements. Knowledge of security related principles by ensuring data privacy and security while working with sensitive information. Familiarity with integrating machine learning and AI capabilities within the Palantir environment for advanced analytics. Non-Technical Collaborate with stakeholders to identify opportunities for continuous improvement, understanding business need and innovation in data processes and solutions. Ensure compliance with policies for data privacy, security, and regulatory requirements. Provide training and support to end-users to maximize the effective use of Palantir Foundry. Self-driven learning of technologies being adopted by the organizational requirements. Work as part of a team or individuals as engineer in a highly collaborative fashion EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. 
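For illustration only, here is a minimal local PySpark sketch of the kind of batch transformation and aggregation this role describes. The sample rows and column names are assumptions, and nothing here is Palantir Foundry-specific.

```python
# Minimal PySpark sketch of a batch transformation, run locally; data is illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.master("local[*]").appName("pipeline-sketch").getOrCreate()

claims = spark.createDataFrame(
    [("P001", "2024-01-03", 120.0), ("P001", "2024-02-10", 80.0), ("P002", "2024-01-15", 200.0)],
    ["patient_id", "claim_date", "amount"],
)

# Incremental-style date filter plus an aggregate per patient
monthly = (
    claims.withColumn("claim_date", F.to_date("claim_date"))
          .filter(F.col("claim_date") >= F.lit("2024-01-01"))
          .groupBy("patient_id")
          .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("n_claims"))
)

monthly.show()
spark.stop()
```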
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 4 days ago
3.0 years
0 Lacs
Kochi, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY GDS – Data and Analytics (D&A) –Senior - Palantir Job Overview Big Data Developer/Senior Data Engineer with 3 to 6+ years of experience who would display strong analytical, problem-solving, programming, Business KPIs understanding and communication skills. They should be self-learner, detail-oriented team members who can consistently meet deadlines and possess the ability to work independently as needed. He/she must be able to multi-task and demonstrate the ability to work with a diverse work group of stakeholders for healthcare/Life Science/Pharmaceutical domains. Responsibilities And Duties Technical - Design, develop, and maintain data models, integrations, and workflows within Palantir Foundry. Detailed understanding and Hands-on knowledge of Palantir Solutions (e.g., Usecare, DTI, Code Repository, Pipeline Builder etc.) Analysing data within Palantir to extract insights for easy interpretation and Exploratory Data Analysis (e.g., Contour). Querying and Programming Skills: Utilizing programming languages query or scripts (e.g., Python, SQL) to interact with the data and perform analyses. Understanding relational data structures and data modelling to optimize data storage and retrieval based on OLAP engine principles. Distributed Frameworks with Automation using Spark APIs (e.g., PySpark, Spark SQL, RDD/DF) to automate processes and workflows within Palantir with external libraries (e.g., Pandas, NumPy etc.). API Integration: Integrating Palantir with other systems and applications using APIs for seamless data flow. Understanding of integration analysis, specification, and solution design based on different scenarios (e.g., Batch/Realtime Flow, Incremental Load etc.). Optimize data pipelines and finetune Foundry configurations to enhance system performance and efficiency. Unit Testing, Issues Identification, Debugging & Trouble shooting, End user documentation. Strong experience on Data Warehousing, Data Engineering, and Data Modelling problem statements. Knowledge of security related principles by ensuring data privacy and security while working with sensitive information. Familiarity with integrating machine learning and AI capabilities within the Palantir environment for advanced analytics. Non-Technical Collaborate with stakeholders to identify opportunities for continuous improvement, understanding business need and innovation in data processes and solutions. Ensure compliance with policies for data privacy, security, and regulatory requirements. Provide training and support to end-users to maximize the effective use of Palantir Foundry. Self-driven learning of technologies being adopted by the organizational requirements. Work as part of a team or individuals as engineer in a highly collaborative fashion EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. 
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 4 days ago
3.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY GDS – Data and Analytics (D&A) –Senior - Palantir Job Overview Big Data Developer/Senior Data Engineer with 3 to 6+ years of experience who would display strong analytical, problem-solving, programming, Business KPIs understanding and communication skills. They should be self-learner, detail-oriented team members who can consistently meet deadlines and possess the ability to work independently as needed. He/she must be able to multi-task and demonstrate the ability to work with a diverse work group of stakeholders for healthcare/Life Science/Pharmaceutical domains. Responsibilities And Duties Technical - Design, develop, and maintain data models, integrations, and workflows within Palantir Foundry. Detailed understanding and Hands-on knowledge of Palantir Solutions (e.g., Usecare, DTI, Code Repository, Pipeline Builder etc.) Analysing data within Palantir to extract insights for easy interpretation and Exploratory Data Analysis (e.g., Contour). Querying and Programming Skills: Utilizing programming languages query or scripts (e.g., Python, SQL) to interact with the data and perform analyses. Understanding relational data structures and data modelling to optimize data storage and retrieval based on OLAP engine principles. Distributed Frameworks with Automation using Spark APIs (e.g., PySpark, Spark SQL, RDD/DF) to automate processes and workflows within Palantir with external libraries (e.g., Pandas, NumPy etc.). API Integration: Integrating Palantir with other systems and applications using APIs for seamless data flow. Understanding of integration analysis, specification, and solution design based on different scenarios (e.g., Batch/Realtime Flow, Incremental Load etc.). Optimize data pipelines and finetune Foundry configurations to enhance system performance and efficiency. Unit Testing, Issues Identification, Debugging & Trouble shooting, End user documentation. Strong experience on Data Warehousing, Data Engineering, and Data Modelling problem statements. Knowledge of security related principles by ensuring data privacy and security while working with sensitive information. Familiarity with integrating machine learning and AI capabilities within the Palantir environment for advanced analytics. Non-Technical Collaborate with stakeholders to identify opportunities for continuous improvement, understanding business need and innovation in data processes and solutions. Ensure compliance with policies for data privacy, security, and regulatory requirements. Provide training and support to end-users to maximize the effective use of Palantir Foundry. Self-driven learning of technologies being adopted by the organizational requirements. Work as part of a team or individuals as engineer in a highly collaborative fashion EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. 
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 4 days ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY GDS – Data and Analytics (D&A) –Senior - Palantir Job Overview Big Data Developer/Senior Data Engineer with 3 to 6+ years of experience who would display strong analytical, problem-solving, programming, Business KPIs understanding and communication skills. They should be self-learner, detail-oriented team members who can consistently meet deadlines and possess the ability to work independently as needed. He/she must be able to multi-task and demonstrate the ability to work with a diverse work group of stakeholders for healthcare/Life Science/Pharmaceutical domains. Responsibilities And Duties Technical - Design, develop, and maintain data models, integrations, and workflows within Palantir Foundry. Detailed understanding and Hands-on knowledge of Palantir Solutions (e.g., Usecare, DTI, Code Repository, Pipeline Builder etc.) Analysing data within Palantir to extract insights for easy interpretation and Exploratory Data Analysis (e.g., Contour). Querying and Programming Skills: Utilizing programming languages query or scripts (e.g., Python, SQL) to interact with the data and perform analyses. Understanding relational data structures and data modelling to optimize data storage and retrieval based on OLAP engine principles. Distributed Frameworks with Automation using Spark APIs (e.g., PySpark, Spark SQL, RDD/DF) to automate processes and workflows within Palantir with external libraries (e.g., Pandas, NumPy etc.). API Integration: Integrating Palantir with other systems and applications using APIs for seamless data flow. Understanding of integration analysis, specification, and solution design based on different scenarios (e.g., Batch/Realtime Flow, Incremental Load etc.). Optimize data pipelines and finetune Foundry configurations to enhance system performance and efficiency. Unit Testing, Issues Identification, Debugging & Trouble shooting, End user documentation. Strong experience on Data Warehousing, Data Engineering, and Data Modelling problem statements. Knowledge of security related principles by ensuring data privacy and security while working with sensitive information. Familiarity with integrating machine learning and AI capabilities within the Palantir environment for advanced analytics. Non-Technical Collaborate with stakeholders to identify opportunities for continuous improvement, understanding business need and innovation in data processes and solutions. Ensure compliance with policies for data privacy, security, and regulatory requirements. Provide training and support to end-users to maximize the effective use of Palantir Foundry. Self-driven learning of technologies being adopted by the organizational requirements. Work as part of a team or individuals as engineer in a highly collaborative fashion EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. 
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 4 days ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY GDS – Data and Analytics (D&A) –Senior - Palantir Job Overview Big Data Developer/Senior Data Engineer with 3 to 6+ years of experience who would display strong analytical, problem-solving, programming, Business KPIs understanding and communication skills. They should be self-learner, detail-oriented team members who can consistently meet deadlines and possess the ability to work independently as needed. He/she must be able to multi-task and demonstrate the ability to work with a diverse work group of stakeholders for healthcare/Life Science/Pharmaceutical domains. Responsibilities And Duties Technical - Design, develop, and maintain data models, integrations, and workflows within Palantir Foundry. Detailed understanding and Hands-on knowledge of Palantir Solutions (e.g., Usecare, DTI, Code Repository, Pipeline Builder etc.) Analysing data within Palantir to extract insights for easy interpretation and Exploratory Data Analysis (e.g., Contour). Querying and Programming Skills: Utilizing programming languages query or scripts (e.g., Python, SQL) to interact with the data and perform analyses. Understanding relational data structures and data modelling to optimize data storage and retrieval based on OLAP engine principles. Distributed Frameworks with Automation using Spark APIs (e.g., PySpark, Spark SQL, RDD/DF) to automate processes and workflows within Palantir with external libraries (e.g., Pandas, NumPy etc.). API Integration: Integrating Palantir with other systems and applications using APIs for seamless data flow. Understanding of integration analysis, specification, and solution design based on different scenarios (e.g., Batch/Realtime Flow, Incremental Load etc.). Optimize data pipelines and finetune Foundry configurations to enhance system performance and efficiency. Unit Testing, Issues Identification, Debugging & Trouble shooting, End user documentation. Strong experience on Data Warehousing, Data Engineering, and Data Modelling problem statements. Knowledge of security related principles by ensuring data privacy and security while working with sensitive information. Familiarity with integrating machine learning and AI capabilities within the Palantir environment for advanced analytics. Non-Technical Collaborate with stakeholders to identify opportunities for continuous improvement, understanding business need and innovation in data processes and solutions. Ensure compliance with policies for data privacy, security, and regulatory requirements. Provide training and support to end-users to maximize the effective use of Palantir Foundry. Self-driven learning of technologies being adopted by the organizational requirements. Work as part of a team or individuals as engineer in a highly collaborative fashion EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. 
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 4 days ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
As a Full Stack Developer, you will be responsible for developing full stack applications by writing clean and efficient code. Your role will involve automating tasks using appropriate tools and scripting, as well as reviewing and debugging code. It will be essential for you to document development phases and closely monitor systems to ensure smooth operation. To excel in this role, you should have at least 2+ years of experience working with Python, Machine Learning, and related frameworks. You should be proficient in creating RESTful APIs using Python Django and have the ability to seamlessly integrate backend APIs with frontend applications. Additionally, you should possess knowledge of developing applications based on machine learning, deep learning, Gen AI, and LLM. Your expertise should include a strong command over programming languages like Python, SQL, and Big Query. Hands-on experience with ML tools such as Numpy, SciPy, Pandas, Scikit-Learn, Tensorflow, Langchain, and Vector Base will be highly advantageous. You should also demonstrate excellent problem-solving skills and the capability to work both independently and collaboratively within a team environment. If you are passionate about developing innovative applications and have a keen interest in the field of Machine Learning, this role offers you the opportunity to showcase your skills and contribute to cutting-edge projects.,
Posted 4 days ago
6.0 - 10.0 years
0 Lacs
pune, maharashtra
On-site
You will be responsible for leading the development of Python-based applications in Pune with 6-8 years of experience. Your role will involve designing and implementing complex application features and backend services to ensure high performance and scalability. Collaborating with stakeholders to define system requirements and specifications will be essential, along with optimizing performance, scalability, and security of the applications. Managing data integration and handling large datasets efficiently will also be part of your responsibilities. As a lead developer, you will conduct code reviews, contribute to improving coding practices, and mentor junior developers. Developing and maintaining automated unit and integration tests will be crucial for ensuring the quality of the applications. You will also collaborate with DevOps teams for continuous integration/continuous deployment (CI/CD) and cloud deployments. To excel in this role, you should have strong experience in Python and related libraries such as Pandas, NumPy, and Celery. Extensive experience with web frameworks like Django or Flask is required, along with in-depth knowledge of RESTful API design and development. Experience with databases such as PostgreSQL, MySQL, MongoDB, or others is essential. Proficiency with cloud platforms like AWS, Azure, or Google Cloud is necessary, as well as hands-on experience with Docker, Kubernetes, and microservices architecture. Familiarity with asynchronous programming and task queues (e.g., Celery, RabbitMQ), a strong understanding of software design patterns and architecture, and the ability to lead technical discussions and make architectural decisions are also key skills required for this role. Excellent problem-solving skills and attention to detail will be essential for success in this position.,
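As a brief illustration of the asynchronous task-queue stack mentioned above, here is a minimal Celery sketch. It assumes a local Redis broker at the default URL, and the task body and names are placeholders.

```python
# Minimal Celery sketch; assumes a local Redis broker (RabbitMQ would work similarly).
from celery import Celery

app = Celery("jobs", broker="redis://localhost:6379/0", backend="redis://localhost:6379/1")

@app.task
def aggregate_report(dataset_id: str) -> dict:
    # Placeholder for a long-running aggregation over a large dataset
    return {"dataset_id": dataset_id, "status": "done"}

# Enqueued from a Django/Flask request handler, for example:
# result = aggregate_report.delay("sales_2024")
# print(result.get(timeout=30))
```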
Posted 4 days ago
2.0 - 6.0 years
0 Lacs
noida, uttar pradesh
On-site
You should have a Bachelors degree in Computer Science, Electrical Engineering, or a related field. A strong understanding of computer vision fundamentals including image processing and feature extraction is required. Experience with machine learning frameworks like OpenCV, TensorFlow, PyTorch, or Keras is essential. Proficiency in Python and relevant libraries such as NumPy, SciPy, and Matplotlib is expected. Basic knowledge of deep learning models like CNNs, RNNs, and transformers is necessary. Familiarity with model architectures such as ResNet, YOLO, SSD, and U-Net is preferred. You should have an understanding of algorithms, data structures, and relevant mathematical concepts. Knowledge of cloud platforms like AWS and GCP, as well as edge computing frameworks, is a plus. Experience with GPU computing using CUDA and cuDNN is beneficial. Previous hands-on experience in computer vision projects or open-source contributions would be desirable for this role.,
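To illustrate the deep-learning side of these requirements, here is a minimal PyTorch sketch that defines a tiny CNN and runs a forward pass on random image-shaped tensors. The architecture and shapes are arbitrary choices for the example.

```python
# Minimal sketch: a tiny CNN and a forward pass on random 32x32 RGB tensors.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)            # (N, 32, 8, 8) for 32x32 input
        return self.head(x.flatten(1))  # class logits

model = TinyCNN()
logits = model(torch.randn(4, 3, 32, 32))  # batch of 4 RGB 32x32 images
print(logits.shape)  # torch.Size([4, 10])
```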
Posted 4 days ago
6.0 - 10.0 years
0 Lacs
Greater Kolkata Area
On-site
Job Title: Data Scientist
Experience: 6 to 10 Years
Location: Noida, Bangalore, Pune
Employment Type: Full-time

Job Summary
We are seeking a highly skilled and experienced Data Scientist with a strong background in Natural Language Processing (NLP), Generative AI, and Large Language Models (LLMs). The ideal candidate will be proficient in Python and have hands-on experience working with both Google Cloud Platform (GCP) and Amazon Web Services (AWS). You will play a key role in designing, developing, and deploying AI-driven solutions to solve complex business problems.

Key Responsibilities
- Design and implement NLP and Generative AI models for use cases such as chatbots, text summarization, question answering, and information extraction.
- Fine-tune and deploy Large Language Models (LLMs) using frameworks such as Hugging Face Transformers or LangChain.
- Conduct experiments, evaluate model performance, and implement improvements for production-scale solutions.
- Collaborate with cross-functional teams including product managers, data engineers, and ML engineers.
- Deploy and manage ML models on cloud platforms (GCP and AWS), using services such as Vertex AI, SageMaker, Lambda, Cloud Functions, etc.
- Build and maintain ML pipelines for training, validation, and deployment using CI/CD practices.
- Communicate complex technical findings in a clear and concise manner to both technical and non-technical stakeholders.

Required Skills
- Strong proficiency in Python and common data science/ML libraries (NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch).
- Proven experience in Natural Language Processing (NLP) techniques (NER, sentiment analysis, embeddings, topic modeling, etc.).
- Hands-on experience with Generative AI and LLMs (e.g., GPT, BERT, T5, LLaMA, Claude, Gemini).
- Experience with LLMOps, prompt engineering, and fine-tuning pre-trained language models.
- Experience with GCP (BigQuery, Vertex AI, Cloud Functions, etc.) and/or AWS (SageMaker, S3, Lambda, etc.).
- Familiarity with containerization (Docker), orchestration (Kubernetes), and model deployment best practices. (ref:hirist.tech)
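As a small illustration of the NLP tasks listed above, here is a minimal Hugging Face pipeline sketch. It uses the library's default checkpoints (which are downloaded on first run), whereas a production system would pin specific models.

```python
# Minimal sketch of two NLP tasks via the Hugging Face pipeline API.
# Uses default checkpoints; downloads models on first run.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")
print(sentiment("The new onboarding flow is much faster and easier to use."))

summarizer = pipeline("summarization")
text = (
    "Customers reported that the updated mobile app reduced the time needed to complete "
    "a payment, although a few users experienced login issues after the latest release."
)
print(summarizer(text, max_length=30, min_length=10, do_sample=False))
```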
Posted 4 days ago
0.0 - 3.0 years
0 Lacs
bhubaneswar
On-site
You are a qualified Python Trainer joining our institution in Bhubaneshwar, focused on teaching B.Tech, MCA, and M.Sc-IT courses. Your primary responsibility is delivering engaging lectures on Python and related technologies such as data handling, object-oriented programming, web frameworks, and emerging Python applications in automation and data science. Your responsibilities include: - Teaching and Lecturing: Deliver comprehensive lectures on topics like Core Python, Object-Oriented Programming, Data Structures, Web Development with Django/Flask, Python for Data Analysis, and Scripting for Automation. Ensure students gain theoretical knowledge and practical programming skills. - Research and Innovation: Conduct research on Python applications, stay updated with the latest libraries and frameworks, and guide students on research projects. - Mentorship: Support students in developing Python projects, preparing for internships, and applying theoretical knowledge in practical scenarios. - Collaboration: Work with faculty and industry professionals to enhance curriculum delivery and establish partnerships for guest lectures, hackathons, and internships. - Professional Development: Stay updated with emerging Python trends like AI, ML, and data engineering through workshops, webinars, and technical events. Qualifications: - Masters degree in Information Technology, Computer Science, or related field with 6 months to 1 year of experience. - Expertise in Python and prior teaching or mentoring experience is desirable. Skills and Competencies: - Strong command over Core and Advanced Python programming. - Hands-on experience with frameworks like Django, Flask, or FastAPI. - Familiarity with tools such as Pandas, NumPy, Matplotlib, and automation libraries. - Excellent communication, presentation, and problem-solving skills. - Ability to motivate and inspire students towards coding excellence. Benefits include a competitive salary, commuter assistance, leave encashment, paid time off, and Provident Fund. You'll work in a dynamic educational environment, shaping the next generation of Python developers and contributing to real-world problem-solving through teaching. This is a full-time position requiring a day shift schedule. Proficiency in English is required, and the work location is in person at Bhubaneshwar.,
Posted 4 days ago
2.0 - 13.0 years
0 Lacs
kolkata, west bengal
On-site
You are a highly motivated and technically strong Data Scientist / MLOps Engineer with 13 years of experience, looking to join our growing AI & ML team in Kolkata. Your role will involve designing, developing, and deploying scalable machine learning solutions with a focus on operational excellence, data engineering, and GenAI integration. Your key responsibilities will include building and maintaining scalable machine learning pipelines using Python, deploying and monitoring models using MLFlow and MLOps stacks, designing and implementing data workflows with PySpark, and leveraging standard data science libraries for model development. You will also work with GenAI technologies such as Azure OpenAI and collaborate with cross-functional teams to meet business objectives. To excel in this role, you must have expertise in Python for data science and backend development, solid experience with PostgreSQL and MSSQL databases, hands-on experience with data science packages like Scikit-Learn, Pandas, Numpy, and Matplotlib, as well as experience with Databricks, MLFlow, and Azure. A strong understanding of MLOps frameworks and deployment automation is essential, along with prior exposure to FastAPI and GenAI tools like Langchain or Azure OpenAI. Preferred qualifications include experience in the Finance, Legal, or Regulatory domain, working knowledge of clustering algorithms and forecasting techniques, and previous experience in developing reusable AI frameworks or productized ML solutions. You should hold a B.Tech in Computer Science, Data Science, Mechanical Engineering, or a related field. By joining us, you will work on cutting-edge ML and GenAI projects, be part of a collaborative and forward-thinking team, and have opportunities for rapid growth and technical leadership. This is a full-time position based in Kolkata, with benefits including leave encashment, paid sick time, paid time off, Provident Fund, and work from home options. If you meet the required qualifications and are excited about the prospect of working in a dynamic AI & ML environment, we encourage you to apply before the application deadline of 02/08/2025. The expected start date for this position is 04/08/2025.,
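To illustrate the MLFlow-based tracking mentioned in this stack, here is a minimal sketch that trains a toy scikit-learn model and logs parameters, metrics, and the model artifact. The dataset is synthetic and the hyperparameters are arbitrary; by default the run is written to a local ./mlruns directory.

```python
# Minimal MLflow tracking sketch on a synthetic classification task.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")  # stores the fitted model as a run artifact
```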
Posted 4 days ago
15.0 - 19.0 years
0 Lacs
thane, maharashtra
On-site
The role of VP Data Scientist in Thane involves driving analytics opportunities in the BFSI sector, overseeing project delivery, managing teams and stakeholders. The ideal candidate should possess a robust background in advanced analytics, with a history of leading impactful analytics solutions and programs for Indian clients. Additionally, the candidate must have experience in establishing and supervising large-scale Business Intelligence units. With over 15 years of relevant experience, the right candidate should showcase expertise in various areas including data processing and data science libraries of Python such as NumPy, Pandas, Scikit learn. They should also demonstrate proficiency in handling massive datasets through tools like Apache Spark, Vaex, Dask, and leveraging them for machine learning algorithms development. Moreover, the candidate is expected to be well-versed in analytical models like promotion optimization, NLP, Cluster Analysis, Segmentation, Neural network models, Logistic Regression, ANN based model, LSTM, Transformers, and more. Familiarity with cloud-based analytics platforms like Azure, GCP, or AWS is crucial. Experience in automating model training, deployment, and monitoring using ML-Ops pipelines is preferred. Key responsibilities for this role include stakeholder management, project planning, scope definition, and project methodology explanation to stakeholders. The VP Data Scientist will lead end-to-end project deliverables, ensure adherence to SOPs, and provide training to junior team members. Managing a team of 20+ data scientists, overseeing project delivery, and addressing business challenges are essential aspects of this role. The ideal candidate should hold a degree in BE/B.Tech./MCA/M.Tech. and demonstrate deep expertise in Python, SAS. They must possess a strong track record of successful project management, stakeholder engagement, and team leadership within the BFSI sector.,
Posted 5 days ago
8.0 - 15.0 years
0 Lacs
pune, maharashtra
On-site
The Applications Development Technology Lead Analyst position is a senior-level role that involves implementing new or updated application systems and programs in collaboration with the Technology team. Your main responsibility will be to lead applications systems analysis and programming activities. You will work closely with various management teams to ensure the integration of functions to achieve goals and to identify and define necessary system enhancements for deploying new products and process improvements. You will tackle high-impact problems/projects by evaluating complex business processes, system processes, and industry standards. Your expertise in applications programming will be crucial in ensuring that application design aligns with the overall architecture blueprint. As a Technology Lead Analyst, you will need to have a deep understanding of system flow and develop coding, testing, debugging, and implementation standards. Additionally, you will need to develop a comprehensive knowledge of how different areas of business integrate to achieve business objectives. Your role will also involve providing in-depth analysis to define issues and develop innovative solutions. You will be expected to serve as an advisor or coach to mid-level developers and analysts, assigning work as needed. It is essential that you appropriately assess risk when making business decisions, with a focus on safeguarding Citigroup and its assets by ensuring compliance with laws, rules, and regulations. To qualify for this role, you should have 9-15 years of relevant experience in Apps Development or systems analysis. Additionally, you should be a highly skilled senior core python developer with extensive experience in software building and platform engineering. Proficiency in core python concepts and libraries such as pandas, numpy, and scipy is required, along with a solid understanding of OOPs concepts and design patterns. Strong computer science fundamentals and experience with Unix-based operating systems are also necessary. Having hands-on experience in writing SQL queries, familiarity with source code management tools, and experience in the banking domain are considered advantageous. A CFA/FRM certification is also a plus. Your educational background should include a Bachelor's degree or equivalent experience, with a preference for a Master's degree. In summary, this role requires a Python developer with significant experience in developing python applications for Risk and Finance analytical purposes. Your expertise in core python, python libraries, and databases, along with your analytical and logical skills, will be instrumental in delivering high-performance applications that involve data computation, analysis, and processing. Please note that this job description provides a general overview of the responsibilities and qualifications. Other duties may be assigned as necessary.,
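As a rough illustration of the kind of Risk and Finance computation such a Python application might perform, here is a small NumPy/Pandas sketch of historical Value-at-Risk and expected shortfall on simulated P&L. The figures are synthetic and the metric choice is an assumption for the example, not the bank's methodology.

```python
# Minimal sketch: historical VaR and expected shortfall on simulated daily P&L.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
pnl = pd.Series(rng.normal(loc=0.0, scale=1_000_000, size=750), name="daily_pnl")  # ~3y of daily P&L

confidence = 0.99
var_99 = -np.percentile(pnl, (1 - confidence) * 100)   # loss threshold exceeded 1% of the time
es_99 = -pnl[pnl <= -var_99].mean()                    # average loss beyond that threshold

print(f"1-day 99% VaR: {var_99:,.0f}")
print(f"1-day 99% ES:  {es_99:,.0f}")
```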
Posted 5 days ago
2.0 - 6.0 years
0 Lacs
Haryana
On-site
As a Data Engineer at our company, your primary responsibility will be developing and maintaining scalable, efficient data pipelines and ETL processes using Python and related technologies. You will play a crucial role in optimizing the performance of these pipelines and queries to handle large volumes of data and improve processing times. Collaboration is key in this role, as you will work closely with our team of data scientists and engineers at Matrix Space.

To excel in this position, you should have 2-5 years of experience in data engineering or a related field with a strong focus on Python. Proficiency in Python programming is a must, including libraries such as Pandas, NumPy, and SQLAlchemy. Hands-on experience with data engineering tools and frameworks like Apache Airflow, Luigi, or similar is highly desirable, and a solid grasp of SQL and experience with relational databases such as PostgreSQL and MySQL will be beneficial.

In addition to technical skills, we value problem-solving ability, the capacity to work both independently and collaboratively, and effective communication. You should be able to explain technical concepts to non-technical stakeholders and demonstrate a proven track record of completing tasks efficiently. If you can join within a week, we encourage you to apply. Join our team and be part of an exciting journey in data engineering where your skills and expertise will be valued and put to good use.
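For illustration only (not part of the posting), a minimal sketch of the ETL pattern described above, assuming Airflow 2.4+, pandas, and SQLAlchemy are installed. The file path, connection string, and table names are hypothetical.

from datetime import datetime
import pandas as pd
from sqlalchemy import create_engine
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_transform_load():
    # Extract: read a raw CSV drop (hypothetical path).
    df = pd.read_csv("/data/raw/orders.csv")
    # Transform: basic cleaning and typing.
    df = df.dropna(subset=["order_id"])
    df["amount"] = df["amount"].astype(float)
    # Load: write to PostgreSQL via SQLAlchemy.
    engine = create_engine("postgresql+psycopg2://etl@localhost/warehouse")
    df.to_sql("orders_clean", engine, if_exists="append", index=False)

with DAG(
    dag_id="orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="etl", python_callable=extract_transform_load)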
Posted 5 days ago
0.0 - 4.0 years
0 - 0 Lacs
Bhopal, Madhya Pradesh
On-site
You should be proficient in Python, including file handling, RESTful APIs, and core data structures such as lists and tuples. Experience in ML/data science with NumPy, Pandas, EDA, TensorFlow, Keras, and Matplotlib is required, and familiarity with Large Language Models (LLMs) is preferred. Good communication skills are a must for this role. The minimum duration for this position is 6 months, and the stipend, depending on experience, ranges from 5K to 25K. This is a remote opportunity, and selected candidates will work from home. Please note that this opportunity is for deserving candidates only.
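Purely as an illustrative sketch (not part of the posting), the following combines the listed libraries: quick EDA with pandas and Matplotlib, then a tiny Keras classifier. The dataset and column names (students.csv, hours, attendance, passed) are hypothetical.

import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf

df = pd.read_csv("students.csv")  # hypothetical: hours, attendance, passed
print(df.describe())              # basic EDA summary
df["hours"].hist()
plt.savefig("hours.png")

# A minimal binary classifier over two numeric features.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(df[["hours", "attendance"]].values, df["passed"].values, epochs=10)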
Posted 5 days ago
2.0 - 6.0 years
0 Lacs
Punjab
On-site
You should have at least 2 years of experience in Python development and a solid understanding of AI/ML concepts, with hands-on experience in frameworks such as TensorFlow, PyTorch, or scikit-learn. You should also be experienced with RESTful APIs, web frameworks such as Flask or Django, and data manipulation libraries like Pandas and NumPy. Proficiency in version control with Git is required, along with good problem-solving and communication skills.

Your responsibilities will include developing, testing, and maintaining Python-based applications and APIs; integrating AI/ML models into production-grade systems; collaborating with data scientists, engineers, and product teams to build scalable solutions; designing and implementing data pipelines and automation tools; optimizing code for performance and scalability; and staying updated on industry trends and best practices in Python development and AI.

Preferred qualifications include experience deploying machine learning models in production, knowledge of cloud platforms such as AWS, GCP, or Azure, familiarity with containerization tools like Docker or Kubernetes, and a background in computer science, mathematics, or a related field.

Sunfocus Solutions Private Limited, based in India, is a business solutions IT company with a team of over 100 technology experts providing world-class IT solutions to clients. Sunfocus Solutions is a Top Rated PLUS agency on Upwork and delivers full mobile and web solutions for startups, small businesses, and large enterprises globally. In addition to a competitive salary, the company offers benefits such as cell phone reimbursement, commuter assistance, health insurance, internet reimbursement, leave encashment, life insurance, and work-from-home options, along with joining, performance, and yearly bonuses. The work schedule includes day and morning shifts.

The education requirement is a Bachelor's degree, and the job type is full-time and permanent. The ideal candidate should be able to commute or relocate to Mohali, Punjab; two years of prior Python experience with AI knowledge is preferred. The work location is in person at 1st floor, Plot No: C-127, Phase-8, Industrial Area, Sahibzada Ajit Singh Nagar, Punjab 160072.
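For illustration only (not the employer's code), a hedged sketch of the pattern described above: exposing a trained scikit-learn model behind a small Flask REST API. The model file and feature layout are hypothetical.

import joblib
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # hypothetical pre-trained estimator

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"features": [1.2, 3.4, 5.6]}.
    features = np.array(request.get_json()["features"]).reshape(1, -1)
    prediction = model.predict(features)
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)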
Posted 5 days ago
8.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
On-site
As an Enterprise AI Platform Solution Architect, your primary responsibility will be to formulate a pragmatic blueprint for developing AI solutions using deep learning. You will work closely with a cross-functional team of talented engineers to design and build advanced, efficient AI-based solutions to business problems. Collaboration with engineers, researchers, and data specialists will be crucial in implementing coding best practices and maintaining code repositories. You will be responsible for the quality and responsiveness of AI platforms, ensuring they meet requirements gathered from client meetings and workshops. You will also contribute to writing comprehensive proposals and will work in an agile environment to deliver assigned tasks within specified timelines, adhering to software development best practices and coding standards to improve quality and productivity.

In this role, you will guide, coach, and lead a team of engineers specializing in AI, full-stack development, QA automation, and DevOps. You should have a strong educational background in Computer Science or a related field, 8-12 years of experience on live, enterprise-level industry projects, and domain expertise in areas such as NLP, Computer Vision, or Cyber Security (data science). Proficiency in Python programming, TensorFlow, scikit-learn, Keras, and other relevant AI and machine learning libraries is required, along with a strong ability to manipulate data using Pandas, NumPy, and other Python libraries for data munging and cleansing. Experience with Agile methodology, CI/CD patterns, and working with microservices, APIs, and Docker containers is essential. You should also have hands-on experience with relational databases and writing SQL queries, and an understanding of enterprise application architecture. Proficiency with tools and technologies such as Python, TensorFlow, PyTorch, Django/Flask, MySQL, Git, Bitbucket, Jira, Confluence, and cloud platforms like Google Cloud, AWS, and Azure will be valuable for this role.
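As a purely illustrative sketch (not the employer's code) of one NLP task named in the posting, here is a minimal text-classification pipeline with scikit-learn; the example texts and intent labels are hypothetical.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical support tickets and their intents.
texts = ["reset my password", "card was charged twice", "update billing address"]
labels = ["account", "dispute", "billing"]

# TF-IDF features feeding a linear classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["I was charged two times"]))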
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
Punjab
On-site
Offshore Software Solutions is looking for a Data Scientist with 3-5 years of experience analyzing health data from devices such as smartwatches and fitness trackers. The role involves developing custom machine learning models using TensorFlow to derive actionable insights that improve health monitoring and user well-being. Your responsibilities will include analyzing health data from wearables, developing and deploying machine learning models, collaborating with cross-functional teams, preprocessing data, ensuring compliance with privacy standards, optimizing models for accuracy, and presenting findings to stakeholders. You will also stay current with the latest machine learning research on health data from wearable devices.

Requirements include a Bachelor's or Master's degree in Data Science or a related field, 3+ years of data analysis experience, expertise in TensorFlow, knowledge of deep learning and neural networks, proficiency in Python and its data science libraries, familiarity with health data metrics, experience with data visualization tools and databases, and strong problem-solving skills. Preferred qualifications include experience with wearable device APIs, an understanding of time-series analysis and anomaly detection, experience in edge computing, and good communication skills for effective collaboration. If you are passionate about leveraging data science to improve health monitoring and user well-being and possess the required skills and qualifications, we encourage you to apply for this opportunity at Offshore Software Solutions in Mohali, Punjab.
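For illustration only (not part of the posting), a hedged sketch of the kind of wearable-data modelling described above: a tiny Keras LSTM that forecasts the next heart-rate sample, with large forecast errors flagged as possible anomalies. The signal here is synthetic.

import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
heart_rate = 70 + 5 * np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 1, 500)

# Build sliding windows: predict the next sample from the previous 10.
window = 10
X = np.array([heart_rate[i:i + window] for i in range(len(heart_rate) - window)])
y = heart_rate[window:]
X = X[..., None]  # shape (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)

# Flag samples whose forecast error is far above the mean.
errors = np.abs(model.predict(X, verbose=0).ravel() - y)
print("possible anomalies at indices:",
      np.where(errors > errors.mean() + 3 * errors.std())[0])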
Posted 5 days ago
4.0 years
0 Lacs
India
Remote
At SpicyChat, we’re on a mission to build the best uncensored roleplaying agent in the world, and we’re looking for a passionate Data Scientist to join our team. Whether you’re early in your data science career or growing into a mid-senior role, this is a unique opportunity to work hands-on with state-of-the-art LLMs in a fast-paced, supportive environment.

Role Overview
We’re looking for a Data Scientist (Junior to Mid-Senior level) who will support our LLM projects across the full data pipeline, from building clean datasets and dashboards to fine-tuning models and supporting cross-functional collaboration. You’ll work closely with ML engineers, product teams, and data annotation teams to bring AI solutions to life.

What You’ll Be Doing
ETL and Data Pipeline Development: Design and implement data extraction, transformation, and loading (ETL) pipelines. Work with structured and unstructured data from various sources.
Data Preparation: Clean, label, and organize datasets for training and evaluating LLMs. Collaborate with annotation teams to ensure high data quality.
Model Fine-Tuning & Evaluation: Support the fine-tuning of LLMs for specific use cases. Assist in model evaluation, prompt engineering, and error analysis.
Dashboarding & Reporting: Create and maintain internal dashboards to track data quality, model performance, and annotation progress. Automate reporting workflows to help stakeholders stay informed. (A minimal dashboard sketch follows this listing.)
Team Coordination & Collaboration: Communicate effectively with ML engineers, product managers, and data annotators. Ensure that data science deliverables align with product and business goals.
Research & Learning: Stay current with developments in LLMs, fine-tuning techniques, and the AI ecosystem. Share insights with the team and suggest improvements based on new findings.

Qualifications
Required:
1–4 years of experience in a data science, ML, or analytics role.
Proficient in Python and data science libraries (Pandas, NumPy, scikit-learn).
Experience with SQL and data visualization tools (e.g., Streamlit, Dash, Tableau, or similar).
Familiarity with machine learning workflows and working with large datasets.
Strong communication and organizational skills.
Bonus Points For:
Experience fine-tuning or evaluating large language models (e.g., OpenAI, Hugging Face, LLaMA, Mistral, etc.).
Knowledge of prompt engineering or generative AI techniques.
Exposure to tools like Weights & Biases, Airflow, or cloud platforms (AWS, GCP, Azure).
Previous work with cross-functional or remote teams.

Why Join NextDay AI?
🌍 Remote-first: Work from anywhere in the world.
⏰ Flexible hours: Create a schedule that fits your life.
🌴 Unlimited leave: Take the time you need to rest and recharge.
🚀 Hands-on with LLMs: Get practical experience with cutting-edge AI systems.
🤝 Collaborative culture: Join a supportive, ambitious team working on real-world impact.
🌟 Mission-driven: A chance to be part of an exciting mission and an amazing team.

Ready to join us in creating the ultimate uncensored roleplaying agent? Send us your resume along with some details on your coolest projects. We’re excited to see what you’ve been working on!
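As an illustration only (not SpicyChat code), a minimal sketch of the internal dashboarding work mentioned above, assuming Streamlit and pandas; the file name and columns (annotations.csv, annotator, status) are hypothetical.

import pandas as pd
import streamlit as st

st.title("Annotation progress")

df = pd.read_csv("annotations.csv")  # hypothetical export: annotator, status, label
done = (df["status"] == "done").mean()

st.metric("Completed", f"{done:.0%}")              # overall completion rate
st.bar_chart(df.groupby("annotator")["status"].count())  # items per annotator
st.dataframe(df.tail(20))                          # most recent rows for spot checks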
Posted 5 days ago