
33 Jupyter Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 7.0 years

3 - 7 Lacs

Hyderabad, Telangana, India

On-site

Job Summary:
- Having meetings with team members regarding projects.
- Collecting and interpreting data.
- Automating and integrating processes.
- Researching solutions to overcome data analytics challenges.
- Developing complex mathematical models that integrate business rules and requirements.
- Creating machine learning models.
- Communicating and meeting with engineers, IT teams, and other interested parties.
- Sharing complex ideas verbally and visually in an understandable manner with non-technical stakeholders.

Experience in technologies like Python, Jupyter, machine learning algorithms, SQL, data visualization, and statistical or mathematical software.

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As an experienced and passionate AI/ML Trainer, you will deliver hands-on training for a structured Artificial Intelligence & Machine Learning program. This contract-based opportunity is ideal for professionals with strong technical expertise and a flair for teaching.

Your key responsibilities include delivering in-depth classroom and lab-based training sessions on core AI/ML topics; designing, developing, and maintaining training material, assignments, and project modules; conducting doubt-clearing sessions, assessments, and live coding exercises; guiding students through capstone projects and real-world datasets; and evaluating learner performance and providing constructive feedback. You will also be responsible for staying up to date with the latest industry trends, tools, and techniques in AI/ML.

Topics covered during the training sessions include Python for Data Science; Statistics and Linear Algebra basics; Machine Learning (supervised, unsupervised, ensemble techniques); Deep Learning (ANN, CNN, RNN, etc.); Natural Language Processing (NLP); Model Deployment (Flask, Streamlit, etc.); hands-on work with libraries such as NumPy, Pandas, Scikit-learn, TensorFlow/Keras, and OpenCV; and tools such as Jupyter, Git, Google Colab, and VS Code.

To be eligible for this role, you should have a Bachelor's or Master's degree in Computer Science, Data Science, or a related field, along with at least 2-4 years of relevant industry/training experience in AI/ML. Strong communication and presentation skills are essential, as is the ability to mentor and engage learners of varying skill levels. Prior training or teaching experience, whether academic or corporate, is preferred.

Your skills should include ML, scikit-learn, TensorFlow, Jupyter, Natural Language Processing, OpenCV, Python for Data Science, Google Colab, Deep Learning, Model Deployment, NumPy, Keras, Linear Algebra, Statistics, Pandas, Git, and VS Code, along with a good understanding of artificial intelligence and machine learning concepts and comfort with contract-based engagement.

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Kolkata, West Bengal

On-site

As a Data Science Trainer, you will play a crucial role in delivering high-quality instruction in data science, machine learning, and AI. Your responsibilities include designing curriculum, conducting training sessions, and mentoring learners of different skill levels. You should possess strong theoretical knowledge coupled with hands-on experience in real-world data projects.

Your key responsibilities involve delivering engaging training sessions on topics such as Python for Data Science, Statistics & Probability, Machine Learning & Deep Learning, Data Visualization, Data Manipulation, and Tools & Platforms. You will also design and update training materials, provide one-on-one mentoring to learners, evaluate progress, and keep the curriculum up to date with industry trends.

To excel in this role, you need a Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or a related field, with at least 5 years of experience in data science or related domains. Prior experience in teaching or mentoring is advantageous. Proficiency in Python, machine learning algorithms, and data processing libraries, along with excellent communication and presentation skills, is essential. Your ability to explain complex concepts clearly and engagingly will be key to your success.

In addition to your core responsibilities, you will participate in webinars, workshops, and community-building activities, conduct code reviews, and support learners with debugging. Dedication to continuous learning and staying current with the latest industry technologies will be highly valued.

This full-time role requires working a morning shift at the designated in-person work location. If you are passionate about data science education and possess the necessary skills and experience, we look forward to receiving your application.

Posted 1 week ago

Apply

8.0 - 12.0 years

4 - 7 Lacs

Gurgaon, Haryana, India

On-site

Responsibilities include:
- Act as the liaison between Product Owners and technical developers, translating new requirements into designs, development strategies, and implementation plans.
- Build and implement a Data Lake by ingesting and integrating commercial data from diverse sources to support both descriptive and predictive analytics.
- Lead a cross-functional team of AWS developers, Snowflake developers, UI developers, and testers, working with technologies like Snowflake PL/SQL, StreamSets, AWS Glue, Lambda, Airflow, shell scripting, and React on EBS.
- Ensure data integrity and reporting consistency by establishing optimal processes and procedures for teams to follow.
- Conduct code reviews for Python Lambda functions and Snowflake stored procedures, optimizing them to adhere to industry best practices.
- Design and develop solutions that align with both business needs and central technology requirements.
- Lead estimation efforts, collaborating with architects for scoping and with business teams for prioritization.
- Validate and approve technical direction and implementation as part of the formal review process.
- Provide issue-resolution support before, during, and after deployment phases.
- Balance the requirements of individual product teams with the overarching Salesforce program objectives.
- Facilitate team meetings, lead Scrum ceremonies, and engage with business stakeholders.
- Enhance and expand the technical knowledge base within the BSC Salesforce community.

Basic Qualifications:
- Bachelor's degree with 5-7 years of experience leading Data Engineering teams.
- 6+ years of expertise in writing advanced SQL and PL/SQL, specifically in Snowflake or Oracle.
- 5+ years of hands-on experience with various AWS services, particularly Lambda and CloudFormation.
- Proficient in core Python, including advanced techniques such as slicing and working with dictionaries.
- Expertise in Snowflake, focusing on query optimization, automation, data governance, security (encryption, access control), compliance (HIPAA, GDPR), and workflow automation (Streams, Tasks, and Materialized Views).
- Experience in at least three big data projects utilizing Snowflake and Snowpark.
- Skilled in data warehouse and data mart modeling; proven ability to design efficient data models that optimize storage and retrieval, with experience securely sourcing external data from multiple sources.
- Strong background in addressing data scaling and product challenges, and in designing, building, and launching reliable data pipelines of varying complexity.
- Proficient in ETL tools such as IICS, StreamSets, Data Factory, etc.
- Familiar with major platforms like Salesforce, SAP, ServiceNow, Hana, and Zoom.
- Expertise in release management, with experience using Git, Jenkins, or other CI/CD tools.
- Experience consuming data from REST APIs using Python or Java, with the ability to parse JSON and XML data (see the illustrative sketch below).
- Hands-on experience with reporting tools like Tableau, Power BI, or SSRS.
- Strong written, verbal, and presentation skills, with the ability to collaborate effectively with team members.
- Familiar with Jira and other Atlassian tools for agile lifecycle management.
- Knowledge of quality assurance and documentation best practices in software development.

Preferred Qualifications:
- AWS platform certifications (Developer, Solutions Architect).
- SnowPro Core certification.
- Knowledge of ML & AI, Snowflake Cortex, and Document AI.
- Experience with Airflow, AWS Glue, and StreamSets; expertise in Jupyter Notebook.
- Experience with front-end web development frameworks, with hands-on React JS development.
- Expertise in Agile software development methods and processes.
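As a loose illustration of the REST-API consumption and JSON parsing called out in the qualifications above, here is a minimal Python sketch; the endpoint URL, token handling, and field names are hypothetical placeholders for demonstration, not details from the posting.

```python
import requests


def fetch_accounts(base_url: str, token: str) -> list[dict]:
    """Pull account records from a (hypothetical) REST endpoint and parse the JSON payload."""
    response = requests.get(
        f"{base_url}/api/v1/accounts",               # hypothetical endpoint path
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()                       # fail fast on HTTP errors
    payload = response.json()                         # parse the JSON body into Python objects
    # Keep only the fields a downstream warehouse load might need (illustrative names).
    return [
        {"account_id": rec.get("id"), "name": rec.get("name"), "region": rec.get("region")}
        for rec in payload.get("items", [])
    ]


if __name__ == "__main__":
    rows = fetch_accounts("https://example.invalid", token="dummy-token")
    print(f"Fetched {len(rows)} account records")
```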

Posted 1 week ago

Apply

5.0 - 10.0 years

11 - 16 Lacs

Gurugram

Hybrid

Responsibilities:
- Data Exploration and Insights: Conduct continuous data exploration and analysis to identify opportunities for enhancing data matching logic, including fuzzy logic, and for improving overall data quality within the SCI solution. This includes working with large datasets from various sources, including Excel files and databases.
- Data Quality Improvement: Perform analyses aimed specifically at improving data quality within the SCI system: identify data quality issues, propose solutions, and implement improvements.
- Weekly Playback and Collaboration: Participate in weekly playback sessions, using Jupyter Notebook to demonstrate data insights and analysis. Incorporate new explorations and analyses based on feedback from the working group and prioritized tasks.
- Project Scaling and Support: Contribute to scaling the SCI project by supporting data acquisition, cleansing, and validation for new markets, including pre-requisites for batch ingestion and post-ingestion analysis and validation of SCI records.
- Data Analysis and Validation: Perform thorough data analysis and validation of SCI records after batch ingestion; proactively identify insights and implement solutions to improve data quality.
- Stakeholder Collaboration: Coordinate with business stakeholders to facilitate manual validation of records flagged for manual intervention; communicate findings and recommendations clearly and effectively.

Technical Requirements:
- 5+ years of experience as a Data Scientist.
- Strong proficiency in Python and SQL.
- Extensive experience using Jupyter Notebook for data analysis and visualization.
- Working knowledge of data matching techniques, including fuzzy logic (see the illustrative sketch below).
- Experience working with large datasets from various sources (Excel, databases, etc.).
- Solid understanding of data quality principles and methodologies.

Skills: SQL, Machine Learning, Data Analysis, Jupyter Notebook, Data Cleansing, Fuzzy Logic, Python, Data Quality Improvement, Data Validation, Data Acquisition, Communication and Collaboration, Problem-solving and Analytical Skills.

Preferred Qualifications:
- Experience with specific data quality tools and techniques.
- Familiarity with cloud computing platforms (e.g., AWS, Azure, GCP).
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Knowledge of statistical modeling and machine learning algorithms.
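For context on the fuzzy-logic data matching that this and the similar Data Scientist (SCI) listings below reference, here is a minimal Python sketch using the standard-library difflib; the column names, sample records, and threshold are illustrative assumptions, not details from the postings.

```python
from difflib import SequenceMatcher

import pandas as pd


def similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity ratio between two strings (simple fuzzy match)."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()


# Two toy record sets standing in for, e.g., an Excel extract and a database table.
left = pd.DataFrame({"id": [1, 2], "customer_name": ["Acme Corp Ltd", "Globex Inc"]})
right = pd.DataFrame({"id": [10, 20], "customer_name": ["ACME Corporation Ltd.", "Initech"]})

THRESHOLD = 0.75  # illustrative cut-off for treating two names as a match

matches = []
for _, l in left.iterrows():
    for _, r in right.iterrows():
        score = similarity(l["customer_name"], r["customer_name"])
        if score >= THRESHOLD:
            matches.append((l["id"], r["id"], round(score, 2)))

print(matches)  # matched (left_id, right_id, score) pairs above the threshold
```

Records that fall below the threshold would typically be queued for manual review, mirroring the manual-intervention workflow the listing describes.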

Posted 2 weeks ago

Apply

8.0 - 10.0 years

8 - 10 Lacs

Bengaluru, Karnataka, India

On-site

Job Description

Basic Qualifications:
- Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field, or equivalent professional or military experience.
- 8+ years of total software development experience and 5+ years of experience in data platform implementation.
- Hands-on experience in implementation and performance tuning of Kinesis, Kafka, Spark, or similar technologies.
- Hands-on experience with the AWS technology stack and the AWS AI stack, including AWS SageMaker and MLOps.
- Experience in Python and Python frameworks (Django, Flask, Bottle), tools such as PyTorch and Jupyter, Java/.NET, and other open-source libraries; experience building and designing REST APIs (see the illustrative sketch below).
- DevOps / deployment automation using Terraform and Jenkins.
- Knowledge of software design patterns/architectures such as microservices, layered pattern, etc.
- Passionate teammate who understands and respects personal and cultural differences.
- Ability to work under pressure and be highly adaptable.
- Strong written and communication skills for collaboration with various teams and upper management.
- Solid analytical skills, especially in translating business requirements into technical design, with a continuous focus on aligning the technical roadmap with the immediate and long-term business strategy.
- Able to adapt to and embrace change and support business strategy and vision.

Preferred Qualifications:
- Bachelor's/Master's or PhD in Computer Science, Physics, Engineering, or Math.
- Hands-on experience working on large-scale data science/data analytics projects.
- Experience implementing AWS services in a variety of distributed computing and enterprise environments.
- Experience with at least one modern distributed Machine Learning and Deep Learning framework such as TensorFlow, PyTorch, MXNet, Caffe, or Keras.
- Experience building large-scale machine learning infrastructure that has been successfully delivered to customers.
- 3+ years of experience developing cloud software services and an understanding of design for scalability, performance, and reliability.
- Ability to prototype and evaluate applications and interaction methodologies.

Responsibilities:
- Delivers high-quality software, on time, following Broadridge SDLC processes.
- Works within and across teams to design, develop, test, implement, and support technical solutions across a full stack of development tools and technologies.
- Ensures technical and security best practices, along with Broadridge standards, are adhered to on a continuous basis.
- Provides technical leadership to developers in a variety of duties, including data design, coding, testing, technical design, development, and troubleshooting.
- Handles technical implementation, code quality, and the overall productivity of the development team.
- Owns, communicates, and sets expectations for the day-to-day work of the developers (off-shore and on-shore).
- Plays a lead role in meetings between Business, QA, and Infrastructure teams to provide technical leadership/guidance and help coordinate the removal of impediments/roadblocks.
- Provides estimates for all priority and non-priority projects, along with recommended scope or schedule changes based on capacity and unforeseen challenges.
- Identifies potential issues while staying focused on identified priorities.
- Assists in the hiring process to hire top talent and in the performance reviews of team members, identifying areas of improvement.
- Inspires, mentors, and trains the development team on modern technologies continuously.
- Works with senior leaders of the development team to architect solutions with technical vision, maintainability, and total cost of ownership in mind.
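As a small, hedged illustration of the Python REST-API work referenced in the qualifications, here is a minimal Flask sketch; the route, resource name, and in-memory store are hypothetical and not taken from the posting.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store standing in for a real data-platform backend (illustrative only).
_jobs = {}


@app.route("/jobs/<job_id>", methods=["GET"])
def get_job(job_id):
    """Return a stored job record, or 404 if it does not exist."""
    if job_id not in _jobs:
        return jsonify({"error": "not found"}), 404
    return jsonify(_jobs[job_id])


@app.route("/jobs/<job_id>", methods=["PUT"])
def put_job(job_id):
    """Create or replace a job record from the JSON request body."""
    _jobs[job_id] = request.get_json(force=True)
    return jsonify({"status": "stored", "id": job_id}), 201


if __name__ == "__main__":
    app.run(port=5000, debug=True)
```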

Posted 2 weeks ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Noida

Work from Office

Responsibilities:
- Data Exploration and Insights: Conduct continuous data exploration and analysis to identify opportunities for enhancing data matching logic, including fuzzy logic, and for improving overall data quality within the SCI solution. This includes working with large datasets from various sources, including Excel files and databases.
- Data Quality Improvement: Perform analyses aimed specifically at improving data quality within the SCI system: identify data quality issues, propose solutions, and implement improvements.
- Weekly Playback and Collaboration: Participate in weekly playback sessions, using Jupyter Notebook to demonstrate data insights and analysis. Incorporate new explorations and analyses based on feedback from the working group and prioritized tasks.
- Project Scaling and Support: Contribute to scaling the SCI project by supporting data acquisition, cleansing, and validation for new markets, including pre-requisites for batch ingestion and post-ingestion analysis and validation of SCI records.
- Data Analysis and Validation: Perform thorough data analysis and validation of SCI records after batch ingestion; proactively identify insights and implement solutions to improve data quality.
- Stakeholder Collaboration: Coordinate with business stakeholders to facilitate manual validation of records flagged for manual intervention; communicate findings and recommendations clearly and effectively.

Technical Requirements:
- 5+ years of experience as a Data Scientist.
- Strong proficiency in Python and SQL.
- Extensive experience using Jupyter Notebook for data analysis and visualization.
- Working knowledge of data matching techniques, including fuzzy logic.
- Experience working with large datasets from various sources (Excel, databases, etc.).
- Solid understanding of data quality principles and methodologies.

Skills: SQL, Machine Learning, Data Analysis, Jupyter Notebook, Data Cleansing, Fuzzy Logic, Python, Data Quality Improvement, Data Validation, Data Acquisition, Communication and Collaboration, Problem-solving and Analytical Skills.

Preferred Qualifications:
- Experience with specific data quality tools and techniques.
- Familiarity with cloud computing platforms (e.g., AWS, Azure, GCP).
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Knowledge of statistical modeling and machine learning algorithms.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

We are looking for a highly skilled and experienced Senior Data Engineer to take charge of developing complex compliance and supervision models. Your expertise in cloud-based infrastructure, ETL pipeline development, and the financial domain will be crucial in creating robust, scalable, and efficient solutions.

As a Senior Data Engineer, your key responsibilities include leading the development of advanced models using AWS services such as EMR, Glue, and Glue Notebooks. You will design, build, and optimize scalable cloud infrastructure solutions, drawing on a minimum of 5 years of experience in cloud infrastructure. Creating, managing, and optimizing ETL pipelines using PySpark for large-scale data processing will also be a core part of your role (a minimal sketch follows below).

In addition, you will build and maintain CI/CD pipelines for deploying and maintaining cloud-based applications, perform detailed data analysis to deliver actionable insights, and collaborate closely with cross-functional teams to ensure alignment with business goals. Operating effectively in agile or hybrid-agile environments and enhancing existing frameworks to support evolving business needs will be key aspects of the role.

To qualify for this position, you must have a minimum of 5 years of experience with Python programming, 5+ years of experience in cloud infrastructure (particularly AWS), 3+ years of experience with PySpark (including usage with EMR or Glue Notebooks), and 3+ years of experience with Apache Airflow for workflow orchestration. A strong understanding of capital markets, financial systems, or prior experience in the financial domain is essential, along with proficiency in cloud-native technologies and frameworks.

Furthermore, familiarity with CI/CD practices and tools such as Jenkins, GitLab CI/CD, or AWS CodePipeline, experience with notebooks for interactive development, excellent problem-solving skills, and strong communication and interpersonal skills are required. The ability to thrive in a fast-paced, dynamic environment is also crucial.

In return, you will receive standard company benefits. Join us at DATAECONOMY and be part of a fast-growing data & analytics company at the forefront of innovation in the industry.
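To illustrate the kind of PySpark ETL work this listing describes, a minimal sketch follows; the S3 paths, column names, and filter logic are assumptions for demonstration, not details from the role.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal ETL: read raw trade data, keep valid rows, aggregate daily totals, write curated output.
spark = SparkSession.builder.appName("compliance-etl-sketch").getOrCreate()

raw = spark.read.parquet("s3://example-bucket/raw/trades/")   # hypothetical input path

curated = (
    raw.filter(F.col("trade_amount") > 0)                     # drop obviously bad records
       .withColumn("trade_date", F.to_date("trade_timestamp"))
       .groupBy("trade_date", "desk")
       .agg(
           F.sum("trade_amount").alias("total_amount"),
           F.count("*").alias("trade_count"),
       )
)

curated.write.mode("overwrite").partitionBy("trade_date").parquet(
    "s3://example-bucket/curated/daily_desk_totals/"          # hypothetical output path
)

spark.stop()
```

On EMR or in a Glue Notebook the SparkSession is typically provided by the environment, so the builder line would usually be replaced by the session the platform hands you.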

Posted 2 weeks ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Pune

Work from Office

Data Scientist

Responsibilities:
- Data Exploration and Insights: Conduct continuous data exploration and analysis to identify opportunities for enhancing data matching logic, including fuzzy logic, and for improving overall data quality within the SCI solution. This includes working with large datasets from various sources, including Excel files and databases.
- Data Quality Improvement: Perform analyses aimed specifically at improving data quality within the SCI system: identify data quality issues, propose solutions, and implement improvements.
- Weekly Playback and Collaboration: Participate in weekly playback sessions, using Jupyter Notebook to demonstrate data insights and analysis. Incorporate new explorations and analyses based on feedback from the working group and prioritized tasks.
- Project Scaling and Support: Contribute to scaling the SCI project by supporting data acquisition, cleansing, and validation for new markets, including pre-requisites for batch ingestion and post-ingestion analysis and validation of SCI records.
- Data Analysis and Validation: Perform thorough data analysis and validation of SCI records after batch ingestion; proactively identify insights and implement solutions to improve data quality.
- Stakeholder Collaboration: Coordinate with business stakeholders to facilitate manual validation of records flagged for manual intervention; communicate findings and recommendations clearly and effectively.

Technical Requirements:
- 5+ years of experience as a Data Scientist.
- Strong proficiency in Python and SQL.
- Extensive experience using Jupyter Notebook for data analysis and visualization.
- Working knowledge of data matching techniques, including fuzzy logic.
- Experience working with large datasets from various sources (Excel, databases, etc.).
- Solid understanding of data quality principles and methodologies.

Skills: SQL, Machine Learning, Data Analysis, Jupyter Notebook, Data Cleansing, Fuzzy Logic, Python, Data Quality Improvement, Data Validation, Data Acquisition, Communication and Collaboration, Problem-solving and Analytical Skills.

Preferred Qualifications:
- Experience with specific data quality tools and techniques.
- Familiarity with cloud computing platforms (e.g., AWS, Azure, GCP).
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Knowledge of statistical modeling and machine learning algorithms.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

11 - 16 Lacs

Ahmedabad

Work from Office

Data Scientist

Responsibilities:
- Data Exploration and Insights: Conduct continuous data exploration and analysis to identify opportunities for enhancing data matching logic, including fuzzy logic, and for improving overall data quality within the SCI solution. This includes working with large datasets from various sources, including Excel files and databases.
- Data Quality Improvement: Perform analyses aimed specifically at improving data quality within the SCI system: identify data quality issues, propose solutions, and implement improvements.
- Weekly Playback and Collaboration: Participate in weekly playback sessions, using Jupyter Notebook to demonstrate data insights and analysis. Incorporate new explorations and analyses based on feedback from the working group and prioritized tasks.
- Project Scaling and Support: Contribute to scaling the SCI project by supporting data acquisition, cleansing, and validation for new markets, including pre-requisites for batch ingestion and post-ingestion analysis and validation of SCI records.
- Data Analysis and Validation: Perform thorough data analysis and validation of SCI records after batch ingestion; proactively identify insights and implement solutions to improve data quality.
- Stakeholder Collaboration: Coordinate with business stakeholders to facilitate manual validation of records flagged for manual intervention; communicate findings and recommendations clearly and effectively.

Technical Requirements:
- 5+ years of experience as a Data Scientist.
- Strong proficiency in Python and SQL.
- Extensive experience using Jupyter Notebook for data analysis and visualization.
- Working knowledge of data matching techniques, including fuzzy logic.
- Experience working with large datasets from various sources (Excel, databases, etc.).
- Solid understanding of data quality principles and methodologies.

Skills: SQL, Machine Learning, Data Analysis, Jupyter Notebook, Data Cleansing, Fuzzy Logic, Python, Data Quality Improvement, Data Validation, Data Acquisition, Communication and Collaboration, Problem-solving and Analytical Skills.

Preferred Qualifications:
- Experience with specific data quality tools and techniques.
- Familiarity with cloud computing platforms (e.g., AWS, Azure, GCP).
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Knowledge of statistical modeling and machine learning algorithms.

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

As a Data Science Manager in the Research and Development (R&D) team at our organization, you will play a crucial role in driving innovation through advanced machine learning and AI algorithms. Your primary responsibility will be conducting applied research, development, and validation of cutting-edge algorithms to address complex real-world problems at scale.

You will collaborate closely with the product team to understand business challenges and product objectives, enabling you to devise creative algorithmic solutions. Your role includes creating prototypes and demonstrations to validate new ideas and transforming research findings into practical innovations in collaboration with AI Engineers and software engineers.

In addition, you will formulate and execute research plans, carry out experiments, document and consolidate results, and potentially publish your work. You will also safeguard intellectual property resulting from R&D efforts by working with relevant teams and external partners. Part of your role involves mentoring junior staff to ensure adherence to established procedures and collaborating with various stakeholders, academic/research partners, and fellow researchers to deliver tangible outcomes.

To excel in this position, you need a strong foundation in computer science principles and proficient skills in analyzing and designing AI/machine learning algorithms. Practical experience in several key areas, such as supervised and unsupervised machine learning, reinforcement learning, deep learning, knowledge-based systems, evolutionary computing, and probabilistic graphical models, is essential. You should also be adept in at least one programming language and have hands-on experience implementing AI/machine learning algorithms using Python or R. Familiarity with tools, frameworks, and libraries like Jupyter/Zeppelin, scikit-learn, matplotlib, pandas, TensorFlow, Keras, and Apache Spark is advantageous.

Ideally, you should have 2-5 years of applied research experience solving real-world problems using AI/machine learning techniques. A publication in a reputable AI/machine learning conference or journal, or patents in the field, would be beneficial. Experience contributing to open-source projects in the AI/machine learning domain will be considered a strong asset.

If you are excited about this challenging opportunity, please refer to Job Code DSM_TVM for the position based in Trivandrum. For further details, feel free to reach out to us at recruitment@flytxt.com.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

11 - 16 Lacs

Hyderabad

Work from Office

Responsibilities:
- Data Exploration and Insights: Conduct continuous data exploration and analysis to identify opportunities for enhancing data matching logic, including fuzzy logic, and for improving overall data quality within the SCI solution. This includes working with large datasets from various sources, including Excel files and databases.
- Data Quality Improvement: Perform analyses aimed specifically at improving data quality within the SCI system: identify data quality issues, propose solutions, and implement improvements.
- Weekly Playback and Collaboration: Participate in weekly playback sessions, using Jupyter Notebook to demonstrate data insights and analysis. Incorporate new explorations and analyses based on feedback from the working group and prioritized tasks.
- Project Scaling and Support: Contribute to scaling the SCI project by supporting data acquisition, cleansing, and validation for new markets, including pre-requisites for batch ingestion and post-ingestion analysis and validation of SCI records.
- Data Analysis and Validation: Perform thorough data analysis and validation of SCI records after batch ingestion; proactively identify insights and implement solutions to improve data quality.
- Stakeholder Collaboration: Coordinate with business stakeholders to facilitate manual validation of records flagged for manual intervention; communicate findings and recommendations clearly and effectively.

Technical Requirements:
- 5+ years of experience as a Data Scientist.
- Strong proficiency in Python and SQL.
- Extensive experience using Jupyter Notebook for data analysis and visualization.
- Working knowledge of data matching techniques, including fuzzy logic.
- Experience working with large datasets from various sources (Excel, databases, etc.).
- Solid understanding of data quality principles and methodologies.

Skills: SQL, Machine Learning, Data Analysis, Jupyter Notebook, Data Cleansing, Fuzzy Logic, Python, Data Quality Improvement, Data Validation, Data Acquisition, Communication and Collaboration, Problem-solving and Analytical Skills.

Preferred Qualifications:
- Experience with specific data quality tools and techniques.
- Familiarity with cloud computing platforms (e.g., AWS, Azure, GCP).
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Knowledge of statistical modeling and machine learning algorithms.

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Tamil Nadu

On-site

Requirements:
- Expert in Python and SQL for data exploration and modeling.
- Strong background in statistical modeling, segmentation, and A/B testing.
- Experience with scikit-learn, pandas, and Jupyter, plus optional tools like Tableau, Looker, or dashboards.
- Comfortable working with campaign goals (ROAS, CPA, sales lift) and developing actionable audience insights.

Responsibilities:
- Develop insights, audience scoring, and experiment frameworks to identify the highest-performing combinations of segments, channels, and strategies.
- Design audience-insights pipelines using device/app/geo/time/segment data to surface high-performing cohorts.
- Build LTV, propensity, and uplift models to influence media planning and bidding strategies (see the illustrative sketch below).
- Account for seasonal and vertical-specific trends to enrich insights.
- Define key metrics for optimization (e.g., ROAS, CPA, foot-traffic lift).
- Work closely with traders and marketers to translate business goals into data strategies.

Skills: SQL, Looker, Python, segmentation, statistical modeling, pandas, Tableau, Jupyter, scikit-learn, A/B testing.
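As a rough illustration of the propensity modelling this listing mentions, here is a minimal scikit-learn sketch on synthetic data; the feature names, label construction, and model choice are assumptions for demonstration, not requirements from the posting.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic audience data standing in for device/app/geo/time/segment features.
rng = np.random.default_rng(42)
n = 5_000
X = pd.DataFrame({
    "sessions_7d": rng.poisson(3, n),
    "avg_order_value": rng.gamma(2.0, 20.0, n),
    "is_weekend_user": rng.integers(0, 2, n),
})
# Toy "converted" label loosely driven by the features.
logits = 0.3 * X["sessions_7d"] + 0.01 * X["avg_order_value"] + 0.5 * X["is_weekend_user"] - 2.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]          # propensity-to-convert scores
print("AUC:", round(roc_auc_score(y_test, scores), 3))
```

In practice the scores would feed audience segmentation or bidding decisions, with the model choice and features driven by the campaign goals (ROAS, CPA, lift) named above.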

Posted 3 weeks ago

Apply

5.0 - 10.0 years

11 - 16 Lacs

Bengaluru

Work from Office

Responsibilities:
- Data Exploration and Insights: Conduct continuous data exploration and analysis to identify opportunities for enhancing data matching logic, including fuzzy logic, and for improving overall data quality within the SCI solution. This includes working with large datasets from various sources, including Excel files and databases.
- Data Quality Improvement: Perform analyses aimed specifically at improving data quality within the SCI system: identify data quality issues, propose solutions, and implement improvements.
- Weekly Playback and Collaboration: Participate in weekly playback sessions, using Jupyter Notebook to demonstrate data insights and analysis. Incorporate new explorations and analyses based on feedback from the working group and prioritized tasks.
- Project Scaling and Support: Contribute to scaling the SCI project by supporting data acquisition, cleansing, and validation for new markets, including pre-requisites for batch ingestion and post-ingestion analysis and validation of SCI records.
- Data Analysis and Validation: Perform thorough data analysis and validation of SCI records after batch ingestion; proactively identify insights and implement solutions to improve data quality.
- Stakeholder Collaboration: Coordinate with business stakeholders to facilitate manual validation of records flagged for manual intervention; communicate findings and recommendations clearly and effectively.

Technical Requirements:
- 5+ years of experience as a Data Scientist.
- Strong proficiency in Python and SQL.
- Extensive experience using Jupyter Notebook for data analysis and visualization.
- Working knowledge of data matching techniques, including fuzzy logic.
- Experience working with large datasets from various sources (Excel, databases, etc.).
- Solid understanding of data quality principles and methodologies.

Skills: SQL, Machine Learning, Data Analysis, Jupyter Notebook, Data Cleansing, Fuzzy Logic, Python, Data Quality Improvement, Data Validation, Data Acquisition, Communication and Collaboration, Problem-solving and Analytical Skills.

Preferred Qualifications:
- Experience with specific data quality tools and techniques.
- Familiarity with cloud computing platforms (e.g., AWS, Azure, GCP).
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Knowledge of statistical modeling and machine learning algorithms.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

11 - 16 Lacs

Chennai

Work from Office

Responsibilities:
- Data Exploration and Insights: Conduct continuous data exploration and analysis to identify opportunities for enhancing data matching logic, including fuzzy logic, and for improving overall data quality within the SCI solution. This includes working with large datasets from various sources, including Excel files and databases.
- Data Quality Improvement: Perform analyses aimed specifically at improving data quality within the SCI system: identify data quality issues, propose solutions, and implement improvements.
- Weekly Playback and Collaboration: Participate in weekly playback sessions, using Jupyter Notebook to demonstrate data insights and analysis. Incorporate new explorations and analyses based on feedback from the working group and prioritized tasks.
- Project Scaling and Support: Contribute to scaling the SCI project by supporting data acquisition, cleansing, and validation for new markets, including pre-requisites for batch ingestion and post-ingestion analysis and validation of SCI records.
- Data Analysis and Validation: Perform thorough data analysis and validation of SCI records after batch ingestion; proactively identify insights and implement solutions to improve data quality.
- Stakeholder Collaboration: Coordinate with business stakeholders to facilitate manual validation of records flagged for manual intervention; communicate findings and recommendations clearly and effectively.

Technical Requirements:
- 5+ years of experience as a Data Scientist.
- Strong proficiency in Python and SQL.
- Extensive experience using Jupyter Notebook for data analysis and visualization.
- Working knowledge of data matching techniques, including fuzzy logic.
- Experience working with large datasets from various sources (Excel, databases, etc.).
- Solid understanding of data quality principles and methodologies.

Skills: SQL, Machine Learning, Data Analysis, Jupyter Notebook, Data Cleansing, Fuzzy Logic, Python, Data Quality Improvement, Data Validation, Data Acquisition, Communication and Collaboration, Problem-solving and Analytical Skills.

Preferred Qualifications:
- Experience with specific data quality tools and techniques.
- Familiarity with cloud computing platforms (e.g., AWS, Azure, GCP).
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Knowledge of statistical modeling and machine learning algorithms.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

11 - 16 Lacs

Kolkata

Work from Office

Responsibilities:
- Data Exploration and Insights: Conduct continuous data exploration and analysis to identify opportunities for enhancing data matching logic, including fuzzy logic, and for improving overall data quality within the SCI solution. This includes working with large datasets from various sources, including Excel files and databases.
- Data Quality Improvement: Perform analyses aimed specifically at improving data quality within the SCI system: identify data quality issues, propose solutions, and implement improvements.
- Weekly Playback and Collaboration: Participate in weekly playback sessions, using Jupyter Notebook to demonstrate data insights and analysis. Incorporate new explorations and analyses based on feedback from the working group and prioritized tasks.
- Project Scaling and Support: Contribute to scaling the SCI project by supporting data acquisition, cleansing, and validation for new markets, including pre-requisites for batch ingestion and post-ingestion analysis and validation of SCI records.
- Data Analysis and Validation: Perform thorough data analysis and validation of SCI records after batch ingestion; proactively identify insights and implement solutions to improve data quality.
- Stakeholder Collaboration: Coordinate with business stakeholders to facilitate manual validation of records flagged for manual intervention; communicate findings and recommendations clearly and effectively.

Technical Requirements:
- 5+ years of experience as a Data Scientist.
- Strong proficiency in Python and SQL.
- Extensive experience using Jupyter Notebook for data analysis and visualization.
- Working knowledge of data matching techniques, including fuzzy logic.
- Experience working with large datasets from various sources (Excel, databases, etc.).
- Solid understanding of data quality principles and methodologies.

Skills: SQL, Machine Learning, Data Analysis, Jupyter Notebook, Data Cleansing, Fuzzy Logic, Python, Data Quality Improvement, Data Validation, Data Acquisition, Communication and Collaboration, Problem-solving and Analytical Skills.

Preferred Qualifications:
- Experience with specific data quality tools and techniques.
- Familiarity with cloud computing platforms (e.g., AWS, Azure, GCP).
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Knowledge of statistical modeling and machine learning algorithms.

Posted 3 weeks ago

Apply

3.0 - 6.0 years

8 - 17 Lacs

Chennai, Bengaluru, Mumbai (All Areas)

Work from Office

Role & responsibilities
Duration: 6 months + extendable
Job Locations: Any Protiviti

Preferred/mandatory skills:
- Role Focus: Support UAT and data validation during migration from legacy Cloudera to a modern on-prem big data platform.
- Core Tasks: Execute UAT, validate data pipelines (Hive, Impala, Spark, CDSW), perform quality checks, write SQL queries, and document test cases.
- Must-Have Skills: 3+ years in big data UAT/QA, strong SQL, experience with Cloudera tools, data validation, and platform-migration exposure.
- Nice to Have: PySpark, Jupyter, data governance knowledge, telecom or large-enterprise experience.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

0 - 1 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Contractual (Project-Based)
Notice Period: Immediate to 15 days
Fill this form: https://forms.office.com/Pages/ResponsePage.aspx?id=hLjynUM4c0C8vhY4bzh6ZJ5WkWrYFoFOu2ZF3Vr0DXVUQlpCTURUVlJNS0c1VUlPNEI3UVlZUFZMMC4u
Resume: shweta.soni@panthsoftech.com

Posted 1 month ago

Apply

5.0 - 10.0 years

11 - 16 Lacs

Kolkata

Work from Office

Responsibilities:
- Data Exploration and Insights: Conduct continuous data exploration and analysis to identify opportunities for enhancing data matching logic, including fuzzy logic, and for improving overall data quality within the SCI solution. This includes working with large datasets from various sources, including Excel files and databases.
- Data Quality Improvement: Perform analyses aimed specifically at improving data quality within the SCI system: identify data quality issues, propose solutions, and implement improvements.
- Weekly Playback and Collaboration: Participate in weekly playback sessions, using Jupyter Notebook to demonstrate data insights and analysis. Incorporate new explorations and analyses based on feedback from the working group and prioritized tasks.
- Project Scaling and Support: Contribute to scaling the SCI project by supporting data acquisition, cleansing, and validation for new markets, including pre-requisites for batch ingestion and post-ingestion analysis and validation of SCI records.
- Data Analysis and Validation: Perform thorough data analysis and validation of SCI records after batch ingestion; proactively identify insights and implement solutions to improve data quality.
- Stakeholder Collaboration: Coordinate with business stakeholders to facilitate manual validation of records flagged for manual intervention; communicate findings and recommendations clearly and effectively.

Technical Requirements:
- 5+ years of experience as a Data Scientist.
- Strong proficiency in Python and SQL.
- Extensive experience using Jupyter Notebook for data analysis and visualization.
- Working knowledge of data matching techniques, including fuzzy logic.
- Experience working with large datasets from various sources (Excel, databases, etc.).
- Solid understanding of data quality principles and methodologies.

Skills: SQL, Machine Learning, Data Analysis, Jupyter Notebook, Data Cleansing, Fuzzy Logic, Python, Data Quality Improvement, Data Validation, Data Acquisition, Communication and Collaboration, Problem-solving and Analytical Skills.

Preferred Qualifications:
- Experience with specific data quality tools and techniques.
- Familiarity with cloud computing platforms (e.g., AWS, Azure, GCP).
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Knowledge of statistical modeling and machine learning algorithms.

Posted 1 month ago

Apply

5.0 - 10.0 years

11 - 16 Lacs

Mumbai

Work from Office

Responsibilities:
- Data Exploration and Insights: Conduct continuous data exploration and analysis to identify opportunities for enhancing data matching logic, including fuzzy logic, and for improving overall data quality within the SCI solution. This includes working with large datasets from various sources, including Excel files and databases.
- Data Quality Improvement: Perform analyses aimed specifically at improving data quality within the SCI system: identify data quality issues, propose solutions, and implement improvements.
- Weekly Playback and Collaboration: Participate in weekly playback sessions, using Jupyter Notebook to demonstrate data insights and analysis. Incorporate new explorations and analyses based on feedback from the working group and prioritized tasks.
- Project Scaling and Support: Contribute to scaling the SCI project by supporting data acquisition, cleansing, and validation for new markets, including pre-requisites for batch ingestion and post-ingestion analysis and validation of SCI records.
- Data Analysis and Validation: Perform thorough data analysis and validation of SCI records after batch ingestion; proactively identify insights and implement solutions to improve data quality.
- Stakeholder Collaboration: Coordinate with business stakeholders to facilitate manual validation of records flagged for manual intervention; communicate findings and recommendations clearly and effectively.

Technical Requirements:
- 5+ years of experience as a Data Scientist.
- Strong proficiency in Python and SQL.
- Extensive experience using Jupyter Notebook for data analysis and visualization.
- Working knowledge of data matching techniques, including fuzzy logic.
- Experience working with large datasets from various sources (Excel, databases, etc.).
- Solid understanding of data quality principles and methodologies.

Skills: SQL, Machine Learning, Data Analysis, Jupyter Notebook, Data Cleansing, Fuzzy Logic, Python, Data Quality Improvement, Data Validation, Data Acquisition, Communication and Collaboration, Problem-solving and Analytical Skills.

Preferred Qualifications:
- Experience with specific data quality tools and techniques.
- Familiarity with cloud computing platforms (e.g., AWS, Azure, GCP).
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Knowledge of statistical modeling and machine learning algorithms.

Posted 1 month ago

Apply

5.0 - 10.0 years

11 - 16 Lacs

Bengaluru

Work from Office

Data Scientist

Responsibilities:
- Data Exploration and Insights: Conduct continuous data exploration and analysis to identify opportunities for enhancing data matching logic, including fuzzy logic, and for improving overall data quality within the SCI solution. This includes working with large datasets from various sources, including Excel files and databases.
- Data Quality Improvement: Perform analyses aimed specifically at improving data quality within the SCI system: identify data quality issues, propose solutions, and implement improvements.
- Weekly Playback and Collaboration: Participate in weekly playback sessions, using Jupyter Notebook to demonstrate data insights and analysis. Incorporate new explorations and analyses based on feedback from the working group and prioritized tasks.
- Project Scaling and Support: Contribute to scaling the SCI project by supporting data acquisition, cleansing, and validation for new markets, including pre-requisites for batch ingestion and post-ingestion analysis and validation of SCI records.
- Data Analysis and Validation: Perform thorough data analysis and validation of SCI records after batch ingestion; proactively identify insights and implement solutions to improve data quality.
- Stakeholder Collaboration: Coordinate with business stakeholders to facilitate manual validation of records flagged for manual intervention; communicate findings and recommendations clearly and effectively.

Technical Requirements:
- 5+ years of experience as a Data Scientist.
- Strong proficiency in Python and SQL.
- Extensive experience using Jupyter Notebook for data analysis and visualization.
- Working knowledge of data matching techniques, including fuzzy logic.
- Experience working with large datasets from various sources (Excel, databases, etc.).
- Solid understanding of data quality principles and methodologies.

Skills: SQL, Machine Learning, Data Analysis, Jupyter Notebook, Data Cleansing, Fuzzy Logic, Python, Data Quality Improvement, Data Validation, Data Acquisition, Communication and Collaboration, Problem-solving and Analytical Skills.

Preferred Qualifications:
- Experience with specific data quality tools and techniques.
- Familiarity with cloud computing platforms (e.g., AWS, Azure, GCP).
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Knowledge of statistical modeling and machine learning algorithms.

Posted 1 month ago

Apply

5.0 - 10.0 years

11 - 16 Lacs

Chennai

Work from Office

Responsibilities:
- Data Exploration and Insights: Conduct continuous data exploration and analysis to identify opportunities for enhancing data matching logic, including fuzzy logic, and for improving overall data quality within the SCI solution. This includes working with large datasets from various sources, including Excel files and databases.
- Data Quality Improvement: Perform analyses aimed specifically at improving data quality within the SCI system: identify data quality issues, propose solutions, and implement improvements.
- Weekly Playback and Collaboration: Participate in weekly playback sessions, using Jupyter Notebook to demonstrate data insights and analysis. Incorporate new explorations and analyses based on feedback from the working group and prioritized tasks.
- Project Scaling and Support: Contribute to scaling the SCI project by supporting data acquisition, cleansing, and validation for new markets, including pre-requisites for batch ingestion and post-ingestion analysis and validation of SCI records.
- Data Analysis and Validation: Perform thorough data analysis and validation of SCI records after batch ingestion; proactively identify insights and implement solutions to improve data quality.
- Stakeholder Collaboration: Coordinate with business stakeholders to facilitate manual validation of records flagged for manual intervention; communicate findings and recommendations clearly and effectively.

Technical Requirements:
- 5+ years of experience as a Data Scientist.
- Strong proficiency in Python and SQL.
- Extensive experience using Jupyter Notebook for data analysis and visualization.
- Working knowledge of data matching techniques, including fuzzy logic.
- Experience working with large datasets from various sources (Excel, databases, etc.).
- Solid understanding of data quality principles and methodologies.

Skills: SQL, Machine Learning, Data Analysis, Jupyter Notebook, Data Cleansing, Fuzzy Logic, Python, Data Quality Improvement, Data Validation, Data Acquisition, Communication and Collaboration, Problem-solving and Analytical Skills.

Preferred Qualifications:
- Experience with specific data quality tools and techniques.
- Familiarity with cloud computing platforms (e.g., AWS, Azure, GCP).
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Knowledge of statistical modeling and machine learning algorithms.

Posted 1 month ago

Apply

1.0 - 2.0 years

4 - 6 Lacs

Gurugram

Work from Office

Role & responsibilities
- Deliver subject-specific lectures, case studies, simulations, and workshops to PGDM/MBA/BBA/BCA students.
- Develop course material, lesson plans, and hands-on activities aligned with AICTE/MDU and industry standards.
- Use tools like Excel, Python, Tableau, R, SCM tools (for Operations), and simulation platforms.
- Mentor students on projects, internships, and live case assignments.
- Evaluate students' performance through continuous internal assessments.
- Organize skill labs, certification support sessions, and corporate knowledge-exchange events.
- Willing to take on administrative responsibilities and additional roles as well.

Subject-wise Requirements:
1. Business Analytics (Full-Time/Part-Time): Teach tools like Excel, Power BI, Python (basic), SQL, and analytics frameworks; deliver practical sessions on data cleaning, dashboards, and decision-support models; should be an all-rounder across Business Analytics subject areas like Market Microstructure, Predictive Business Analytics, and Econometrics; eager to learn and teach OS software.
2. Finance (with mandatory strong Excel exposure): Teach Financial Modelling, Valuation, MIS, and Forecasting using Excel; must integrate Tally/Zoho/FinTech basics in classroom simulations; should be an all-rounder across Finance subject areas like Cost Accounting, GST, and Income Tax; eager to learn and teach OS software.
3. Information Technology (AI/ML/Python/Data Structures/C++): Teach programming foundations with Python and C++, with exposure to ML/AI concepts; hands-on with Jupyter, Colab, GitHub, and coding practices; should be an all-rounder in IT concepts; eager to learn and teach.
4. Operations & SCM (Transportation, Warehouse, Supply Chain Analytics): Teach SCM tools, TMS platforms, inventory modeling, and Lean/Six Sigma basics; practical exposure to case-based supply-chain problem-solving.

Posted 1 month ago

Apply

3.0 - 7.0 years

3 - 7 Lacs

Bengaluru, Karnataka, India

On-site

Data Analyst / Machine Learning Specialist
- Having meetings with team members regarding projects.
- Collecting and interpreting data.
- Automating and integrating processes.
- Researching solutions to overcome data analytics challenges.
- Developing complex mathematical models that integrate business rules and requirements.
- Creating machine learning models.
- Communicating and meeting with engineers, IT teams, and other interested parties.
- Sharing complex ideas verbally and visually in an understandable manner with non-technical stakeholders.

Experience in technologies like: Python, Jupyter, machine learning algorithms, SQL, data visualization, and statistical or mathematical software.

Posted 1 month ago

Apply

3.0 - 7.0 years

3 - 7 Lacs

Hyderabad, Telangana, India

On-site

Data Analyst / Machine Learning Specialist
- Having meetings with team members regarding projects.
- Collecting and interpreting data.
- Automating and integrating processes.
- Researching solutions to overcome data analytics challenges.
- Developing complex mathematical models that integrate business rules and requirements.
- Creating machine learning models.
- Communicating and meeting with engineers, IT teams, and other interested parties.
- Sharing complex ideas verbally and visually in an understandable manner with non-technical stakeholders.

Experience in technologies like: Python, Jupyter, machine learning algorithms, SQL, data visualization, and statistical or mathematical software.

Posted 1 month ago

Apply
Page 1 of 2

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click


Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies