
1441 Matplotlib Jobs

Set up a job alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Data Analytics; Data exploration and visualization; trending and forecasting; root cause analysis; user training and support; data presentation and storytelling; measure performance against business metrics and goals; project support

Grade: 11

Please note that the job will close at 12 a.m. on the posting close date, so please submit your application prior to the close date.

Job Title: Data Analyst
Location: Bengaluru
Department: Customer & Retail Analytics
Employment Type: Full-time

About FedEx: FedEx provides customers and businesses worldwide with a broad portfolio of transportation, e-commerce, and business services, and also serves customers through our retail presence. We foster an environment of growth and learning, where innovative ideas are encouraged and diverse teams are valued for their contributions. As part of our commitment to excellence, we're seeking a highly skilled Data Analyst to join our analytics team.

Job Summary: As a Data Analyst, you will play a key role in gathering, processing, and analyzing data to drive informed decision-making and actionable insights for FedEx. Your quantitative expertise and business acumen will help develop analytical solutions to improve operations, customer experience, and business outcomes. This role involves working closely with cross-functional teams to develop meaningful data analyses, insight summaries and visualizations, and communicate findings that guide strategy and operational decisions.

Key Responsibilities:
- Collect, analyze, and interpret complex data sets using Python and SQL to support business objectives.
- Collaborate with stakeholders to understand business needs, formulate analytic solutions, and provide actionable insights.
- Develop and maintain data models and reports to track key performance indicators (KPIs) and business metrics.
- Create meaningful data visualizations to communicate findings, trends, and actionable insights to non-technical stakeholders.
- Conduct exploratory data analysis and identify patterns, trends, and opportunities for business improvement.
- Support data quality initiatives, ensuring accuracy and consistency across data sources.
- Utilize statistical and quantitative techniques to support problem-solving and business optimization efforts.

Mandatory Skills (what we are looking for):
- Python: proficiency in data manipulation, data analysis libraries (Pandas, NumPy), and data visualization libraries (Matplotlib, Seaborn).
- SQL: strong command of SQL for data extraction, transformation, and complex queries.
- Business Acumen: ability to understand business context and objectives, aligning analytics with organizational goals.
- Quantitative Aptitude: strong analytical and problem-solving skills, with keen attention to detail.
- Data Visualization: basic skills in data visualization to effectively communicate insights.
- Statistical Analysis: foundational understanding of statistical methods (e.g., regression, hypothesis testing).
- Communication Skills: ability to distill complex data insights into clear, actionable recommendations for stakeholders.

Good-to-Have Skills:
- Power BI: experience with Power BI for data visualization and report development.
- Machine Learning Fundamentals: basic knowledge of machine learning concepts for deeper pattern analysis.
- Advanced Excel: skills in advanced Excel functions, pivot tables, and data cleaning for quick analyses.

Qualifications:
- Bachelor's degree in Data Science, Statistics, Mathematics, Computer Science, Economics, or a related field; a Master's degree is a plus.
- 3+ years of experience in data analysis, preferably within the logistics, supply chain, or transportation industry.
- Excellent communication and interpersonal skills, with the ability to explain complex data insights to stakeholders.
- Strong organizational skills and a collaborative mindset.

Join Our Team: If you are passionate about using data analytics to drive business impact and enhance the customer experience, we invite you to join our team at FedEx. Apply now to be part of a dynamic, innovative environment where your skills and expertise will make a difference.

Application Process: To apply for this position, please submit your resume and a cover letter detailing your relevant experience and qualifications. Qualified candidates will be contacted for further evaluation.

Analytical Skills; Accuracy & Attention to Detail; Numerical Skills; Planning & Organizing Skills; Presentation Skills; Statistical Knowledge; Data Modeling and Visualization Skills

FedEx was built on a philosophy that puts people first, one we take seriously. We are an equal opportunity/affirmative action employer, and we are committed to a diverse, equitable, and inclusive workforce in which we enforce fair treatment and provide growth opportunities for everyone. All qualified applicants will receive consideration for employment regardless of age, race, color, national origin, genetics, religion, gender, marital status, pregnancy (including childbirth or a related medical condition), physical or mental disability, or any other characteristic protected by applicable laws, regulations, and ordinances.

Our Company: FedEx is one of the world's largest express transportation companies and has consistently been selected as one of the top 10 World's Most Admired Companies by "Fortune" magazine. Every day FedEx delivers for its customers with transportation and business solutions, serving more than 220 countries and territories around the globe. We can serve this global network due to our outstanding team of FedEx team members, who are tasked with making every FedEx experience outstanding.

Our Philosophy: The People-Service-Profit philosophy (P-S-P) describes the principles that govern every FedEx decision, policy, or activity. FedEx takes care of our people; they, in turn, deliver the impeccable service demanded by our customers, who reward us with the profitability necessary to secure our future. The essential element in making the People-Service-Profit philosophy such a positive force for the company is how we close the circle: we return these profits back into the business and invest back in our people. Our success in the industry is attributed to our people. Through our P-S-P philosophy, we have a work environment that encourages team members to be innovative in delivering the highest possible quality of service to our customers. We care for their well-being and value their contributions to the company.

Our Culture: Our culture is important for many reasons, and we intentionally bring it to life through our behaviors, actions, and activities in every part of the world. The FedEx culture and values have been a cornerstone of our success and growth since we began in the early 1970s. While other companies can copy our systems, infrastructure, and processes, our culture makes us unique and is often a differentiating factor as we compete and grow in today's global marketplace.
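The Pandas/SQL skill set this posting describes largely comes down to KPI roll-ups of the kind listed under Key Responsibilities. A minimal sketch of such an analysis, using synthetic data and hypothetical column names (nothing here comes from FedEx):

```python
import numpy as np
import pandas as pd

# Hypothetical shipment records -- all column names are illustrative.
rng = np.random.default_rng(0)
shipments = pd.DataFrame({
    "region": rng.choice(["North", "South", "East", "West"], size=200),
    "transit_days": rng.integers(1, 8, size=200),
    "on_time": rng.random(200) < 0.9,
})

# KPI roll-up: on-time delivery rate and average transit time per region.
kpis = (
    shipments.groupby("region")
    .agg(on_time_rate=("on_time", "mean"),
         avg_transit_days=("transit_days", "mean"),
         shipments=("on_time", "size"))
    .round(2)
)
print(kpis)
```

The same aggregation maps directly onto a SQL `GROUP BY region` with `AVG` and `COUNT`, which is why postings like this pair the two skills.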

Posted 1 month ago


2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description: Potential candidates should have excellent depth and breadth of knowledge in machine learning, data mining, and statistical modeling. They should be able to translate a business problem into an analytical problem; identify the relevant data sets needed to address it; recommend, implement, and validate the best-suited analytical algorithm(s); and generate and deliver insights to stakeholders. Candidates are expected to regularly refer to research papers and stay at the cutting edge with respect to algorithms, tools, and techniques. The role is that of an individual contributor; however, the candidate is expected to work in project teams of 2 to 3 people and interact with business partners on a regular basis.

Key Responsibilities:
- Understand business requirements and analyze datasets to determine suitable approaches to meet analytic business needs and support data-driven decision-making.
- Design and implement data analysis and ML models, hypotheses, algorithms, and experiments to support data-driven decision-making.
- Apply analytics techniques such as data mining, predictive and prescriptive modeling, mathematics, statistics, advanced analytics, and machine learning models and algorithms to analyze data and uncover meaningful patterns, relationships, and trends.
- Design efficient data loading, data augmentation, and data analysis techniques to enhance the accuracy and robustness of data science and machine learning models, including scalable models suitable for automation.
- Research, study, and stay updated in the domain of data science, machine learning, and analytics tools and techniques; continuously identify avenues for enhancing analysis efficiency, accuracy, and robustness.

Minimum Qualifications:
- Bachelor's degree in Computer Science, Operational Research, Statistics, Applied Mathematics, or any other engineering discipline.
- 2+ years of hands-on experience in Python programming for data analysis and machine learning, with libraries such as NumPy, Pandas, Matplotlib, Plotly, Scikit-learn, TensorFlow, PyTorch, NLTK, spaCy, and Gensim.
- 2+ years of experience with both supervised and unsupervised machine learning techniques.
- 2+ years of experience with data analysis and visualization using Python packages such as Pandas, NumPy, Matplotlib, and Seaborn, or data visualization tools like Dash or QlikSense.
- 1+ years of experience with SQL and relational databases.
- Ability to create visualizations that connect disparate data, find patterns, and tell engaging stories, including both scientific and geographic visualization using software such as Power BI.

Preferred Qualifications:
- MS/PhD in Computer Science, Operational Research, Statistics, Applied Mathematics, or any other engineering discipline; PhD strongly preferred.
- Experience working with Google Cloud Platform (GCP) services, leveraging its capabilities for ML model development and deployment.
- Experience with Git and GitHub for version control and collaboration.
- Besides Python, familiarity with at least one additional programming language (e.g., C/C++/Java).
- Strong background in mathematical concepts relating to probabilistic models, conditional probability, numerical methods, linear algebra, and the under-the-hood details of neural networks.
- Experience working with large language models such as GPT-4, Google PaLM, Llama-2, etc.
- Excellent problem-solving, communication, and data presentation skills.
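For readers gauging the supervised-learning experience this posting asks for, a minimal scikit-learn sketch on synthetic data (the data and every name below are illustrative, not from the employer):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Synthetic binary-classification data with a known linear boundary.
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Scale features, then fit a logistic-regression classifier.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The train/test split and pipeline pattern shown here is the standard way to keep preprocessing and model evaluation reproducible, which is what "validate the best-suited algorithm" amounts to in practice.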

Posted 1 month ago


2.0 years

2 - 3 Lacs

Cochin

On-site

Job Description: A Data Science Trainer designs and delivers training programs to educate individuals and teams on data science concepts and techniques. They are responsible for creating and delivering engaging, effective training content that helps learners develop their data science skills.

Responsibilities:
- Design and develop training programs and curricula for data science concepts and techniques.
- Deliver training sessions to individuals and teams, both in person and online.
- Create and manage training materials such as presentations, tutorials, and exercises.
- Monitor and evaluate the effectiveness of training programs.
- Continuously update training materials and curricula to reflect the latest trends and best practices in data science.
- Provide one-on-one coaching and mentoring to learners.

Requirements:
- A degree in a relevant field such as computer science, data science, statistics, or mathematics.
- Strong understanding of data science concepts and techniques.
- Experience with programming languages such as Python, R, and SQL.
- Strong presentation and communication skills.
- Experience in training and/or teaching.
- Experience with data visualization tools such as Tableau, Power BI, or Matplotlib is a plus.
- Knowledge of data science frameworks such as Scikit-learn, TensorFlow, and Keras is a plus.

The role requires a person who is passionate about teaching, has a solid understanding of data science, and can adapt to the needs of the learners. They must be able to deliver training programs in an engaging and effective way, and continuously update the training materials to reflect the latest trends and best practices in data science.

Job Types: Full-time, Permanent
Pay: ₹20,000.00 - ₹30,000.00 per month
Schedule: Day shift
Education: Master's (Preferred)
Experience: Data scientist: 2 years (Preferred)
Work Location: In person
Expected Start Date: 01/07/2025

Posted 1 month ago


1.0 years

0 - 1 Lacs

Delhi

On-site

Job Title: Data Science Intern

About the Role: We are looking for a motivated and detail-oriented Data Science Intern to join our dynamic team. This internship offers a unique opportunity to work on real-world data problems, gain hands-on experience with cutting-edge tools and technologies, and contribute to impactful projects.

Key Responsibilities:
- Collect, clean, and preprocess large datasets from various sources.
- Perform exploratory data analysis to uncover insights and patterns.
- Build, test, and validate predictive models using statistical and machine learning techniques.
- Assist in developing data visualizations and dashboards to communicate findings.
- Collaborate with cross-functional teams to understand business needs and deliver data-driven solutions.
- Document processes, methodologies, and results clearly.

Requirements:
- Pursuing or recently completed a degree in Computer Science, Data Science, Statistics, Mathematics, or a related field.
- Basic understanding of Python, R, or similar programming languages.
- Familiarity with data analysis libraries (Pandas, NumPy) and visualization tools (Matplotlib, Seaborn, Power BI, or Tableau).
- Knowledge of machine learning concepts is a plus.
- Strong analytical, problem-solving, and communication skills.
- Ability to work independently and as part of a team.

Preferred Qualifications:
- Experience with SQL or NoSQL databases.
- Exposure to cloud platforms like AWS, GCP, or Azure.
- Familiarity with version control (Git/GitHub).

Job Type: Internship
Contract length: 6 months
Pay: ₹8,000.00 - ₹10,000.00 per month
Schedule: Day shift, Monday to Friday, morning shift
Education: Bachelor's (Preferred)
Experience: AI: 1 year (Required); ML: 1 year (Required)
Work Location: In person

Posted 1 month ago


5.0 years

0 Lacs

Calcutta

Remote

Job Title: Data Science Trainer
Location: [Remote/On-site/Hybrid]
Job Type: [Full-time / Part-time / Contract / Freelance]
Experience Level: [Mid-level / Senior]
Reporting To: Head of Training / Program Manager

Job Summary: We are seeking a skilled and passionate Data Science Trainer to deliver high-quality instruction in data science, machine learning, and AI. The ideal candidate will have strong theoretical knowledge and hands-on experience in real-world data projects. You will be responsible for designing curriculum, delivering training sessions, and mentoring learners across varying skill levels.

Key Responsibilities:
Deliver engaging and interactive training sessions in:
- Python for Data Science
- Statistics & Probability
- Machine Learning & Deep Learning
- Data Visualization (e.g., Tableau, Power BI, Matplotlib, Seaborn)
- Data Manipulation (Pandas, NumPy)
- Tools & Platforms (Jupyter, Google Colab, Git, AWS/GCP basics)
In addition:
- Design and update training materials, assignments, case studies, and assessments.
- Provide one-on-one mentoring and guidance to learners.
- Evaluate learner progress and provide constructive feedback.
- Keep the curriculum updated with the latest industry trends and technologies.
- Conduct code reviews and support learners with debugging.
- Participate in webinars, workshops, and online/offline community building.

Requirements:
- Bachelor's/Master's degree in Data Science, Computer Science, Statistics, or a related field.
- 5+ years of experience in data science or related domains.
- Prior experience in teaching or mentoring is a strong advantage.
- Strong command of Python, machine learning algorithms, and data processing libraries.
- Excellent communication and presentation skills.
- Ability to explain complex concepts in a clear and engaging way.

Job Type: Full-time
Pay: ₹10,915.53 - ₹59,065.16 per month
Schedule: Morning shift
Work Location: In person

Posted 1 month ago


7.0 years

0 Lacs

India

Remote

Job Title: Data Scientist – Demand Forecasting
Location: Remote
Experience Level: 4–7 Years
Employment Type: Contractual

About the Role: We are seeking a skilled and data-driven Data Scientist – Demand Forecasting to join our analytics team. In this role, you will be responsible for designing and implementing scalable, data-informed solutions that guide critical business decisions, including inventory management, workforce planning, and financial forecasting. You will collaborate closely with cross-functional teams, applying your expertise in machine learning and statistical modeling to optimize forecasting accuracy and business efficiency. The ideal candidate is curious, analytical, and a strong communicator who thrives on translating complex data into actionable insights.

Key Responsibilities:
- Develop and enhance demand forecasting models using machine learning and statistical techniques.
- Work closely with engineering, supply chain, and business stakeholders to deliver data-driven forecasting solutions.
- Conduct feature engineering, hyperparameter tuning, and rigorous model evaluation.
- Use Python and SQL to build and maintain reproducible, scalable code for data analysis and model deployment.
- Run experiments to test forecasting improvements and validate model performance with historical and real-time data.
- Translate technical concepts and model behavior into clear, actionable insights for non-technical stakeholders.
- Support deployment of algorithms into production systems and build data pipelines for automation.
- Continuously explore new data sources and methodologies to improve forecast accuracy.
- Communicate findings and recommendations through dashboards, presentations, and reports.
- Travel up to 20% internationally, if required.

Required Qualifications:
- Master's degree in a quantitative field (e.g., Data Science, Statistics, Mathematics, Physics, Computer Science, or Engineering).
- 4–7 years of hands-on experience in data science, analytics, or a similar analytical role.
- Proficiency in Python (Pandas, NumPy, Scikit-learn, etc.) and SQL; experience with big data tools (e.g., Hadoop, Hive, Spark, or Scala) is a plus.
- Strong grasp of time-series forecasting techniques and statistical modeling.
- Solid skills in feature engineering, model evaluation, and hyperparameter optimization.
- Experience building production-ready, maintainable, and tested code.
- Ability to clearly communicate data assumptions, modeling approaches, and findings to both technical and non-technical stakeholders.
- Knowledge of data pipelines and integrating ML models into production systems.
- Collaborative mindset with strong problem-solving skills and the ability to work independently.

Preferred Qualifications:
- Experience in supply chain or logistics-related forecasting.
- Strong communication and stakeholder management skills.
- Familiarity with data visualization tools and libraries (e.g., Matplotlib, Seaborn, Plotly, Power BI, Tableau).
- Knowledge of the Microsoft Azure cloud platform.
- Experience designing and building APIs for model integration.

Why Join Us?
- Work on impactful, real-world business challenges.
- Join a collaborative and forward-thinking team.
- Enjoy flexibility with remote working options.
- Engage in continuous learning and cross-domain exposure.
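The time-series feature engineering this role centers on usually starts with lag and rolling-window features plus a seasonal-naive baseline to beat. A small sketch with made-up demand data (the series and column names are illustrative only):

```python
import numpy as np
import pandas as pd

# Hypothetical daily demand with a weekly seasonal pattern plus noise.
idx = pd.date_range("2024-01-01", periods=120, freq="D")
noise = np.random.default_rng(1).normal(0, 2, 120)
demand = pd.Series(
    100 + 10 * np.sin(2 * np.pi * idx.dayofweek / 7) + noise,
    index=idx, name="demand")

# Classic lag/rolling features for a demand-forecasting model.
features = pd.DataFrame({
    "lag_1": demand.shift(1),                        # yesterday's demand
    "lag_7": demand.shift(7),                        # same weekday last week
    "rolling_mean_7": demand.shift(1).rolling(7).mean(),  # trailing weekly mean
}).dropna()
target = demand.loc[features.index]

# Seasonal-naive baseline: predict last week's value; score with MAPE.
mape = (target.sub(features["lag_7"]).abs() / target).mean() * 100
print(f"seasonal-naive MAPE: {mape:.1f}%")
```

Any ML model the role builds would be evaluated against a baseline like this one; shifting before rolling keeps the features free of target leakage.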

Posted 1 month ago


0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Experience: 15 to 23 years

Tools:
- Programming Languages: proficiency in Python for data analysis, statistical modeling, and machine learning.
- Machine Learning Libraries: hands-on experience with machine learning libraries such as scikit-learn, Hugging Face, TensorFlow, and PyTorch.
- Statistical Analysis: strong understanding of statistical techniques and their application in data analysis.
- Data Manipulation and Analysis: expertise in data manipulation and analysis using Pandas and NumPy.
- Database Technologies: familiarity with vector databases (e.g., ChromaDB, Pinecone) and with SQL and NoSQL databases; experience working with relational and non-relational databases.
- Data Visualization Tools: proficiency in data visualization tools such as Tableau, Matplotlib, or Seaborn.
- Cloud Platforms: familiarity with cloud platforms (AWS, Google Cloud, Azure) for model deployment and scaling.
- Communication Skills: excellent communication skills with the ability to convey technical concepts to non-technical audiences.

Posted 1 month ago


5.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

Job Summary: Auriga IT is seeking a proactive, problem-solving Data Analyst with 3–5 years of experience owning end-to-end data pipelines. You'll partner with stakeholders across engineering, product, marketing, and finance to turn raw data into actionable insights that drive business decisions. You must be fluent in the core libraries, tools, and cloud services listed below.

Your Responsibilities:
- Pipeline Management: design, build, and maintain ETL/ELT workflows using orchestration frameworks (e.g., Airflow, dbt).
- Exploratory Data Analysis & Visualization: perform EDA and statistical analysis using Python or R; prototype and deliver interactive charts and dashboards.
- BI & Reporting: develop dashboards and scheduled reports to surface KPIs and trends; configure real-time alerts for data anomalies or thresholds.
- Insights Delivery & Storytelling: translate complex analyses (A/B tests, forecasting, cohort analysis) into clear recommendations; present findings to both technical and non-technical audiences.
- Collaboration & Governance: work cross-functionally to define data requirements, ensure quality, and maintain governance; mentor junior team members on best practices in code, version control, and documentation.

Key Skills (you must know at least one technology from each category below):
- Data Manipulation & Analysis: Python (pandas, NumPy) or R (tidyverse: dplyr, tidyr)
- Visualization & Dashboarding: Python (matplotlib, seaborn, Plotly) or R (ggplot2, Shiny)
- BI Platforms: commercial or open source (e.g., Tableau, Power BI, Apache Superset, Grafana)
- ETL/ELT Orchestration: Apache Airflow, dbt, or equivalent
- Cloud Data Services: AWS (Redshift, Athena, QuickSight), GCP (BigQuery, Data Studio), or Azure (Synapse, Data Explorer)
- Databases & Querying: strong SQL skills with an RDBMS (PostgreSQL, MySQL, Snowflake); decent knowledge of NoSQL databases

Additionally:
- Bachelor's or Master's in a quantitative field (Statistics, CS, Economics, etc.).
- 3–5 years in a data analyst (or similar) role with end-to-end pipeline ownership.
- Strong problem-solving mindset and excellent communication skills.
- Power BI or Tableau certification is a plus.

Desired Skills & Attributes:
- Familiarity with version control (Git) and CI/CD for analytics code.
- Exposure to basic machine-learning workflows (scikit-learn, caret).
- Comfortable working in Agile/Scrum environments and collaborating across domains.

About Company: Hi there! We are Auriga IT. We power businesses across the globe through digital experiences, data, and insights. From the apps we design to the platforms we engineer, we're driven by an ambition to create world-class digital solutions and make an impact. Our team has been part of building solutions for the likes of Zomato, Yes Bank, Tata Motors, Amazon, Snapdeal, Ola, Practo, Vodafone, Meesho, Volkswagen, Droom, and many more. We are a group of people who just could not leave our college life behind, and the inception of Auriga was based solely on a desire to keep working together with friends and enjoy an extended college life. Who hasn't dreamt of working with friends for a lifetime? Come join in!

Our Website: https://aurigait.com/

Posted 1 month ago


1.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Software Developer Intern - Fixed-Term
Chennai, India

Who We Are: INVIDI Technologies Corporation is the world's leading developer of software transforming television all over the world. Our two-time Emmy® Award-winning technology is widely deployed by cable, satellite, and telco operators. We provide a device-agnostic solution delivering ads to the right household no matter what program or network you're watching, how you're watching, or whether you're in front of your TV, laptop, cell phone, or any other device. INVIDI created the multi-billion-dollar addressable television business that today is growing rapidly worldwide. INVIDI is right at the heart of the very exciting and fast-paced world of commercial television; companies benefiting from our software include DirecTV, Dish Network, and Verizon; networks such as CBS/Viacom and A&E; advertising agencies such as Ogilvy and Publicis; and advertisers such as Chevrolet and Allstate. INVIDI's world-class technology solutions are known for their flexibility and adaptability. These traits allow INVIDI partners to transform their video content delivery network, revamping legacy systems without significant capital or hardware investments. Our clients count on us to provide superior capabilities, excellent service, and ease of use. The goal of developing a unified video ad tech platform is a big one, and the right Software Developer Intern (like you) will flourish in INVIDI's creative, inspiring, and supportive culture. It is a demanding, high-energy, and fast-paced environment.

About the Role: As a Software Developer Intern, you have a grounding in data analysis using tools in the Python ecosystem and AWS. You also have a business sense for asking and answering fundamental questions to help shape key strategic decisions. This role involves thinking critically and strategically about video ad delivery as a technology, a business, and an operation, helping broadcasters, distributors, and media companies transform and evolve their advertising practices through the use of data. Using proven design patterns, you will help identify opportunities for INVIDI and our clients to operate more efficiently and produce innovative, actionable quantitative models and analyses to address the challenges of marketing effectiveness and measurement. As a Software Developer Intern, you will do more than just crunch the numbers. You will work with engineers, product managers, sales associates, and marketing teams to adjust business practices according to your findings. Identifying the problem is only half the job; you also need to figure out the solution. You must be versatile, display leadership qualities, and be enthusiastic about taking on new problems as we continue to push technology forward. The position reports directly to the Technical Manager of Software Development and is based in our Chennai, India office.

Key Responsibilities:
- Build business intelligence dashboards using AWS QuickSight, Tableau, Excel, and Power BI.
- Design, develop, and deploy machine learning models using linear regression, classification, neural networks, and time-series forecasting.
- Engage broadly within the organization to identify, prioritize, frame, and structure reporting problems.
- Help define the analytical direction and influence the direction of data engineering and infrastructure work.
- Conduct end-to-end analyses, including data gathering, requirements specification, processing, analysis, ongoing deliverables, and presentations.
- Translate analysis results into business recommendations.
- Develop a comprehensive understanding of video content inventory, scheduling, customer segmentation, video distribution, and viewership data structures and metrics.
- Recommend and implement strategies to solve business problems when the availability of data is limited.
- Work with very large data sets to glean insights that are valuable to the business.

You Must Have:
- Bachelor's degree in computer science, information management systems, or a data science discipline, or equivalent practical experience.
- Experience in statistical data analysis, linear models, multivariate analysis, stochastic models, and sampling methods.
- Experience in data modelling in SQL databases; an understanding of RESTful APIs.
- Experience with data visualization tools like AWS QuickSight, Tableau, or Power BI.
- Experience with EDA in notebook-based environments like Jupyter Notebook using Python, Pandas, NumPy, and Matplotlib.

It would be very good if you have:
- Experience building marketing analytics dashboards.
- Experience working with Scrum teams in an Agile way.
- Experience in the following domains: video content delivery, viewership measurement, advertising technology, digital advertising.

Physical Requirements: INVIDI is a conscious, clean, well-organized, and supportive office environment. Prolonged periods of sitting at a desk and working on a computer are normal.

Note: This is an intern position for a period of 1 year. Final candidates must successfully pass INVIDI's background screening requirements and must be legally authorized to work in India. INVIDI has reopened its offices on a flexible hybrid model.

Ready to join our team? Apply today!

Posted 1 month ago


2.0 - 5.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Role: Data Science, AI & ML Trainer
Company: QBrainX (https://qbrainx.com/)
Program: Data Science and AI/ML Trainer for the Kodo IT Program by QBrainX (https://xnovaq.com/kodo-program)
Work Arrangement: Work from Office
Office Location: Tidel Park, Coimbatore

We are seeking a passionate and skilled Data Science, AI & ML Trainer with 2-5 years of experience to join our team. The ideal candidate will deliver engaging, hands-on training sessions, simplifying complex concepts and guiding learners through practical, real-world projects. This role is perfect for an individual enthusiastic about teaching and staying at the forefront of AI/ML advancements.

Key Responsibilities:
- Design and deliver high-quality training sessions on Python, data analysis, machine learning, and AI fundamentals.
- Collaborate with the curriculum development team to create relevant course content, hands-on exercises, and mini-projects.
- Mentor and support learners, addressing technical queries and fostering a collaborative learning environment.
- Stay updated with the latest tools, frameworks, and trends in data science, AI, and ML to ensure training content remains current.
- Evaluate learner progress and provide constructive feedback to enhance skill development.
- Contribute to the creation of training materials, including presentations, tutorials, and case studies.

Requirements:
- 2-5 years of professional experience in data science, AI/ML roles, or training.
- Strong proficiency in Python and key libraries such as Pandas, NumPy, and Scikit-learn.
- Excellent communication and presentation skills, with the ability to explain complex concepts clearly.
- Passion for teaching, mentoring, and empowering learners.
- Strong organizational skills and the ability to manage multiple training sessions effectively.
- Proactive and self-motivated with a commitment to continuous learning.

Good to Have:
- Experience with deep learning frameworks such as TensorFlow or PyTorch.
- Familiarity with Natural Language Processing (NLP) or cloud platforms (e.g., AWS, Azure, GCP).
- Prior experience conducting workshops, bootcamps, or corporate training sessions.
- Knowledge of data visualization tools like Matplotlib, Seaborn, or Tableau.

Benefits: Join a dynamic team dedicated to shaping the next generation of Data Science and AI professionals. This role offers the opportunity to make a meaningful impact through teaching, while staying connected to cutting-edge developments in AI and ML. If you are excited about empowering learners and have the skills to excel as a Data Science, AI & ML Trainer, we encourage you to apply!

Posted 1 month ago

Apply

2.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Role: Data Science, AI & ML Trainer (for the Kodo IT Program by QBrainX, https://xnovaq.com/kodo-program)
Salary: 4 LPA + Variable Pay
Company: QBrainX (https://qbrainx.com/)
Work Arrangement: Work from Office
Office Location: Tidel Park, Coimbatore

We are seeking a passionate and skilled Data Science, AI & ML Trainer with 2-5 years of experience to join our team. The ideal candidate will deliver engaging, hands-on training sessions, simplifying complex concepts and guiding learners through practical, real-world projects. This role is perfect for an individual enthusiastic about teaching and staying at the forefront of AI/ML advancements.

Key Responsibilities
Design and deliver high-quality training sessions on Python, Data Analysis, Machine Learning, and AI fundamentals.
Collaborate with the curriculum development team to create relevant course content, hands-on exercises, and mini-projects.
Mentor and support learners, addressing technical queries and fostering a collaborative learning environment.
Stay updated with the latest tools, frameworks, and trends in Data Science, AI, and ML to ensure training content remains current.
Evaluate learner progress and provide constructive feedback to enhance skill development.
Contribute to the creation of training materials, including presentations, tutorials, and case studies.

Requirements
2-5 years of professional experience in Data Science, AI/ML roles, or training.
Strong proficiency in Python and key libraries such as Pandas, NumPy, and Scikit-learn.
Excellent communication and presentation skills, with the ability to explain complex concepts clearly.
Passion for teaching, mentoring, and empowering learners.
Strong organizational skills and the ability to manage multiple training sessions effectively.
Proactive and self-motivated with a commitment to continuous learning.

Good to Have
Experience with Deep Learning frameworks such as TensorFlow or PyTorch.
Familiarity with Natural Language Processing (NLP) or cloud platforms (e.g., AWS, Azure, GCP).
Prior experience conducting workshops, bootcamps, or corporate training sessions.
Knowledge of data visualization tools like Matplotlib, Seaborn, or Tableau.

Benefits
Join a dynamic team dedicated to shaping the next generation of Data Science and AI professionals. This role offers the opportunity to make a meaningful impact through teaching while staying connected to cutting-edge developments in AI and ML. If you are excited about empowering learners and have the skills to excel as a Data Science, AI & ML Trainer, we encourage you to apply!

Posted 1 month ago

Apply

15.0 - 20.0 years

9 - 14 Lacs

Hyderabad

Work from Office

Project Role: AI/ML Engineer
Project Role Description: Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Must be able to apply GenAI models as part of the solution; the work could also include, but is not limited to, deep learning, neural networks, chatbots, and image processing.
Must-have skills: Machine Learning
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: These roles have many overlapping skills with GenAI Engineers and architects. The description may scale up or down based on expected seniority.

Roles & Responsibilities:
Implement generative AI models and identify insights that can be used to drive business decisions. Work closely with multi-functional teams to understand business problems, develop hypotheses, and test those hypotheses with data, collaborating with cross-functional teams to define AI project requirements and objectives and ensure alignment with overall business goals.
Conduct research to stay up to date with the latest advancements in generative AI, machine learning, and deep learning techniques, and identify opportunities to integrate them into our products and services.
Optimize existing generative AI models for improved performance, scalability, and efficiency.
Ensure data quality and accuracy.
Lead the design and development of prompt engineering strategies and techniques to optimize the performance and output of our GenAI models.
Implement cutting-edge NLP techniques and prompt engineering methodologies to enhance the capabilities and efficiency of our GenAI models.
Determine the most effective prompt generation processes and approaches to drive innovation and excellence in the field of AI technology, collaborating with AI researchers and developers.
Experience working with cloud-based platforms (for example, AWS, Azure, or related).
Strong problem-solving and analytical skills.
Proficiency in handling various data formats and sources through omnichannel for speech and voice applications, as part of conversational AI.
Prior statistical modelling experience.
Demonstrable experience with deep learning algorithms and neural networks.
Develop clear and concise documentation, including technical specifications, user guides, and presentations, to communicate complex AI concepts to both technical and non-technical stakeholders.
Contribute to the establishment of best practices and standards for generative AI development within the organization.

Professional & Technical Skills:
Must have solid experience developing and implementing generative AI models, with a strong understanding of deep learning techniques such as GPT, VAE, and GANs.
Must be proficient in Python and have experience with machine learning libraries and frameworks such as TensorFlow, PyTorch, or Keras.
Must have strong knowledge of data structures, algorithms, and software engineering principles.
Must be familiar with cloud-based platforms and services, such as AWS, GCP, or Azure.
Need to have experience with natural language processing (NLP) techniques and tools, such as SpaCy, NLTK, or Hugging Face.
Must be familiar with data visualization tools and libraries, such as Matplotlib, Seaborn, or Plotly.
Need to have knowledge of software development methodologies, such as Agile or Scrum.
Possess excellent problem-solving skills, with the ability to think critically and creatively to develop innovative AI solutions.

Additional Information:
Must have a degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field. A Ph.D. is highly desirable.
Strong communication skills, with the ability to effectively convey complex technical concepts to a diverse audience.
A proactive mindset, with the ability to work independently and collaboratively in a fast-paced, dynamic environment.

Qualification: 15 years full time education

Posted 1 month ago

Apply

15.0 - 20.0 years

9 - 14 Lacs

Hyderabad

Work from Office

Project Role: AI/ML Engineer
Project Role Description: Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Must be able to apply GenAI models as part of the solution; the work could also include, but is not limited to, deep learning, neural networks, chatbots, and image processing.
Must-have skills: Machine Learning
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: These roles have many overlapping skills with GenAI Engineers and architects. The description may scale up or down based on expected seniority.

Roles & Responsibilities:
Implement generative AI models and identify insights that can be used to drive business decisions. Work closely with multi-functional teams to understand business problems, develop hypotheses, and test those hypotheses with data, collaborating with cross-functional teams to define AI project requirements and objectives and ensure alignment with overall business goals.
Conduct research to stay up to date with the latest advancements in generative AI, machine learning, and deep learning techniques, and identify opportunities to integrate them into our products and services.
Optimize existing generative AI models for improved performance, scalability, and efficiency.
Ensure data quality and accuracy.
Lead the design and development of prompt engineering strategies and techniques to optimize the performance and output of our GenAI models.
Implement cutting-edge NLP techniques and prompt engineering methodologies to enhance the capabilities and efficiency of our GenAI models.
Determine the most effective prompt generation processes and approaches to drive innovation and excellence in the field of AI technology, collaborating with AI researchers and developers.
Experience working with cloud-based platforms (for example, AWS, Azure, or related).
Strong problem-solving and analytical skills.
Proficiency in handling various data formats and sources through omnichannel for speech and voice applications, as part of conversational AI.
Prior statistical modelling experience.
Demonstrable experience with deep learning algorithms and neural networks.
Develop clear and concise documentation, including technical specifications, user guides, and presentations, to communicate complex AI concepts to both technical and non-technical stakeholders.
Contribute to the establishment of best practices and standards for generative AI development within the organization.

Professional & Technical Skills:
Must have solid experience developing and implementing generative AI models, with a strong understanding of deep learning techniques such as GPT, VAE, and GANs.
Must be proficient in Python and have experience with machine learning libraries and frameworks such as TensorFlow, PyTorch, or Keras.
Must have strong knowledge of data structures, algorithms, and software engineering principles.
Must be familiar with cloud-based platforms and services, such as AWS, GCP, or Azure.
Need to have experience with natural language processing (NLP) techniques and tools, such as SpaCy, NLTK, or Hugging Face.
Must be familiar with data visualization tools and libraries, such as Matplotlib, Seaborn, or Plotly.
Need to have knowledge of software development methodologies, such as Agile or Scrum.
Possess excellent problem-solving skills, with the ability to think critically and creatively to develop innovative AI solutions.

Additional Information:
Must have a degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field. A Ph.D. is highly desirable.
Strong communication skills, with the ability to effectively convey complex technical concepts to a diverse audience.
A proactive mindset, with the ability to work independently and collaboratively in a fast-paced, dynamic environment.

Qualification: 15 years full time education

Posted 1 month ago

Apply

0 years

0 Lacs

Delhi, India

On-site

Internship Overview
As a Data Scientist Intern, you will work alongside our data team to analyze user behavior, improve recommendation systems, and uncover insights that shape product strategy. This is a hands-on role where you’ll apply statistical modeling, machine learning, and big data tools to real-world challenges in social media.

Key Responsibilities
Analyze large-scale user data to identify trends in engagement, retention, and content performance.
Develop and test machine learning models (e.g., for recommendations, sentiment analysis, or spam detection).
Collaborate with product and engineering teams to design A/B tests and measure impact.
Build dashboards and visualizations to communicate insights (using Python, SQL, Tableau, etc.).
Research cutting-edge techniques in NLP, computer vision, or graph analytics (as applicable).
Document findings and present recommendations to stakeholders.

Qualifications & Skills
Required:
Pursuing a degree in Data Science, Computer Science, Statistics, or a related field.
Proficiency in Python (Pandas, NumPy, Scikit-learn) and SQL.
Familiarity with machine learning concepts (supervised/unsupervised learning, neural networks).
Experience with data visualization (Matplotlib, Seaborn, Tableau/Power BI).
Strong analytical and problem-solving skills.
Curiosity about social media dynamics and user behavior.
Preferred (Bonus Skills):
Knowledge of big data tools (Spark, Hadoop) or cloud platforms (AWS, GCP).
Exposure to deep learning frameworks (TensorFlow, PyTorch) or NLP (BERT, GPT).
Previous internships or projects involving social media data analysis.
Active GitHub profile or portfolio showcasing data science work.

Why Join MOC Spark?
Real-world impact: Your work will directly influence millions of users.
Mentorship: Learn from senior data scientists and engineers.
Perks: Flexible hours and a stipend/compensation.
Path to full-time: Top performers may receive return offers.
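The intern posting above asks for designing A/B tests and measuring their impact. As a rough illustration of that kind of measurement, here is a two-proportion z-test written with only the Python standard library; the variant names and conversion counts are invented for the example:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates between two variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF, computed via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant B converts 150/2400 vs. A's 120/2400
z, p = two_proportion_ztest(conv_a=120, n_a=2400, conv_b=150, n_b=2400)
```

With these made-up numbers the lift is suggestive but not significant at the 5% level, which is exactly the kind of call an analyst on such a team would have to make.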

Posted 1 month ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Requisition Id: 1620337

As a global leader in assurance, tax, transaction and advisory services, we hire and develop the most passionate people in their field to help build a better working world. This starts with a culture that believes in giving you the training, opportunities and creative freedom. At EY, we don't just focus on who you are now, but who you can become. We believe that it’s your career and ‘It’s yours to build’, which means potential here is limitless, and we'll provide you with motivating and fulfilling experiences throughout your career to help you on the path to becoming your best professional self.

The opportunity: Consultant-National-Tax-TAX - Indirect Tax - Indirect Tax Global Trade - Gurgaon

TAX - Indirect Tax - Indirect Tax Global Trade: Our globally connected tax professionals offer associated services across all tax disciplines to help clients prosper in an era of rapid change. We combine our exceptional knowledge and experience with people and technology platforms to equip clients to make better business decisions by bringing insights to the forefront. We help companies recognize the tax policies and tax laws of governments around the world to plan and comply accordingly. Our teams leverage transformative technologies to deliver strategy and execution, from tax and regulatory obligations to operations and workforce management, to reduce risk and drive sustainable value. EY has competencies in Business Tax Services, Global Compliance and Reporting, Global Law, Indirect Tax, International Tax and Transaction Services.

Your key responsibilities
Technical Excellence:
Expert in Python, Machine Learning, Postgres, and Cloud.
Hands-on API development.
Strong problem-solving and troubleshooting skills.
Excellent team player with strong verbal and written communication skills.
Exposure to Docker, Kubernetes, Continuous Delivery practices, and Agile Scrum activities.
Exposure to libraries such as TensorFlow, Keras, NumPy, scikit-learn, pandas, imutils, imgaug, lxml, OpenCV, and Matplotlib is a plus.
Exposure to big data technologies like Spark, Hive, and Hadoop is desired.

Skills and attributes
To qualify for the role you must have:
Qualification: Graduate/Postgraduate in Engineering / Mathematics / Statistics
Experience: Development in technical stacks, full-stack development, and databases.

What we look for
People with the ability to work in a collaborative manner to provide services across multiple client departments while following the commercial and legal requirements. You will need a practical approach to solving issues and complex problems with the ability to deliver insightful and practical solutions. We look for people who are agile, curious, mindful and able to sustain positive energy, while being adaptable and creative in their approach.

What we offer
With more than 200,000 clients, 300,000 people globally and 33,000 people in India, EY has become the strongest brand and the most attractive employer in our field, with market-leading growth over competitors. Our people work side-by-side with market-leading entrepreneurs, game-changers, disruptors and visionaries. As an organisation, we are investing more time, technology and money than ever before in skills and learning for our people. At EY, you will have a personalized Career Journey and also the chance to tap into the resources of our career frameworks to better know about your roles, skills and opportunities. EY is equally committed to being an inclusive employer and we strive to achieve the right balance for our people, enabling us to deliver excellent client service whilst allowing our people to build their careers as well as focus on their wellbeing.

If you can confidently demonstrate that you meet the criteria above, please contact us as soon as possible. Join us in building a better working world. Apply now.

Posted 1 month ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Requisition Id: 1620303

As a global leader in assurance, tax, transaction and advisory services, we hire and develop the most passionate people in their field to help build a better working world. This starts with a culture that believes in giving you the training, opportunities and creative freedom. At EY, we don't just focus on who you are now, but who you can become. We believe that it’s your career and ‘It’s yours to build’, which means potential here is limitless, and we'll provide you with motivating and fulfilling experiences throughout your career to help you on the path to becoming your best professional self.

The opportunity: Consultant-National-Tax-TAX - Indirect Tax - Indirect Tax Global Trade - Gurgaon

TAX - Indirect Tax - Indirect Tax Global Trade: Our globally connected tax professionals offer associated services across all tax disciplines to help clients prosper in an era of rapid change. We combine our exceptional knowledge and experience with people and technology platforms to equip clients to make better business decisions by bringing insights to the forefront. We help companies recognize the tax policies and tax laws of governments around the world to plan and comply accordingly. Our teams leverage transformative technologies to deliver strategy and execution, from tax and regulatory obligations to operations and workforce management, to reduce risk and drive sustainable value. EY has competencies in Business Tax Services, Global Compliance and Reporting, Global Law, Indirect Tax, International Tax and Transaction Services.

Your key responsibilities
Technical Excellence:
Expert in Python, Machine Learning, Postgres, and Cloud.
Hands-on API development.
Strong problem-solving and troubleshooting skills.
Excellent team player with strong verbal and written communication skills.
Exposure to Docker, Kubernetes, Continuous Delivery practices, and Agile Scrum activities.
Exposure to libraries such as TensorFlow, Keras, NumPy, scikit-learn, pandas, imutils, imgaug, lxml, OpenCV, and Matplotlib is a plus.
Exposure to big data technologies like Spark, Hive, and Hadoop is desired.

Skills and attributes
To qualify for the role you must have:
Qualification: Graduate/Postgraduate in Engineering / Mathematics / Statistics
Experience: Development in technical stacks, full-stack development, and databases.

What we look for
People with the ability to work in a collaborative manner to provide services across multiple client departments while following the commercial and legal requirements. You will need a practical approach to solving issues and complex problems with the ability to deliver insightful and practical solutions. We look for people who are agile, curious, mindful and able to sustain positive energy, while being adaptable and creative in their approach.

What we offer
With more than 200,000 clients, 300,000 people globally and 33,000 people in India, EY has become the strongest brand and the most attractive employer in our field, with market-leading growth over competitors. Our people work side-by-side with market-leading entrepreneurs, game-changers, disruptors and visionaries. As an organisation, we are investing more time, technology and money than ever before in skills and learning for our people. At EY, you will have a personalized Career Journey and also the chance to tap into the resources of our career frameworks to better know about your roles, skills and opportunities. EY is equally committed to being an inclusive employer and we strive to achieve the right balance for our people, enabling us to deliver excellent client service whilst allowing our people to build their careers as well as focus on their wellbeing.

If you can confidently demonstrate that you meet the criteria above, please contact us as soon as possible. Join us in building a better working world. Apply now.

Posted 1 month ago

Apply

0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
As an Associate Data Scientist at IBM, you will work to solve business problems using leading-edge and open-source tools such as Python, R, and TensorFlow, combined with IBM tools and our AI application suites. You will prepare, analyze, and understand data to deliver insight, predict emerging trends, and provide recommendations to stakeholders.

In your role, you may be responsible for:
Implementing and validating predictive and prescriptive models, and creating and maintaining statistical models with a focus on big data, incorporating machine learning techniques in your projects.
Writing programs to cleanse and integrate data in an efficient and reusable manner.
Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors.
Communicating with internal and external clients to understand and define business needs and appropriate modelling techniques to provide analytical solutions.
Evaluating modelling results and communicating the results to technical and non-technical audiences.

Preferred Education: Master's Degree

Required Technical And Professional Expertise
Proof of Concept (POC) Development: Develop POCs to validate and showcase the feasibility and effectiveness of the proposed AI solutions. Collaborate with development teams to implement and iterate on POCs, ensuring alignment with customer requirements and expectations.
Help showcase the ability of a GenAI code assistant to refactor/rewrite and document code from one language to another, particularly COBOL to Java, through rapid prototypes/POCs.
Document solution architectures, design decisions, implementation details, and lessons learned. Create technical documentation, white papers, and best practice guides.

Preferred Technical And Professional Experience
Strong programming skills, with proficiency in Python and experience with AI frameworks such as TensorFlow, PyTorch, Keras or Hugging Face.
Understanding of the usage of libraries such as scikit-learn, pandas, Matplotlib, etc.
Familiarity with cloud platforms.
Experience and working knowledge in COBOL and Java would be preferred.

Posted 1 month ago

Apply

6.0 - 10.0 years

3 - 4 Lacs

Hyderābād

On-site

GROWING WHAT MATTERS STARTS WITH YOU

As the world’s only major agriscience company completely dedicated to agriculture, we’re building a culture that stays curious, thinks differently, acts boldly, and takes a stand on what’s right for our customers, our co-workers, our partners and our planet. We know we’ve got big challenges to solve - we hope you'll be part of the solution. Working at Corteva connects you with more than 20,000 colleagues united by a shared vision to grow what matters. We offer career opportunities across more than 140 world-class R&D facilities and in more than 130 countries.

We’re hiring for Reporting & Analytics to join our Finance team! Learn how you can be our voice in the conversation about the future of agriculture.

You will be part of a growing team: Finance is a global team tasked with supporting finance processing requests across various regions. Our team comprises members supporting the Finance Team and the business across various regions. The role will be performed within the frame of Corteva’s brand values.

Job responsibilities
The ideal candidate will combine deep knowledge of finance operations (specifically Payables and SAP FICO) with technical proficiency in Power Platform (Power BI, Power Apps, Power Automate), SQL, Azure, SharePoint, VBA macros, MS Access database management and Python. This role will be instrumental in driving automation, analytics, and insights to improve financial reporting, compliance, and operational efficiency.
Provide strategic, analytic and reporting support to Global Service Centers and Payables across regions.
MIS reporting for Accounts Payable processes, including vendor payments, ageing analysis, GR/IR and payment forecast reports, and compliance metrics.
Develop and deploy automated dashboards and reports using Power BI and SQL for internal stakeholders and auditors, bringing clarity to complex AP data.
Automate finance workflows using Power Automate and Excel VBA/macros: reconciliation, reminders, and reporting. Explore opportunities to automate manual processes.
Leverage SAP FICO for reporting, audit trails, and transaction analysis.
Identify, analyze, and interpret trends or patterns in complex data sets.
Transform data using Python and SQL for reporting.
Manage data pipelines through Azure Data Services, integrating inputs from SAP, Excel, and cloud databases.
Use Python for automation: bulk file processing, vendor statement reconciliation, and email/report workflow automation.
Competent in analysis and judgment, customer relationship management, BI tools and the Microsoft suite.
Should have sufficient Procure-to-Pay knowledge.
Partner with Procurement, Supply Chain, IT, and Treasury teams to ensure data consistency and reporting alignment.
Manage, coach and develop team members.
Explore and implement continuous improvement with an owner’s mindset.
Accountable for managing the supplier payments database for the entire organization and providing strategic, analytic and reporting support to Global Service Centers and P2P across regions.

Location: Corteva Global Service Center, Hyderabad, India

To Grow What Matters, You Will Need:
Bachelor’s or master’s degree in Finance, Accounting, or a related field.
6-10 years of relevant experience in Finance MIS or PTP analytic roles.
Strong working knowledge of SAP FICO, especially AP-related T-codes and tables.
Knowledge of ERP systems and statistics, and experience using statistical packages for analyzing large datasets (Excel, SPSS, SAS, etc.), is preferable.

Technical Skills:
Strong knowledge of reporting packages (Business Objects).
Advanced Excel with hands-on experience in VBA/macros.
Proficiency in Power BI, Power Automate, and Power Apps.
Strong SQL scripting and experience working with relational databases.
Exposure to Microsoft Azure (Data Factory, Synapse, or Logic Apps) is highly desirable.
Experience in data modeling, cleansing, and performance tuning for large datasets.
Python for data analysis and automation (e.g., pandas, matplotlib, openpyxl).

Soft Skills:
Strong analytical mindset and attention to detail.
Effective communication and ability to collaborate with cross-functional teams.
Proactive problem-solver with a process improvement orientation.
Ability to manage deadlines and prioritize in a fast-paced environment.

Preferred Skills (optional but a plus):
Microsoft Certified: Power Platform Fundamentals or Data Analyst Associate.
SAP FICO Certification.
Azure Data Fundamentals.

Who Are We Looking For?
Curious, bold thinkers who want to grow their careers and be part of a winning team.
Market-shaping individuals who want to transform the agriculture industry to meet the world’s growing need for food.
Collaborators who thrive in a diverse, inclusive work environment.
Innovators who bring initiative and fresh ideas that drive our business into the future and make us an industry leader.

GROWING WHAT MATTERS STARTS WITH YOU… WHAT CAN WE OFFER TO HELP YOU GROW?
Opportunity to be part of a global industry leader working to discover solutions to the most pressing agricultural challenges of our time.
Challenging work assignments that grow your skills, capabilities and experiences.
A diverse, inclusive work environment where employees bring their whole selves to work and feel heard, valued and empowered.
Dedicated and customized resources to help grow your professional skills, industry expertise and personal perspectives.
Opportunity to strengthen your professional network through valuable relationships.
Support for the health and well-being of every employee through world-class benefits, meaningful work and competitive salary.
Performance-driven culture with a strong focus on speed, accountability and agility.
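The Corteva posting above centers on Accounts Payable ageing analysis with pandas. A minimal sketch of an ageing-bucket report follows; the invoice data and column names are invented for illustration and are not Corteva's actual schema:

```python
import pandas as pd

# Illustrative open-invoice data (hypothetical vendors and amounts)
invoices = pd.DataFrame({
    "vendor":   ["Acme", "Acme", "Globex", "Initech"],
    "amount":   [1200.0, 450.0, 980.0, 310.0],
    "due_date": pd.to_datetime(["2024-01-10", "2024-02-20",
                                "2024-03-01", "2024-03-15"]),
})
as_of = pd.Timestamp("2024-03-31")  # reporting date

# Days past due, then standard ageing buckets via pd.cut
invoices["days_overdue"] = (as_of - invoices["due_date"]).dt.days
invoices["bucket"] = pd.cut(
    invoices["days_overdue"],
    bins=[-1, 30, 60, 90, float("inf")],
    labels=["0-30", "31-60", "61-90", "90+"],
)

# Vendor-by-bucket ageing report
ageing = invoices.pivot_table(index="vendor", columns="bucket",
                              values="amount", aggfunc="sum",
                              fill_value=0, observed=False)
```

The same pivot feeds naturally into a Power BI or matplotlib visual once the buckets are computed.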

Posted 1 month ago

Apply

0.0 - 5.0 years

5 - 20 Lacs

Gurgaon

On-site

Assistant Manager - EXL/AM/1349734
Services, Gurgaon
Posted On: 30 May 2025
End Date: 14 Jul 2025
Required Experience: 0 - 5 Years

Basic Section
Number Of Positions: 1
Band: B1
Band Name: Assistant Manager
Cost Code: D003152
Campus/Non Campus: NON CAMPUS
Employment Type: Permanent
Requisition Type: Backfill
Max CTC: 500000.0000 - 2000000.0000
Complexity Level: Not Applicable
Work Type: Hybrid (working partly from home and partly from office)
Organisational Group: Analytics
Sub Group: Banking & Financial Services
Organization: Services
LOB: Services
SBU: Analytics
Country: India
City: Gurgaon
Center: Gurgaon-SEZ BPO Solutions
Skills: PYTHON, SQL
Minimum Qualification: B.TECH/B.E
Certification: No data available

Job Description
We are seeking a skilled Data Engineer to join our dynamic team. The ideal candidate will have expertise in Python, SQL, Tableau, and PySpark, with additional exposure to SAS, banking domain knowledge, and version control tools like Git and Bitbucket. The candidate will be responsible for developing and optimizing data pipelines, ensuring efficient data processing, and supporting business intelligence initiatives.

Key Responsibilities:
Design, build, and maintain data pipelines using Python and PySpark.
Develop and optimize SQL queries for data extraction and transformation.
Create interactive dashboards and visualizations using Tableau.
Implement data models to support analytics and business needs.
Collaborate with cross-functional teams to understand data requirements.
Ensure data integrity, security, and governance across platforms.
Utilize version control tools like Git and Bitbucket for code management.
Leverage SAS and banking domain knowledge to improve data insights.

Required Skills:
Strong proficiency in Python and PySpark for data processing.
Advanced SQL skills for data manipulation and querying.
Experience with Tableau for data visualization and reporting.
Familiarity with database systems and data warehousing concepts.

Preferred Skills:
Knowledge of SAS and its applications in data analysis.
Experience working in the banking domain.
Understanding of version control systems, specifically Git and Bitbucket.
Knowledge of pandas, NumPy, statsmodels, scikit-learn, matplotlib, PySpark, and SASPy.

Qualifications:
Bachelor's/Master's degree in Computer Science, Data Science, or a related field.
Excellent problem-solving and analytical skills.
Ability to work collaboratively in a fast-paced environment.

Workflow Type: L&S-DA-Consulting
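The Data Engineer posting above describes building extract-transform-aggregate pipeline steps. That pattern can be sketched in miniature with pandas (the production version would typically use PySpark on larger data); the account data and column names here are invented:

```python
import pandas as pd

# "Extract": a raw feed where numeric fields often arrive as strings
raw = pd.DataFrame({
    "account": ["A1", "A1", "B2", "B2", "B2"],
    "txn_amt": ["100.50", "-20.00", "55.25", "10.00", "5.75"],
})

# "Transform": enforce types before any arithmetic
clean = raw.assign(txn_amt=pd.to_numeric(raw["txn_amt"]))

# "Aggregate": net amount per account, ready to load or visualize
summary = (clean.groupby("account", as_index=False)["txn_amt"]
                .sum()
                .rename(columns={"txn_amt": "net_amount"}))
```

The same three steps translate almost line-for-line to PySpark (`withColumn` with a cast, then `groupBy(...).sum()`), which is why small pandas prototypes are a common first cut.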

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Roles & Responsibilities:
• Be responsible for the development of the conceptual, logical, and physical data models
• Work with application/solution teams to implement data strategies, build data flows and develop/execute logical and physical data models
• Implement and maintain data analysis scripts using SQL and Python
• Develop and support reports and dashboards using Google Plx/Data Studio/Looker
• Monitor performance and implement necessary infrastructure optimizations
• Demonstrate ability and willingness to learn quickly and complete large volumes of work with high quality
• Demonstrate excellent collaboration, interpersonal communication and written skills with the ability to work in a team environment

Minimum Qualifications
• Hands-on experience with design, development, and support of data pipelines
• Strong SQL programming skills (joins, subqueries, queries with analytical functions, stored procedures, functions, etc.)
• Hands-on experience using statistical methods for data analysis
• Experience with data platform and visualization technologies such as Google PLX dashboards, Data Studio, Tableau, Pandas, Qlik Sense, Splunk, Humio, Grafana
• Experience in web development (HTML, CSS, jQuery, Bootstrap)
• Experience with machine learning packages such as Scikit-Learn, NumPy, SciPy, Pandas, NLTK, BeautifulSoup, Matplotlib, Statsmodels
• Strong design and development skills with meticulous attention to detail
• Familiarity with Agile software development practices and working in an agile environment
• Strong analytical, troubleshooting and organizational skills
• Ability to analyse and troubleshoot complex issues, and proficiency in multitasking
• Ability to navigate ambiguity
• BS degree in Computer Science, Math, Statistics or equivalent academic credentials
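The qualifications above call out SQL with joins, subqueries, and analytical (window) functions. A self-contained sketch using Python's built-in sqlite3 module shows what "analytical functions" means in practice; the table and data are invented, and window functions require SQLite 3.25 or newer:

```python
import sqlite3

# In-memory demo table (hypothetical orders data)
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (id INTEGER, region TEXT, amount REAL);
INSERT INTO orders VALUES
  (1, 'north', 100.0), (2, 'north', 250.0),
  (3, 'south', 80.0),  (4, 'south', 120.0), (5, 'south', 300.0);
""")

# Window functions: per-region totals and ranks without collapsing rows,
# which a plain GROUP BY would do
rows = con.execute("""
SELECT region,
       amount,
       SUM(amount)  OVER (PARTITION BY region)                  AS region_total,
       RANK()       OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
FROM orders
ORDER BY region, rnk
""").fetchall()
```

Each row keeps its own amount while also carrying the region total, which is the property that makes window functions useful for the dashboard-style reporting the posting describes.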

Posted 1 month ago

Apply

3.0 years

1 - 6 Lacs

India

On-site

About Us: Analytics Circle is a leading institute dedicated to empowering individuals with in-demand data analytics skills. We are passionate about bridging the industry-academia gap through practical, hands-on training in the most sought-after tools and technologies in data analytics.

Job Description: We are looking for a highly skilled and passionate Data Analyst Trainer to join our growing team. The ideal candidate should have real-world industry experience and a strong command of Advanced Excel, Power BI, Tableau, SQL, and Python. As a trainer, you will be responsible for delivering engaging and insightful sessions to our learners, preparing them for careers in data analytics.

Key Responsibilities:
• Deliver interactive and practical training sessions on Advanced Excel, Power BI, Tableau, SQL, and Python for data analysis
• Design and update course materials, case studies, and hands-on projects based on current industry trends
• Evaluate student progress through assignments, projects, and assessments
• Provide one-on-one mentorship and support to learners when needed
• Assist in curriculum development and continuous improvement of training content
• Stay updated with the latest developments in data analytics tools and technologies

Requirements:
• Bachelor's or Master's degree in Computer Science, Statistics, Data Science, or a related field
• At least 3 years of experience in the data analytics domain; proven training or teaching experience is preferred
• Proficiency in Excel (including pivot tables, lookups, macros, dashboards), Power BI (DAX, Power Query, data modeling), Tableau (data visualization, dashboard building), SQL (queries, joins, data manipulation), and Python (Pandas, NumPy, Matplotlib, data analysis workflows)
• Strong communication and presentation skills
• Passion for teaching and mentoring

Nice to Have:
• Industry certifications in relevant tools (e.g., Microsoft, Tableau, Python)
• Experience conducting online training/webinars
Job Types: Full-time, Part-time, Permanent
Pay: ₹10,000.00 - ₹50,000.00 per month
Schedule: Day shift
Supplemental Pay: Commission pay, Performance bonus
Application Question(s): Weekdays availability, Monday to Friday
Education: Bachelor's (Preferred)
Experience: Teaching: 3 years (Preferred)
Location: Laxmi Nagar, Delhi, Delhi (Required)
Shift availability: Day Shift (Required)
Work Location: In person

Posted 1 month ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Growing What Matters Starts With You

As the world’s only major agriscience company completely dedicated to agriculture, we’re building a culture that stays curious, thinks differently, acts boldly, and takes a stand on what’s right for our customers, our co-workers, our partners and our planet. We know we’ve got big challenges to solve - we hope you'll be part of the solution. Working at Corteva connects you with more than 20,000 colleagues united by a shared vision to grow what matters. We offer career opportunities across more than 140 world-class R&D facilities and in more than 130 countries.

We’re hiring for Reporting & Analytics to join our Finance team! Learn how you can be our voice in the conversation about the future of agriculture.

You Will Be Part of a Growing Team: Finance is a global team tasked with supporting finance processing requests across various regions. Our team comprises members providing support to the Finance team and the business across various regions. The role will be performed within the frame of Corteva’s brand values.

Job Responsibilities:
The ideal candidate will combine deep knowledge of finance operations (specifically Payables and SAP FICO) with technical proficiency in Power Platform (Power BI, Power Apps, Power Automate), SQL, Azure, SharePoint, VBA macros, MS Access database management, and Python. This role will be instrumental in driving automation, analytics, and insights to improve financial reporting, compliance, and operational efficiency.
• Provide strategic, analytic, and reporting support to Global Service Centers and Payables across regions.
• MIS reporting for Accounts Payable processes, including vendor payments, ageing analysis, GR/IR and payment forecast reports, and compliance metrics.
• Develop and deploy automated dashboards and reports using Power BI and SQL for internal stakeholders and auditors, bringing clarity to complex AP data.
• Automate finance workflows using Power Automate and Excel VBA/macros: reconciliation, reminders, and reporting. Explore opportunities to automate manual processes.
• Leverage SAP FICO for reporting, audit trails, and transaction analysis.
• Identify, analyze, and interpret trends or patterns in complex data sets. Transform data using Python and SQL for reporting.
• Manage data pipelines through Azure Data Services, integrating inputs from SAP, Excel, and cloud databases.
• Use Python for automation: bulk file processing, vendor statement reconciliation, and email/report workflow automation.
• Competent in analysis and judgment, customer relationship management, BI tools, and the Microsoft suite; should have sufficient Procure-to-Pay knowledge.
• Partner with Procurement, Supply Chain, IT, and Treasury teams to ensure data consistency and reporting alignment.
• Manage, coach, and develop team members.
• Explore and implement continuous improvement with an owner’s mindset.
• Accountable for managing the supplier payments database for the entire organization and providing strategic, analytic, and reporting support to Global Service Centers and P2P across regions.

Location: Corteva Global Service Center, Hyderabad, India

To Grow What Matters, You Will Need:
• Bachelor’s or master’s degree in Finance, Accounting, or a related field.
• 6–10 years of relevant experience in Finance MIS or PTP analytic roles.
• Strong working knowledge of SAP FICO, especially AP-related T-codes and tables.
• Knowledge of ERP systems and statistics, and experience using statistical packages for analyzing large datasets (Excel, SPSS, SAS, etc.) is preferable.

Technical Skills:
• Strong knowledge of reporting packages (Business Objects).
• Advanced Excel with hands-on experience in VBA/macros.
• Proficiency in Power BI, Power Automate, and Power Apps.
• Strong SQL scripting and experience working with relational databases.
• Exposure to Microsoft Azure (Data Factory, Synapse, or Logic Apps) is highly desirable.
• Experience in data modeling, cleansing, and performance tuning for large datasets.
• Python for data analysis and automation (e.g., pandas, matplotlib, openpyxl).

Soft Skills:
• Strong analytical mindset and attention to detail.
• Effective communication and ability to collaborate with cross-functional teams.
• Proactive problem-solver with a process improvement orientation.
• Ability to manage deadlines and prioritize in a fast-paced environment.

Preferred Skills (Optional But a Plus):
• Microsoft Certified: Power Platform Fundamentals or Data Analyst Associate
• SAP FICO Certification
• Azure Data Fundamentals

Who Are We Looking For?
• Curious, bold thinkers who want to grow their careers and be part of a winning team.
• Market-shaping individuals who want to transform the agriculture industry to meet the world’s growing need for food.
• Collaborators who thrive in a diverse, inclusive work environment.
• Innovators who bring initiative and fresh ideas that drive our business into the future and make us an industry leader.

GROWING WHAT MATTERS STARTS WITH YOU… WHAT CAN WE OFFER TO HELP YOU GROW?
• Opportunity to be part of a global industry leader working to discover solutions to the most pressing agricultural challenges of our time.
• Challenging work assignments that grow your skills, capabilities and experiences.
• Diverse, inclusive work environment where employees bring their whole selves to work and feel heard, valued and empowered.
• Dedicated and customized resources to help grow your professional skills, industry expertise and personal perspectives.
• Opportunity to strengthen your professional network through valuable relationships.
• Support for the health and well-being of every employee through world-class benefits, meaningful work and competitive salary.
• Performance-driven culture with a strong focus on speed, accountability and agility.
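A minimal sketch of the kind of pandas-based Accounts Payable ageing analysis this posting describes. The vendor names, amounts, and ageing buckets are hypothetical stand-ins for a real SAP AP extract:

```python
import pandas as pd

# Hypothetical open-invoice data standing in for an SAP AP extract
invoices = pd.DataFrame({
    "vendor": ["Acme", "Globex", "Acme", "Initech"],
    "amount": [1200.0, 800.0, 450.0, 2000.0],
    "due_date": pd.to_datetime(["2024-05-01", "2024-06-15",
                                "2024-06-28", "2024-07-10"]),
})
as_of = pd.Timestamp("2024-07-01")  # reporting cut-off date

# Days past due, then bucket into standard ageing bands
invoices["days_overdue"] = (as_of - invoices["due_date"]).dt.days
invoices["bucket"] = pd.cut(
    invoices["days_overdue"],
    bins=[-float("inf"), 0, 30, 60, float("inf")],
    labels=["Not due", "1-30", "31-60", "60+"],
)

# Ageing report: total exposure per bucket
ageing = invoices.groupby("bucket", observed=True)["amount"].sum()
```

A report like `ageing` would typically feed a Power BI dashboard or a scheduled Excel export rather than stay in a script.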

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Jaipur

On-site

Job Summary:
Auriga IT is seeking a proactive, problem-solving Data Analyst with 3–5 years of experience owning end-to-end data pipelines. You’ll partner with stakeholders across engineering, product, marketing, and finance to turn raw data into actionable insights that drive business decisions. You must be fluent in the core libraries, tools, and cloud services listed below.

Your Responsibilities:
• Pipeline Management: Design, build, and maintain ETL/ELT workflows using orchestration frameworks (e.g., Airflow, dbt).
• Exploratory Data Analysis & Visualization: Perform EDA and statistical analysis using Python or R. Prototype and deliver interactive charts and dashboards.
• BI & Reporting: Develop dashboards and scheduled reports to surface KPIs and trends. Configure real-time alerts for data anomalies or thresholds.
• Insights Delivery & Storytelling: Translate complex analyses (A/B tests, forecasting, cohort analysis) into clear recommendations. Present findings to both technical and non-technical audiences.
• Collaboration & Governance: Work cross-functionally to define data requirements, ensure quality, and maintain governance. Mentor junior team members on best practices in code, version control, and documentation.

Key Skills (you must know at least one technology from each category below):
• Data Manipulation & Analysis: Python (pandas, NumPy) or R (tidyverse: dplyr, tidyr)
• Visualization & Dashboarding: Python (matplotlib, seaborn, Plotly) or R (ggplot2, Shiny)
• BI Platforms: commercial or open-source (e.g., Tableau, Power BI, Apache Superset, Grafana)
• ETL/ELT Orchestration: Apache Airflow, dbt, or equivalent
• Cloud Data Services: AWS (Redshift, Athena, QuickSight), GCP (BigQuery, Data Studio), or Azure (Synapse, Data Explorer)
• Databases & Querying: strong SQL skills with an RDBMS (PostgreSQL, MySQL, Snowflake); decent knowledge of NoSQL databases

Additionally:
• Bachelor’s or Master’s in a quantitative field (Statistics, CS, Economics, etc.).
• 3–5 years in a data analyst (or similar) role with end-to-end pipeline ownership.
• Strong problem-solving mindset and excellent communication skills.
• Power BI or Tableau certification is a plus.

Desired Skills & Attributes:
• Familiarity with version control (Git) and CI/CD for analytics code.
• Exposure to basic machine-learning workflows (scikit-learn, caret).
• Comfortable working in Agile/Scrum environments and collaborating across domains.

About Company:
Hi there! We are Auriga IT. We power businesses across the globe through digital experiences, data and insights. From the apps we design to the platforms we engineer, we're driven by an ambition to create world-class digital solutions and make an impact. Our team has been part of building solutions for the likes of Zomato, Yes Bank, Tata Motors, Amazon, Snapdeal, Ola, Practo, Vodafone, Meesho, Volkswagen, Droom and many more. We are a group of people who just could not leave our college life behind, and the inception of Auriga was based on a desire to keep working together with friends and enjoy an extended college life. Who hasn't dreamt of working with friends for a lifetime? Come join in: https://www.aurigait.com/
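A minimal sketch of the cohort analysis this posting lists under "Insights Delivery & Storytelling", done with pandas. The user IDs and activity months are hypothetical sample data:

```python
import pandas as pd

# Hypothetical user-activity data: one row per user per active month
events = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 2, 3],
    "month": pd.to_datetime(["2024-01-01", "2024-02-01",
                             "2024-01-01", "2024-02-01", "2024-03-01",
                             "2024-02-01"]),
})

# A user's cohort is the month of their first activity
first = events.groupby("user_id")["month"].min().rename("cohort")
events = events.merge(first, on="user_id")

# Age of each event relative to its cohort, in whole months
events["age"] = ((events["month"].dt.year - events["cohort"].dt.year) * 12
                 + (events["month"].dt.month - events["cohort"].dt.month))

# Retention matrix: distinct active users per cohort and age
retention = events.pivot_table(index="cohort", columns="age",
                               values="user_id", aggfunc="nunique")
```

Each row of `retention` is a cohort; reading across a row shows how many of its users were still active 0, 1, 2, … months later.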

Posted 1 month ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description

Primary Duties & Responsibilities:
• Requirement gathering from the business users
• Analyzing the business requirements and converting them into a machine learning / generative AI problem
• Identifying the needed data features and labels
• Data ingestion from various source databases into a big data environment
• Building related models using machine learning and generative AI techniques
• Recommending the best-suited solution for the given business requirement
• Presenting the solution to business users and stakeholders with strong storytelling approaches

Education & Experience:
• Bachelor’s/Master’s degree in Computer Science Engineering, or Master’s degree in Statistics/Mathematics
• Minimum 3 years of experience building various machine learning and deep learning regression and classification models
• Minimum 1 year of experience with the generative AI tech stack (prompt engineering, ChatGPT, Ollama, Gemini, RAG, etc.)
• Extensive knowledge of and skills in frameworks (TensorFlow, PyTorch, Hugging Face) and libraries (LlamaIndex, LangChain)
• Expertise in Python and visualization modules such as matplotlib and seaborn
• Knowledge of big data systems and data ingestion tools/processes/techniques such as StreamSets and Spark

Desirable Skills:
• REST API development using Flask or FastAPI

Skills:
• Strong interpersonal and problem-solving skills
• Strong stakeholder management
• Work effectively with other members of Coherent Corp. across the globe

Working Conditions: Hybrid work structure, i.e. 3 days in office.

Culture Commitment: Ensure adherence to the company’s values (ICARE) in all aspects of your position at Coherent Corp.:
Integrity – Create an Environment of Trust
Collaboration – Innovate Through the Sharing of Ideas
Accountability – Own the Process and the Outcome
Respect – Recognize the Value in Everyone
Enthusiasm – Find a Sense of Purpose in Work

Coherent Corp. is an equal opportunity/affirmative action employer.
All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law. Finisar India (a subsidiary of Coherent Corp.) is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to gender identity, sexual orientation, race, color, religion, national origin, disability, or any other characteristic protected by law.

About Us:
Coherent is a global leader in lasers, engineered materials and networking components. We are a vertically integrated manufacturing company that develops innovative products for diversified applications in the industrial, optical communications, military, life sciences, semiconductor equipment, and consumer markets. Coherent provides a comprehensive career development platform within an environment that challenges employees to perform at their best, while rewarding excellence and hard work through a competitive compensation program. It's an exciting opportunity to work for a company that offers stability, longevity and growth. Come join us!

Note to recruiters and employment agencies: We will not pay for unsolicited resumes from recruiters and employment agencies unless we have a signed agreement and have requested assistance, in writing, for a specific opening.

Posted 1 month ago

Apply

0 years

0 Lacs

Kochi, Kerala, India

On-site

About the Company:
Turbolab Technologies is a revenue-positive, bootstrapped startup, and home to a wide range of products and services related to data analysis and web crawling. Over the past few years, we have expanded into areas such as machine learning, image processing, video analysis, and more. Most of our products cater to enterprise clients, empowering them to harness the power of data to grow their businesses.

Job Description:
We are looking for highly motivated and uniquely talented Quality Analysts to join our Data Team: someone with the creativity, technical skills, attention to detail, and enthusiasm to help ensure data quality so that enterprises can grow their business. Go ahead and apply if you are excited to work with us at our Kochi office.

Key Responsibilities:
• Regularly audit datasets to find issues and write reports
• Coordinate with developers to maintain data quality standards
• Perform routine inspections of automated QA processes and the resulting data
• Actively participate in improving the data quality workflow and be innovative

Required Skills & Experience:
• Proficiency in Python
• Familiarity with packages such as NumPy, Pandas, Matplotlib and/or Seaborn
• Good understanding of data file formats (CSV, XML, JSON, etc.) and data processing tools (JupyterHub and OpenRefine)
• Ability to use basic querying methods (regex, SQL, and XPath or XQuery)
• Basic understanding of web technologies (HTML, JavaScript, CSS, etc.)
• Experience in extracting, cleaning, and structuring data from unstructured or semi-structured sources
• Good knowledge of databases and ORM tools
• Ability to work independently, prioritize tasks, and take initiative
• Ability to document requirements and specifications
• Excellent time-management, multitasking, communication, and interpersonal skills
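A minimal sketch of the kind of dataset audit this QA role describes, using pandas and regex checks. The columns, sample records, and URL rule are hypothetical stand-ins for real crawled data:

```python
import pandas as pd

# Hypothetical crawled records standing in for a dataset under audit
df = pd.DataFrame({
    "url": ["https://a.com", "not-a-url", "https://b.com", "https://b.com"],
    "price": [10.5, None, 7.0, 7.0],
})

# Typical quality checks: missing values, duplicate rows, malformed fields
report = {
    "rows": len(df),
    "missing_price": int(df["price"].isna().sum()),
    "duplicate_rows": int(df.duplicated().sum()),
    "bad_urls": int((~df["url"].str.match(r"^https?://")).sum()),
}
```

In a real audit, `report` would be logged per dataset and compared against agreed quality thresholds before flagging issues to the developers.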

Posted 1 month ago

Apply