
1441 Matplotlib Jobs - Page 24

JobPe aggregates listings for easy access, but you apply directly on each job portal.

0 years

12 - 24 Lacs

Bengaluru, Karnataka, India

On-site

About the Company (Industry & Sector)
An advanced-technology scale-up at the crossroads of Quantum Computing, Artificial Intelligence and Semiconductor Engineering. The hardware division designs full-stack enterprise quantum computers—spanning superconducting processors, cryogenic control electronics and RF instrumentation—to unlock breakthroughs across life sciences, finance, transportation and space.

Role & Responsibilities
- Design and execute quantum-device experiments—from cryogenic fixture design to automated data acquisition—for superconducting-qubit characterisation.
- Develop and refine protocols to measure coherence times and gate fidelities, and perform quantum-state/process tomography, feeding results back into device design.
- Maintain, troubleshoot and optimise cryogenic measurement stacks and microwave/RF chains to guarantee low-noise, high-throughput data collection.
- Implement data pipelines in Python/MATLAB that process raw traces into actionable metrics and dashboards for cross-functional teams.
- Collaborate with quantum-processor, control-electronics and theory groups to correlate empirical results with simulations and accelerate design-of-experiments cycles.
- Document methodologies, publish findings and help shape the roadmap for next-generation, fault-tolerant quantum processors.

Skills & Qualifications (Must-Have)
- MSc/MTech/PhD in Physics, Electrical Engineering, Materials Science or a related field with a quantum focus.
- Hands-on experience designing cryogenic or microwave testbeds and performing quantum measurements on superconducting qubits or similar platforms.
- Proven ability to measure and analyse device parameters (T₁/T₂, gate fidelity, tomography).
- Solid understanding of circuit QED and error-correction concepts relevant to superconducting hardware.
- Proficiency in Python (NumPy/Pandas/Matplotlib) or MATLAB for data analysis and instrument control.
- Strong problem-solving, communication and teamwork skills; comfortable in fast-paced R&D settings.
Preferred
- Track record of peer-reviewed publications or conference presentations in quantum technology.
- Experience writing DoE-driven analysis reports that steer experimentation plans.
- Familiarity with cold-atom or spin-qubit platforms, autonomous calibration routines, or GPU-accelerated simulators.
- Knowledge of error-mitigation/bosonic-code techniques and their experimental implementation.
- Exposure to clean-room fabrication workflows and materials studies for superconducting devices.
- Contributions to open-source quantum-measurement tooling or instrument-control libraries.

Skills: Hamiltonian engineering, coherence-time measurement, quantum-state tomography, LDPC codes, surface codes, gate-fidelity measurement, Python-based quantum platforms, circuit QED, automated data acquisition, numerical tool-chains, fault-tolerant architectures, superconducting-qubit error-correction schemes, computational modelling of quantum circuits, data processing in MATLAB, quantum device characterization, problem-solving, experimental protocols, MATLAB, error-mitigation techniques, quantum-software stacks, cryogenic fixture design, collaboration, quantum computing, artificial intelligence, data processing in Python, quantum error-correction codes, quantum-process tomography, teamwork, Python, error-correction concepts, quantum-state & process tomography, communication, qubit-control schemes, semiconductors, peer-reviewed publications, dynamical decoupling, numerical methods
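On the analysis side, the T₁ (relaxation-time) measurement named in the must-have list reduces to fitting an exponential decay to a raw trace. A minimal sketch using the NumPy/SciPy stack the posting mentions; all numbers here are synthetic, not from any real device:

```python
import numpy as np
from scipy.optimize import curve_fit

def t1_decay(t, a, t1, c):
    """Relaxation model: P(t) = a * exp(-t / T1) + c."""
    return a * np.exp(-t / t1) + c

# Synthetic raw trace standing in for instrument data (true T1 = 85 µs)
rng = np.random.default_rng(0)
t = np.linspace(0, 400, 81)  # delay times in µs
signal = t1_decay(t, 0.95, 85.0, 0.02) + rng.normal(0, 0.01, t.size)

# Fit the model to extract T1 from the trace
popt, _ = curve_fit(t1_decay, t, signal, p0=(1.0, 50.0, 0.0))
a_fit, t1_fit, c_fit = popt
print(f"T1 ~ {t1_fit:.1f} µs")
```

The fitted curve and residuals would typically also be plotted with Matplotlib for the dashboards the role describes.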

Posted 1 month ago


0 years

0 Lacs

India

Remote

Data Science Internship – Evoastra Ventures Pvt. Ltd.
Location: Remote / Hybrid (Hyderabad HQ)
Duration: up to 6 months

About Evoastra Ventures
Evoastra is a next-generation research and analytics firm delivering high-impact insights across data science, market intelligence, and business strategy. We work with startups, enterprises, and academia to unlock value from data and empower the next generation of talent through real-time projects, mentorship, and innovation-driven learning.

Role: Data Scientist Intern
As a Data Science Intern at Evoastra, you'll work on real-world projects involving data cleaning, analysis, predictive modeling, and data-driven storytelling. You'll gain hands-on experience under expert mentorship and build a strong project portfolio that stands out.

Key Responsibilities
- Assist in data collection, cleaning, and preprocessing from multiple sources
- Perform exploratory data analysis (EDA) and visualize findings
- Work with statistical models and machine learning algorithms for predictive analytics
- Participate in live projects involving real datasets and business problems
- Collaborate with data scientists, analysts, and project leads on assigned tasks
- Present insights and outcomes through dashboards or reports
- Learn to deploy models using beginner-friendly tools (based on internship level)

What You Will Learn
- The end-to-end data science project lifecycle
- Hands-on work with tools such as Python, Pandas, NumPy, scikit-learn, Matplotlib, Seaborn, Power BI, and Excel
- Basics of ML algorithms such as linear regression, decision trees, and clustering
- How to work with real-time datasets
- Basics of model evaluation, feature selection, and deployment strategies
- Communicating data insights like a professional

Eligibility Criteria
- Open to students and recent graduates from any background (STEM preferred)
- Basic understanding of Python and statistics is a plus (not mandatory)
- Passion for data, analytics, and solving real-world problems
- Willingness to learn and complete project-based tasks on time

What You'll Get
✅ Certificate of Completion (recognized globally)
✅ Project Completion Letter with tools, techniques, and outcomes
✅ Letter of Recommendation based on performance
✅ 1-on-1 mentorship and support from our experts
✅ Access to an exclusive Discord community
✅ Stipend eligibility for long-term or top-performing interns
✅ Profile-building guidance (LinkedIn/resume reviews)

Important Note
This is a training + internship program. As an authorized provider, we offer mentorship, certifications, documentation, and live project hosting. Many of our partner colleges and industries also sponsor this program for their students.

How to Apply
Fill in the internship form: https://short.evoastra.com/US3AY
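The EDA step listed in the responsibilities above can be sketched in a few lines of Pandas. The mini-dataset here is made up purely for illustration; real internship projects would load data from files or APIs:

```python
import pandas as pd

# Hypothetical mini-dataset standing in for a real project file
df = pd.DataFrame({
    "region": ["North", "South", "North", "East", "South", "East"],
    "sales":  [120.0, 95.5, 130.2, 88.0, 101.3, 92.7],
})

# Basic EDA: summary statistics plus a group-wise view
summary = df["sales"].describe()
by_region = df.groupby("region")["sales"].mean()

print(round(summary["mean"], 2))   # overall mean sales
print(by_region["North"])          # average for one region
```

From here, `by_region.plot.bar()` would produce a quick Matplotlib chart of the same grouping.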

Posted 1 month ago


6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Us
Website: https://www.cognitioanalytics.com/
Cognitio Analytics, founded in 2013, aims to be the preferred provider of AI/ML-driven productivity solutions for large enterprises. The company has received awards for its Smart Operations and Total Rewards Analytics solutions and is dedicated to innovation, R&D, and creating sustained value for clients. Cognitio Analytics has been recognized as a "Great Place to Work" for its commitment to fostering an innovative work environment and employee satisfaction. Our solutions include Total Rewards Analytics, powered by Cognitio's Total Rewards Data Factory, which helps our clients achieve better outcomes and higher ROI on investments in all kinds of Total Rewards programs. Our smart operations solutions drive productivity in complex operations such as claims processing and commercial underwriting. These solutions, built on proprietary AI capabilities, advanced process and task mining, and a deep understanding of operations, drive effective digital transformation for our clients.

Ideal qualifications, skills and experience we are looking for:
- We are actively seeking a talented and results-driven Data Scientist to join our team and take on a leadership role in driving business outcomes through the power of data analytics and insights.
- Your contributions will be instrumental in making data-informed decisions, identifying growth opportunities, and propelling our organization to new levels of success.
- Doctorate/Master's/Bachelor's degree in Data Science, Statistics, Computer Science, Mathematics, Economics, Commerce or a related field.
- Minimum of 6 years of experience working as a Data Scientist or in a similar analytical role, with experience leading data science projects and teams.
- Experience in the healthcare domain with exposure to clinical operations, financial, risk rating, fraud, digital, sales and marketing, and wellness; e-commerce or ed-tech industry experience is a plus.
- Proven ability to lead and mentor a team of data scientists, fostering an innovative environment. Strong decision-making and problem-solving skills to guide strategic initiatives.
- Expertise in programming languages such as Python and R, and proficiency with data manipulation, analysis, and visualization libraries (e.g., Pandas, NumPy, Matplotlib, Seaborn). Very strong Python skills: exceptional with Pandas and NumPy, and with advanced Python (pytest, classes, inheritance, docstrings).
- Deep understanding of machine learning algorithms, model evaluation, and feature engineering. Experience with frameworks like scikit-learn, TensorFlow, or PyTorch.
- Experience leading a team and handling projects with end-to-end ownership is a must. A deep understanding of ML and deep learning is a must. Basic NLP experience is highly valuable, as are PySpark experience and competitive-coding experience (LeetCode).
- Strong expertise in statistical modelling techniques such as regression, clustering, time series analysis, and hypothesis testing.
- Experience building and deploying machine learning models in a cloud environment; Microsoft Azure preferred (Databricks, Synapse, Data Factory, etc.).
- Basic MLOps experience with FastAPI, Docker experience, and AI governance exposure are highly valuable.
- Ability to understand business objectives, market dynamics, and strategic priorities. Demonstrated experience translating data insights into tangible business outcomes and driving data-informed decision-making.
- Excellent verbal and written communication skills.
- Proven experience leading data science projects, managing timelines, and delivering results within deadlines.
- Strong collaboration skills with the ability to work effectively in cross-functional teams, build relationships, and foster a culture of knowledge sharing and continuous learning.

Cognitio Analytics is an equal-opportunity employer. We are committed to a work environment that celebrates diversity. We do not discriminate against any individual based on race, color, sex, national origin, age, religion, marital status, sexual orientation, gender identity, gender expression, military or veteran status, disability, or any factors protected by applicable law. All Cognitio employees are expected to understand and adhere to all Cognitio Security and Privacy related policies in order to protect Cognitio data and our clients' data. Our salary ranges are based on paying competitively for our size and industry and are one part of the total compensation package that also includes a bonus plan, equity, benefits, and other opportunities at Cognitio. Individual pay decisions are based on a number of factors, including qualifications for the role, experience level, and skillset.
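The "advanced Python (pytest, classes, inheritance, docstrings)" requirement above can be illustrated with a deliberately tiny toy, not any Cognitio code: a base class, a subclass, and a pytest-style check.

```python
class Model:
    """Base model defining a fit/predict interface (illustrative only)."""
    def fit(self, x, y):
        raise NotImplementedError

class MeanModel(Model):
    """Trivial regressor that always predicts the training-set mean."""
    def fit(self, x, y):
        self.mean_ = sum(y) / len(y)
        return self
    def predict(self, x):
        return [self.mean_] * len(x)

# pytest-style test (collected automatically when run under `pytest`)
def test_mean_model():
    m = MeanModel().fit([1, 2, 3], [2.0, 4.0, 6.0])
    assert m.predict([10, 20]) == [4.0, 4.0]

test_mean_model()
```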

Posted 1 month ago


0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

• Develop strategies/solutions to solve problems in logical yet creative ways, leveraging state-of-the-art machine learning, deep learning and Gen AI techniques.
• Technically lead a team of data scientists to produce project deliverables on time and with high quality.
• Identify and address client needs in different domains by analyzing large and complex data sets; processing, cleansing, and verifying the integrity of data; and performing exploratory data analysis (EDA) using state-of-the-art methods.
• Select features and build and optimize classifiers/regressors using machine learning and deep learning techniques.
• Enhance data collection procedures to include information relevant to building analytical systems, and ensure data quality and accuracy.
• Perform ad-hoc analysis and present results clearly to both technical and non-technical stakeholders.
• Create custom reports and presentations with strong data visualization and storytelling skills to effectively communicate analytical conclusions to senior company officials and other stakeholders.
• Expertise in data mining, EDA, feature selection, model building, and optimization using machine learning and deep learning techniques.
• Strong programming skills in Python.
• Excellent communication and interpersonal skills, with the ability to present complex analytical concepts to both technical and non-technical stakeholders.

Primary Skills:
- Excellent understanding of and hands-on experience with data science and machine learning techniques & algorithms for supervised and unsupervised problems, NLP, computer vision and Gen AI. Good applied statistics skills, such as distributions, statistical inference & testing, etc.
- Excellent understanding of and hands-on experience building deep-learning models for text and image analytics (such as ANNs, CNNs, LSTMs, transfer learning, encoder-decoder architectures, etc.).
- Proficient in coding in common data science languages & tools such as R and Python.
- Experience with common data science toolkits such as NumPy, Pandas, Matplotlib, StatsModels, scikit-learn, SciPy, NLTK, spaCy, OpenCV, etc.
- Experience with common data science frameworks such as TensorFlow, Keras, PyTorch, XGBoost, etc.
- Exposure to or knowledge of cloud platforms (Azure/AWS).
- Experience with deployment of models in production.
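The "build and optimize classifiers" responsibility above, using the scikit-learn toolkit the posting lists, can be sketched end to end on a bundled dataset (the dataset and model choice are illustrative, not part of the role):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a bundled dataset and hold out a test split
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Fit a simple classifier and evaluate on the held-out split
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"held-out accuracy: {acc:.2f}")
```

In practice the same pattern extends to feature selection and hyperparameter search via `sklearn.model_selection.GridSearchCV`.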

Posted 1 month ago


3.0 - 7.0 years

0 Lacs

Hyderābād

On-site

Job requisition ID: 81110
Date: Jul 3, 2025
Location: Hyderabad
Designation: Deputy Manager
Entity: Deloitte Touche Tohmatsu India LLP

Your potential, unleashed.
India's impact on the global economy has increased at an exponential rate, and Deloitte presents an opportunity to unleash and realise your potential amongst cutting-edge leaders and organisations shaping the future of the region, and indeed, the world beyond. At Deloitte, you can bring your whole self to work, every day. Combine that with our drive to propel with purpose, and you have the perfect playground to collaborate, innovate, grow, and make an impact that matters.

The Team
Innovation, transformation and leadership occur in many ways. At Deloitte, our ability to help solve clients' most complex issues is distinct. We deliver strategy and implementation, from a business and technology view, to help you lead in the markets where you compete. Learn more about our Tax Practice.

What impact will you make?
Every day, your work will make an impact that matters, while you thrive in a dynamic culture of inclusion, collaboration, and high performance. As the undisputed leader in professional services, Deloitte is where you'll find unrivalled opportunities to succeed and realize your full potential.

Work you'll do
Deloitte has institutionalized a new AI and Analytics capability for Tax Technology Consulting. This group is part of the Deloitte South Asia Tax & Legal function and focuses on embedding AI in everything we do, for our clients and for ourselves, across all businesses of Deloitte. You will be engaged in internal projects to disrupt the way we operate and focus on building assets and solutions for our clients, including the latest technologies and methods around predictive models, prescriptive analytics, generative AI, etc. We are looking for a highly skilled data scientist to join our dynamic team. The ideal candidate will have a solid background in artificial intelligence and machine learning, with hands-on experience in frameworks such as TensorFlow, PyTorch, scikit-learn, etc. The candidate should possess a deep understanding of data structures, algorithms, and distributed computing. Additionally, experience deploying machine learning models in production environments, working with various database systems, and familiarity with version control, containerization, and cloud platforms are essential for success in this role. Candidates with strong storyboarding skills and a penchant for converting AI-driven mathematical insights into stories will be given preference.

Responsibilities:
- Collaborate with cross-functional teams to translate business requirements into implementations of models, algorithms, and technologies.
- Execute the product road map and plan the programs and initiatives defined by the product owners.
- Independently solve complex business problems with minimal supervision, escalating more complex issues to the appropriate next level.
- Develop and maintain software programs, algorithms, dashboards, information tools, and queries to clean, model, integrate and evaluate data sets.
- Build and optimize pipelines for data intake, validation, and mining, as well as modelling and visualization, by applying best practices to the engineering of large data sets.
- Develop and implement advanced machine learning algorithms and models for various applications.
- Apply the latest advances in deep learning, machine learning and natural language processing to improve the performance of legacy models.
- Customize the latest available large language models to develop generative AI solutions for business problems across multiple functional areas.
- Apply an A/B-testing framework and test model quality.
- Take models to production using cloud technologies.
- Provide findings and analysis to support informed business decisions.
- Stay updated with the latest developments in AI/ML technologies and contribute to the continuous improvement of our systems.

Requirements:
- Minimum of 3-7 years of relevant work experience.
- Master's degree in a related field (Statistics, Mathematics or Computer Science) or MBA in Data Science/AI/Analytics.
- Experience with database systems such as MySQL, PostgreSQL, or MongoDB.
- Experience collecting and manipulating structured and unstructured data from multiple data systems (on-premises, cloud-based data sources, APIs, etc.).
- Familiarity with version control systems, preferably Git.
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
- Solid understanding of data structures, algorithms, and distributed computing.
- Excellent knowledge of Jupyter Notebooks for experimentation and prototyping.
- Strong programming skills in Python.
- In-depth understanding of machine learning, deep learning & natural language processing (NLP) algorithms.
- Experience with popular machine learning frameworks such as TensorFlow, PyTorch, or scikit-learn.
- Knowledge of containerization tools such as Docker.
- Experience deploying machine learning models in production environments.
- Excellent problem-solving and communication skills.
- Proficiency with data visualization tools such as Tableau or Matplotlib, or dashboarding packages like Flask and Streamlit.
- Good working knowledge of MS PowerPoint and storyboarding skills to translate mathematical results into business insights.

Your role as a leader
At Deloitte India, we believe in the importance of leadership at all levels. We expect our people to embrace and live our purpose by challenging themselves to identify issues that are most important for our clients, our people, and for society, and to make an impact that matters. In addition to living our purpose, executives across our organization:
o Build their own understanding of our purpose and values; explore opportunities for impact.
o Demonstrate a strong commitment to personal learning and development; act as a brand ambassador to help attract top talent.
o Understand expectations and demonstrate personal accountability for keeping performance on track.
o Actively focus on developing effective communication and relationship-building skills.
o Understand how their daily work contributes to the priorities of the team and the business.

How you'll grow
At Deloitte, our professional development plan focuses on helping people at every level of their career to identify and use their strengths to do their best work every day. From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to help build world-class skills in addition to hands-on experience in the global, fast-changing business world. From on-the-job learning experiences to formal development programs at Deloitte University, our professionals have a variety of opportunities to continue to grow throughout their career. Explore Deloitte University, The Leadership Center.

Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our purpose
Deloitte is led by a purpose: to make an impact that matters. Every day, Deloitte people are making a real impact in the places they live and work. We pride ourselves on doing not only what is good for clients, but also what is good for our people and the communities in which we live and work—always striving to be an organization that is held up as a role model of quality, integrity, and positive change. Learn more about Deloitte's impact on the world.

Recruiter tips
We want job seekers exploring opportunities at Deloitte to feel prepared and confident. To help you with your interview, we suggest that you do your research: know some background about the organization and the business area you're applying to.
Check out recruiting tips from Deloitte professionals. Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company limited by guarantee (“DTTL”), its network of member firms, and their related entities. DTTL and each of its member firms are legally separate and independent entities. DTTL (also referred to as “Deloitte Global”) does not provide services to clients. Please see www.deloitte.com/about for a more detailed description of DTTL and its member firms. This communication is for internal distribution and use only among personnel of Deloitte Touche Tohmatsu Limited, its member firms, and their related entities (collectively, the “Deloitte network”). None of the Deloitte network shall be responsible for any loss whatsoever sustained by any person who relies on this communication. © 2025. For information, contact Deloitte Touche Tohmatsu Limited
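The A/B-testing responsibility in the posting above can be sketched as a two-proportion z-test with SciPy; the visitor and conversion counts are invented for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical A/B test: conversions out of visitors per variant
conversions = np.array([120, 150])
visitors = np.array([2400, 2450])
rates = conversions / visitors

# Pooled two-proportion z-test for the difference in conversion rates
p_pool = conversions.sum() / visitors.sum()
se = np.sqrt(p_pool * (1 - p_pool) * (1 / visitors[0] + 1 / visitors[1]))
z = (rates[1] - rates[0]) / se
p_value = 2 * stats.norm.sf(abs(z))
print(f"z = {z:.2f}, p = {p_value:.3f}")
```

Whether the observed lift is "significant" then depends on the alpha threshold the experiment was designed with.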

Posted 1 month ago


2.0 - 4.0 years

0 Lacs

Hyderābād

On-site

Job requisition ID: 81109
Date: Jul 3, 2025
Location: Hyderabad
Designation: Senior Executive
Entity: Deloitte Touche Tohmatsu India LLP

Your potential, unleashed.
India's impact on the global economy has increased at an exponential rate, and Deloitte presents an opportunity to unleash and realise your potential amongst cutting-edge leaders and organisations shaping the future of the region, and indeed, the world beyond. At Deloitte, you can bring your whole self to work, every day. Combine that with our drive to propel with purpose, and you have the perfect playground to collaborate, innovate, grow, and make an impact that matters.

The Team
Deloitte delivers deep knowledge of tax and statutory requirements as well as a breadth of experience applying them in practice worldwide. Practical tax advice combined with our consistent tax compliance framework instils confidence that a consistent approach is followed across jurisdictions. We help simplify tax management and oversight while providing global visibility for making informed strategic decisions ― all with the ease of working with a global provider. Learn more about our Tax Practice.

Your work profile
Deloitte has institutionalized a new AI and Analytics capability for Tax Technology Consulting. This group is part of the Deloitte South Asia Tax & Legal function and focuses on embedding AI in everything we do, for our clients and for ourselves, across all businesses of Deloitte. You will be engaged in internal projects to disrupt the way we operate and focus on building assets and solutions for our clients, including the latest technologies and methods around predictive models, prescriptive analytics, generative AI, etc. We are looking for a highly skilled data scientist to join our dynamic team. The ideal candidate will have a solid background in artificial intelligence and machine learning, with hands-on experience in frameworks such as TensorFlow, PyTorch, scikit-learn, etc. The candidate should possess a deep understanding of data structures, algorithms, and distributed computing. Additionally, experience deploying machine learning models in production environments, working with various database systems, and familiarity with version control, containerization, and cloud platforms are essential for success in this role. Candidates with strong storyboarding skills and a penchant for converting AI-driven mathematical insights into stories will be given preference.

Responsibilities:
- Collaborate with cross-functional teams to translate business requirements into implementations of models, algorithms, and technologies.
- Execute the product road map and plan the programs and initiatives defined by the product owners.
- Independently solve complex business problems with minimal supervision, escalating more complex issues to the appropriate next level.
- Develop and maintain software programs, algorithms, dashboards, information tools, and queries to clean, model, integrate and evaluate data sets.
- Build and optimize pipelines for data intake, validation, and mining, as well as modelling and visualization, by applying best practices to the engineering of large data sets.
- Develop and implement advanced machine learning algorithms and models for various applications.
- Apply the latest advances in deep learning, machine learning and natural language processing to improve the performance of legacy models.
- Customize the latest available large language models to develop generative AI solutions for business problems across multiple functional areas.
- Apply an A/B-testing framework and test model quality.
- Take models to production using cloud technologies.
- Provide findings and analysis to support informed business decisions.
- Stay updated with the latest developments in AI/ML technologies and contribute to the continuous improvement of our systems.

Requirements:
- Minimum of 2-4 years of relevant work experience.
- Master's degree in a related field (Statistics, Mathematics or Computer Science) or MBA in Data Science/AI/Analytics.
- Experience with database systems such as MySQL, PostgreSQL, or MongoDB.
- Experience collecting and manipulating structured and unstructured data from multiple data systems (on-premises, cloud-based data sources, APIs, etc.).
- Familiarity with version control systems, preferably Git.
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
- Solid understanding of data structures, algorithms, and distributed computing.
- Excellent knowledge of Jupyter Notebooks for experimentation and prototyping.
- Strong programming skills in Python.
- In-depth understanding of machine learning, deep learning & natural language processing (NLP) algorithms.
- Experience with popular machine learning frameworks such as TensorFlow, PyTorch, or scikit-learn.
- Knowledge of containerization tools such as Docker.
- Experience deploying machine learning models in production environments.
- Excellent problem-solving and communication skills.
- Proficiency with data visualization tools such as Tableau or Matplotlib, or dashboarding packages like Flask and Streamlit.
- Good working knowledge of MS PowerPoint and storyboarding skills to translate mathematical results into business insights.

Posted 1 month ago


3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description
AI engineer with Python experience developing applications powered by LLMs and integrating with data warehouses such as GCP BigQuery and other standard data sources.

Responsibilities
Design, develop, and maintain core functionalities and backend services using Python, focusing on AI and LLM integration.
Integrate Large Language Models (LLMs) such as OpenAI GPT, Llama, or others into applications to create intelligent, AI-powered features.
Explore and apply LLM capabilities, including summarization, classification, RAG (Retrieval-Augmented Generation), prompt engineering, and prompt pipelines.
Develop and implement efficient data processing pipelines for structured and unstructured data, ensuring data quality for AI models.
Collaborate with cross-functional teams (e.g., product managers, data scientists, DevOps) to define, design, and ship new AI features and integrate LLMs effectively.
Write clean, maintainable, well-tested, and well-documented Python code, adhering to best practices and coding standards.
Ensure the reliability, performance, scalability, and security of AI/LLM-based applications, identifying and correcting bottlenecks.
Conduct technical analysis of tasks, participate actively in scrum meetings, and deliver the value committed for sprints.
Stay up to date with the latest advancements in generative AI, LLM architectures, machine learning, and related technologies, sharing insights with the team.
Participate in code reviews, contribute to technical improvements, and assist in troubleshooting and debugging issues.

Qualifications

Required Skills and Qualifications:
3+ years of proven experience in Python software development (full stack or backend), with a strong emphasis on backend development.
Strong knowledge of Python data structures and algorithms.
Proficiency with Python and relevant libraries such as Pandas, NumPy, SciPy, scikit-learn, PyTorch, TensorFlow, and Matplotlib.
Solid understanding of machine learning concepts and algorithms.
Experience with REST APIs and building scalable backend services.
Familiarity with database technologies (e.g., PostgreSQL, MongoDB, SQL/NoSQL).
Familiarity or experience with cloud technologies (AWS, GCP, Azure, etc.).
Experience with version control systems, particularly Git.
Strong problem-solving skills, analytical abilities, and attention to detail.
Excellent communication and collaboration skills, with the ability to explain complex technical concepts clearly.

Preferred Skills and Qualifications (Nice to Have):
Hands-on experience with Large Language Models (LLMs) using RAG and their application in real-world scenarios.
Familiarity with data quality and data governance concepts.
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related technical field.
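The RAG capability mentioned in this posting boils down to retrieving the documents most relevant to a query and prepending them to the model prompt. A minimal retrieval sketch in plain Python, using toy bag-of-words vectors and cosine similarity (the corpus, query, and scoring here are illustrative assumptions, not this employer's stack):

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts for a lowercased, whitespace-split text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, corpus, k=2):
    """Return the k corpus documents most similar to the query."""
    q = vectorize(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]

# Invented toy corpus; a production system would use learned embeddings
# and a vector store instead of raw term counts.
corpus = [
    "BigQuery stores analytics tables for reporting",
    "LLM prompt pipelines chain templates together",
    "Cats are popular pets",
]
context = retrieve("how do prompt pipelines use an LLM", corpus, k=1)
prompt = "Answer using this context:\n" + "\n".join(context)
print(context[0])
```

The retrieved text would then be spliced into the prompt sent to the LLM; swapping the toy vectors for real embeddings leaves this control flow unchanged.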

Posted 1 month ago

Apply

2.0 years

0 Lacs

Mohali

Remote

Job Title: AI Trainer (Project-Based / Freelance)
Location: On-Site
Job Type: Project-Based / Freelance
Experience Required: 2+ years in AI/ML training or practical AI development

Job Summary:
We are seeking a skilled AI Trainer on a project basis to conduct hands-on training sessions for interns and new employees. The ideal candidate should have strong expertise in AI tools and the ability to explain AI/ML concepts clearly while providing practical training.

Key Responsibilities:
Deliver project-based training programs on AI/ML fundamentals and tools.
Train interns and freshers on essential AI tools and frameworks, including:
- Python and libraries such as NumPy, Pandas, scikit-learn, TensorFlow, Keras, and PyTorch
- Jupyter Notebook / Google Colab for practical coding
- OpenAI tools (e.g., ChatGPT, API usage)
- NLP libraries such as spaCy and NLTK
- Data visualization tools such as Matplotlib and Seaborn
- Version control using Git/GitHub
Prepare training materials, hands-on assignments, and evaluations.
Provide feedback and support to trainees during the training period.
Update training content based on the latest AI developments.
Work remotely with flexible hours, delivering sessions according to project schedules.

Qualifications:
Bachelor's or Master's degree in Computer Science, Data Science, AI, or a related field.
Minimum 2 years' experience in AI/ML development and/or training.
Proven ability to train or mentor beginners in AI tools and technologies.
Excellent communication and presentation skills.
Self-motivated and able to manage training projects independently.

Preferred:
Experience with cloud AI platforms (AWS, GCP, Azure).
Knowledge of Generative AI tools.
Relevant certifications in AI/ML training.

Job Types: Contractual / Temporary, Freelance
Contract length: 3 months
Pay: From ₹2,000.00 per month
Schedule: Day shift / Morning shift
Supplemental Pay: Commission pay
Language: English (Preferred)
Work Location: In person
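A typical hands-on assignment a trainer in this role might set covers first-pass exploratory statistics with Pandas. A small sketch (the cohort/score dataset is invented for illustration):

```python
import pandas as pd

# Invented trainee dataset: quiz scores by cohort.
df = pd.DataFrame({
    "cohort": ["A", "A", "B", "B", "B"],
    "score":  [70, 80, 60, 90, 72],
})

# A first EDA exercise: overall summary statistics, then a group-by mean.
summary = df["score"].describe()
by_cohort = df.groupby("cohort")["score"].mean()
print(by_cohort["A"], by_cohort["B"])
```

From here a trainee would typically plot `by_cohort` with Matplotlib or Seaborn, which is why those libraries appear in the tool list above.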

Posted 1 month ago

Apply

5.0 - 10.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Requirements

Role/Job Title: Senior Data Scientist
Function/Department: Data & Analytics

Job Purpose
In this specialized role, you will leverage your expertise in machine learning and statistics to derive valuable insights from data. Your role will include developing predictive models, interpreting data, and working closely with our ML engineers to ensure the effective deployment and functioning of these models.

Key / Primary Responsibilities
Lead cross-functional teams in the design, development, and deployment of Generative AI solutions, with a strong focus on Large Language Models (LLMs).
Architect, train, and fine-tune state-of-the-art LLMs (e.g., GPT, BERT, T5) for various business applications, ensuring alignment with project goals.
Deploy and scale LLM-based solutions, integrating them seamlessly into production environments and optimizing for performance and efficiency.
Develop and maintain machine learning workflows and pipelines for training, evaluating, and deploying Generative AI models, using Python or R and leveraging libraries like Hugging Face Transformers, TensorFlow, and PyTorch.
Collaborate with product, data, and engineering teams to define and refine use cases for LLM applications such as conversational agents, content generation, and semantic search.
Design and implement fine-tuning strategies to adapt pre-trained models to domain-specific tasks, ensuring high relevance and accuracy.
Evaluate and optimize LLM performance, including handling challenges such as prompt engineering, inference time, and model bias.
Manage and process large, unstructured datasets using SQL and NoSQL databases, ensuring smooth integration with AI models.
Build and deploy AI-driven APIs and services, providing scalable access to LLM-based solutions.
Use data visualization tools (e.g., Matplotlib, Seaborn, Tableau) to communicate AI model performance, insights, and results to non-technical stakeholders.

Secondary Responsibilities
Contribute to data analysis projects, with a strong emphasis on text analytics, natural language understanding, and Generative AI applications.
Build, validate, and deploy predictive models specifically tailored to text data, including models for text generation, classification, and entity recognition.
Handle large, unstructured text datasets, performing essential preprocessing and data cleaning steps, such as tokenization, lemmatization, and noise removal, for machine learning and NLP tasks.
Work with cutting-edge text data processing techniques, ensuring high-quality input for training and fine-tuning Large Language Models (LLMs).
Collaborate with cross-functional teams to develop and deploy scalable AI-powered solutions that process and analyze textual data at scale.

Key Success Metrics
Ensure timely deliverables.
Spot training-infrastructure fixes.
Lead technical aspects of the projects.
Error-free deliverables.

Education Qualification
Graduation: Bachelor of Science (B.Sc) / Bachelor of Technology (B.Tech) / Bachelor of Computer Applications (BCA)
Post-Graduation: Master of Science (M.Sc) / Master of Technology (M.Tech) / Master of Computer Applications (MCA)
Experience: 5-10 years of relevant experience
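The text preprocessing listed under Secondary Responsibilities (tokenization, noise removal, stopword handling) can be sketched in a few lines of plain Python. This is a deliberately minimal illustration; the stopword list is an invented placeholder, and production pipelines would use spaCy or NLTK for lemmatization:

```python
import re

STOPWORDS = {"the", "a", "an", "is", "and"}  # tiny illustrative list only

def preprocess(text):
    """Lowercase, strip punctuation/noise, tokenize, and drop stopwords."""
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s]", " ", text)  # replace punctuation/symbols with spaces
    tokens = text.split()
    return [t for t in tokens if t not in STOPWORDS]

print(preprocess("The model's F1-score is 0.92, and rising!"))
```

Output of a step like this becomes the high-quality input the posting asks for when training or fine-tuning LLMs.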

Posted 1 month ago

Apply

3.0 years

0 Lacs

Chennai

On-site

AI engineer with Python experience developing applications powered by LLMs and integrating with data warehouses such as GCP BigQuery and other standard data sources.

Required Skills and Qualifications:
3+ years of proven experience in Python software development (full stack or backend), with a strong emphasis on backend development.
Strong knowledge of Python data structures and algorithms.
Proficiency with Python and relevant libraries such as Pandas, NumPy, SciPy, scikit-learn, PyTorch, TensorFlow, and Matplotlib.
Solid understanding of machine learning concepts and algorithms.
Experience with REST APIs and building scalable backend services.
Familiarity with database technologies (e.g., PostgreSQL, MongoDB, SQL/NoSQL).
Familiarity or experience with cloud technologies (AWS, GCP, Azure, etc.).
Experience with version control systems, particularly Git.
Strong problem-solving skills, analytical abilities, and attention to detail.
Excellent communication and collaboration skills, with the ability to explain complex technical concepts clearly.

Preferred Skills and Qualifications (Nice to Have):
Hands-on experience with Large Language Models (LLMs) using RAG and their application in real-world scenarios.
Familiarity with data quality and data governance concepts.
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related technical field.

Responsibilities:
Design, develop, and maintain core functionalities and backend services using Python, focusing on AI and LLM integration.
Integrate Large Language Models (LLMs) such as OpenAI GPT, Llama, or others into applications to create intelligent, AI-powered features.
Explore and apply LLM capabilities, including summarization, classification, RAG (Retrieval-Augmented Generation), prompt engineering, and prompt pipelines.
Develop and implement efficient data processing pipelines for structured and unstructured data, ensuring data quality for AI models.
Collaborate with cross-functional teams (e.g., product managers, data scientists, DevOps) to define, design, and ship new AI features and integrate LLMs effectively.
Write clean, maintainable, well-tested, and well-documented Python code, adhering to best practices and coding standards.
Ensure the reliability, performance, scalability, and security of AI/LLM-based applications, identifying and correcting bottlenecks.
Conduct technical analysis of tasks, participate actively in scrum meetings, and deliver the value committed for sprints.
Stay up to date with the latest advancements in generative AI, LLM architectures, machine learning, and related technologies, sharing insights with the team.
Participate in code reviews, contribute to technical improvements, and assist in troubleshooting and debugging issues.
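Of the LLM capabilities this posting lists, classification is the easiest to prototype classically before reaching for an LLM. A minimal sketch with scikit-learn's TfidfVectorizer and LogisticRegression on an invented toy dataset (the ticket texts and labels are illustrative assumptions, not anything from the employer):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy support tickets: route text to "billing" or "technical".
texts = [
    "invoice charged twice this month",
    "refund my last payment please",
    "app crashes when I open settings",
    "error 500 after the latest update",
]
labels = ["billing", "billing", "technical", "technical"]

# TF-IDF features feeding a linear classifier, wrapped in one pipeline.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["double charge on my invoice"])[0])
```

A classical baseline like this also gives a yardstick for deciding whether an LLM-backed classifier is worth its latency and cost.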

Posted 1 month ago

Apply

7.0 - 12.0 years

0 Lacs

Chennai

On-site

Job Summary:
We are looking for a skilled Python Developer with 7 to 12 years of experience to design, develop, and maintain high-quality back-end systems and applications. The ideal candidate will have expertise in Python and related frameworks, with a focus on building scalable, secure, and efficient software solutions. This role requires a strong problem-solving mindset, collaboration with cross-functional teams, and a commitment to delivering innovative solutions that meet business objectives.

Responsibilities

Application and Back-End Development:
Design, implement, and maintain back-end systems and APIs using Python frameworks such as Django, Flask, or FastAPI, focusing on scalability, security, and efficiency.
Build and integrate scalable RESTful APIs, ensuring seamless interaction between front-end systems and back-end services.
Write modular, reusable, and testable code following Python's PEP 8 coding standards and industry best practices.
Develop and optimize robust database schemas for relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB), ensuring efficient data storage and retrieval.
Leverage cloud platforms like AWS, Azure, or Google Cloud for deploying scalable back-end solutions.
Implement caching mechanisms using tools like Redis or Memcached to optimize performance and reduce latency.

AI/ML Development:
Build, train, and deploy machine learning (ML) models for real-world applications, such as predictive analytics, anomaly detection, natural language processing (NLP), recommendation systems, and computer vision.
Work with popular machine learning and AI libraries/frameworks, including TensorFlow, PyTorch, Keras, and scikit-learn, to design custom models tailored to business needs.
Process, clean, and analyze large datasets using Python tools such as Pandas, NumPy, and PySpark to enable efficient data preparation and feature engineering.
Develop and maintain pipelines for data preprocessing, model training, validation, and deployment using tools like MLflow, Apache Airflow, or Kubeflow.
Deploy AI/ML models into production environments and expose them as RESTful or GraphQL APIs for integration with other services.
Optimize machine learning models to reduce computational costs and ensure smooth operation in production systems.
Collaborate with data scientists and analysts to validate models, assess their performance, and ensure their alignment with business objectives.
Implement model monitoring and lifecycle management to maintain accuracy over time, addressing data drift and retraining models as necessary.
Experiment with cutting-edge AI techniques such as deep learning, reinforcement learning, and generative models to identify innovative solutions for complex challenges.
Ensure ethical AI practices, including transparency, bias mitigation, and fairness in deployed models.

Performance Optimization and Debugging:
Identify and resolve performance bottlenecks in applications and APIs to enhance efficiency.
Use profiling tools to debug and optimize code for memory and speed improvements.
Implement caching mechanisms to reduce latency and improve application responsiveness.

Testing, Deployment, and Maintenance:
Write and maintain unit tests, integration tests, and end-to-end tests using Pytest, Unittest, or Nose.
Collaborate on setting up CI/CD pipelines to automate testing, building, and deployment processes.
Deploy and manage applications in production environments with a focus on security, monitoring, and reliability.
Monitor and troubleshoot live systems, ensuring uptime and responsiveness.

Collaboration and Teamwork:
Work closely with front-end developers, designers, and product managers to implement new features and resolve issues.
Participate in Agile ceremonies, including sprint planning, stand-ups, and retrospectives, to ensure smooth project delivery.
Provide mentorship and technical guidance to junior developers, promoting best practices and continuous improvement.

Required Skills and Qualifications

Technical Expertise:
Strong proficiency in Python and its core libraries, with hands-on experience in frameworks such as Django, Flask, or FastAPI.
Solid understanding of RESTful API development, integration, and optimization.
Experience working with relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB).
Familiarity with containerization tools like Docker and orchestration platforms like Kubernetes.
Expertise in using Git for version control and collaborating in distributed teams.
Knowledge of CI/CD pipelines and tools like Jenkins, GitHub Actions, or CircleCI.
Strong understanding of software development principles, including OOP, design patterns, and MVC architecture.

Preferred Skills:
Experience with asynchronous programming using libraries like asyncio, Celery, or RabbitMQ.
Knowledge of data visualization tools (e.g., Matplotlib, Seaborn, Plotly) for generating insights.
Exposure to machine learning frameworks (e.g., TensorFlow, PyTorch, scikit-learn) is a plus.
Familiarity with big data frameworks like Apache Spark or Hadoop.
Experience with serverless architecture using AWS Lambda, Azure Functions, or Google Cloud Run.

Soft Skills:
Strong problem-solving abilities with a keen eye for detail and quality.
Excellent communication skills to effectively collaborate with cross-functional teams.
Adaptability to changing project requirements and emerging technologies.
Self-motivated with a passion for continuous learning and innovation.

Education:
Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.

Job Features
Job Category: Software Division
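The caching responsibilities above (Redis/Memcached, latency reduction) rest on one core idea: keyed values with a time-to-live. A minimal in-process sketch in plain Python, not a substitute for Redis itself (class and key names are invented for illustration):

```python
import time

class TTLCache:
    """In-process key-value cache with per-entry expiry: the core idea
    behind Redis/Memcached-style caching (illustrative sketch only)."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expires_at)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazily evict expired entries
            return default
        return value

cache = TTLCache(ttl_seconds=0.05)
cache.set("user:42", {"name": "Asha"})
print(cache.get("user:42"))  # fresh hit
time.sleep(0.06)
print(cache.get("user:42"))  # expired, falls back to None
```

In production the same get/set-with-TTL pattern is expressed through a Redis client (e.g., `SET key value EX seconds`), which adds what this sketch lacks: sharing across processes and eviction under memory pressure.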

Posted 1 month ago

Apply

3.0 - 6.0 years

3 - 4 Lacs

Chennai

On-site

Job Summary:
We are looking for a skilled Python Developer with 3 to 6 years of experience to design, develop, and maintain high-quality back-end systems and applications. The ideal candidate will have expertise in Python and related frameworks, with a focus on building scalable, secure, and efficient software solutions. This role requires a strong problem-solving mindset, collaboration with cross-functional teams, and a commitment to delivering innovative solutions that meet business objectives.

Responsibilities

Application and Back-End Development:
Design, implement, and maintain back-end systems and APIs using Python frameworks such as Django, Flask, or FastAPI, focusing on scalability, security, and efficiency.
Build and integrate scalable RESTful APIs, ensuring seamless interaction between front-end systems and back-end services.
Write modular, reusable, and testable code following Python's PEP 8 coding standards and industry best practices.
Develop and optimize robust database schemas for relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB), ensuring efficient data storage and retrieval.
Leverage cloud platforms like AWS, Azure, or Google Cloud for deploying scalable back-end solutions.
Implement caching mechanisms using tools like Redis or Memcached to optimize performance and reduce latency.

AI/ML Development:
Build, train, and deploy machine learning (ML) models for real-world applications, such as predictive analytics, anomaly detection, natural language processing (NLP), recommendation systems, and computer vision.
Work with popular machine learning and AI libraries/frameworks, including TensorFlow, PyTorch, Keras, and scikit-learn, to design custom models tailored to business needs.
Process, clean, and analyze large datasets using Python tools such as Pandas, NumPy, and PySpark to enable efficient data preparation and feature engineering.
Develop and maintain pipelines for data preprocessing, model training, validation, and deployment using tools like MLflow, Apache Airflow, or Kubeflow.
Deploy AI/ML models into production environments and expose them as RESTful or GraphQL APIs for integration with other services.
Optimize machine learning models to reduce computational costs and ensure smooth operation in production systems.
Collaborate with data scientists and analysts to validate models, assess their performance, and ensure their alignment with business objectives.
Implement model monitoring and lifecycle management to maintain accuracy over time, addressing data drift and retraining models as necessary.
Experiment with cutting-edge AI techniques such as deep learning, reinforcement learning, and generative models to identify innovative solutions for complex challenges.
Ensure ethical AI practices, including transparency, bias mitigation, and fairness in deployed models.

Performance Optimization and Debugging:
Identify and resolve performance bottlenecks in applications and APIs to enhance efficiency.
Use profiling tools to debug and optimize code for memory and speed improvements.
Implement caching mechanisms to reduce latency and improve application responsiveness.

Testing, Deployment, and Maintenance:
Write and maintain unit tests, integration tests, and end-to-end tests using Pytest, Unittest, or Nose.
Collaborate on setting up CI/CD pipelines to automate testing, building, and deployment processes.
Deploy and manage applications in production environments with a focus on security, monitoring, and reliability.
Monitor and troubleshoot live systems, ensuring uptime and responsiveness.

Collaboration and Teamwork:
Work closely with front-end developers, designers, and product managers to implement new features and resolve issues.
Participate in Agile ceremonies, including sprint planning, stand-ups, and retrospectives, to ensure smooth project delivery.
Provide mentorship and technical guidance to junior developers, promoting best practices and continuous improvement.

Required Skills and Qualifications

Technical Expertise:
Strong proficiency in Python and its core libraries, with hands-on experience in frameworks such as Django, Flask, or FastAPI.
Solid understanding of RESTful API development, integration, and optimization.
Experience working with relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB).
Familiarity with containerization tools like Docker and orchestration platforms like Kubernetes.
Expertise in using Git for version control and collaborating in distributed teams.
Knowledge of CI/CD pipelines and tools like Jenkins, GitHub Actions, or CircleCI.
Strong understanding of software development principles, including OOP, design patterns, and MVC architecture.

Preferred Skills:
Experience with asynchronous programming using libraries like asyncio, Celery, or RabbitMQ.
Knowledge of data visualization tools (e.g., Matplotlib, Seaborn, Plotly) for generating insights.
Exposure to machine learning frameworks (e.g., TensorFlow, PyTorch, scikit-learn) is a plus.
Familiarity with big data frameworks like Apache Spark or Hadoop.
Experience with serverless architecture using AWS Lambda, Azure Functions, or Google Cloud Run.

Soft Skills:
Strong problem-solving abilities with a keen eye for detail and quality.
Excellent communication skills to effectively collaborate with cross-functional teams.
Adaptability to changing project requirements and emerging technologies.
Self-motivated with a passion for continuous learning and innovation.

Education:
Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.

Job Features
Job Category: Software Division
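The testing responsibility above (unit tests with Pytest or Unittest) can be sketched concretely. The function under test here, `paginate`, is invented purely for illustration; the point is the pytest convention of plain `test_*` functions containing bare assertions:

```python
# A function under test (invented for illustration) plus pytest-style tests.
def paginate(items, page, per_page):
    """Return one page of a list; pages are 1-indexed."""
    if page < 1 or per_page < 1:
        raise ValueError("page and per_page must be >= 1")
    start = (page - 1) * per_page
    return items[start:start + per_page]

# pytest collects any function named test_*; run with `pytest this_file.py`.
def test_first_page():
    assert paginate([1, 2, 3, 4, 5], page=1, per_page=2) == [1, 2]

def test_last_partial_page():
    assert paginate([1, 2, 3, 4, 5], page=3, per_page=2) == [5]

def test_invalid_page():
    try:
        paginate([], page=0, per_page=2)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

The same three cases would be a `unittest.TestCase` with `assertEqual`/`assertRaises` under the Unittest alternative the posting mentions.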

Posted 1 month ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Role Overview
We are looking for a confident Security Engineer/Researcher with experience in IT security for our Core Research labs in India. McAfee believes that no one person, product, or organization can fight cybercrime alone. It's why we rebuilt McAfee around the idea of working together. Life at McAfee is full of possibility. You’ll have the freedom to explore challenges, take smart risks, and reach your potential in one of the fastest-growing industries in the world. You’ll be part of a team that supports and inspires you.

This is a hybrid position based in Bangalore. You must be within a commutable distance from the location. You will be required to be onsite on an as-needed basis; when not working onsite, you will work remotely from your home location.

About The Role
Understand threat telemetry trends and identify patterns to reduce time to detect.
Develop automation to harvest malware threat intelligence from various sources such as product telemetry, OSINT, Dark Web monitoring, spam monitoring, etc.
Develop early identification and alert systems for threats based on various online platforms and product telemetry.
Utilize various data mining tools that analyze data inline based on intelligence inputs.
Analyze malware communication and techniques to find Indicators of Compromise (IOC) or Indicators of Attack (IOA).
Author descriptions for malware via the McAfee Virus Information Library, Threat Advisories, whitepapers, or blogs.

About You
You should have 7+ years of experience as a security/threat/malware analyst.
Programming skills: knowledge of programming languages like Python and its packages such as NumPy, Matplotlib, and Seaborn is desirable. Access to data sources such as Spark and SQL is desirable. Machine learning knowledge is an added advantage. Familiarity with UI and dashboard tools like Jupyter and Databricks is an added advantage.
Excellent communication skills: it is incredibly important to describe findings to both technical and non-technical audiences.

Company Overview
McAfee is a leader in personal security for consumers. Focused on protecting people, not just devices, McAfee consumer solutions adapt to users’ needs in an always-online world, empowering them to live securely through integrated, intuitive solutions that protect their families and communities with the right security at the right moment.

Company Benefits And Perks
We work hard to embrace diversity and inclusion and encourage everyone at McAfee to bring their authentic selves to work every day. We offer a variety of social programs, flexible work hours and family-friendly benefits to all of our employees.
Bonus Program
Pension and Retirement Plans
Medical, Dental and Vision Coverage
Paid Time Off
Paid Parental Leave
Support for Community Involvement

We're serious about our commitment to diversity, which is why McAfee prohibits discrimination based on race, color, religion, gender, national origin, age, disability, veteran status, marital status, pregnancy, gender expression or identity, sexual orientation or any other legally protected status.
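Harvesting Indicators of Compromise from raw text, as this role describes, often starts with simple pattern extraction. A stdlib-only sketch with deliberately simplified regexes and an invented sample log (real IOC pipelines also handle defanged indicators like `hxxp://`, validate octet ranges, and filter false positives):

```python
import re

# Simplified patterns for illustration; production extraction needs
# stricter validation and many more indicator types (domains, URLs, MD5...).
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
SHA256_RE = re.compile(r"\b[a-fA-F0-9]{64}\b")

def extract_iocs(text):
    """Pull candidate IP and SHA-256 indicators out of raw text, deduplicated."""
    return {
        "ips": sorted(set(IP_RE.findall(text))),
        "sha256": sorted(set(SHA256_RE.findall(text))),
    }

# Invented log fragment for demonstration.
log = (
    "beacon to 203.0.113.7 every 60s; dropper hash "
    "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa "
    "second callback 203.0.113.7"
)
print(extract_iocs(log))
```

Extracted candidates would then be enriched against telemetry and intelligence feeds before being published as IOCs.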

Posted 1 month ago

Apply

0 years

0 Lacs

India

Remote

Data Science Intern
📍 Location: Remote (100% Virtual)
📅 Duration: 3 Months
💸 Stipend for Top Interns: ₹15,000
🎁 Perks: Certificate | Letter of Recommendation | Full-Time Offer (Based on Performance)

About INLIGHN TECH
INLIGHN TECH is a fast-growing edtech startup offering hands-on, project-based virtual internships designed to prepare students and fresh graduates for today’s tech-driven industry. The Data Science Internship focuses on real-world applications of machine learning, statistics, and data engineering to solve meaningful problems.

🚀 Internship Overview
As a Data Science Intern, you'll explore large datasets, build models, and deliver predictive insights. You'll work with machine learning algorithms, perform data wrangling, and communicate your results with visualizations and reports.

🔧 Key Responsibilities
Collect, clean, and preprocess structured and unstructured data
Apply machine learning models for regression, classification, clustering, and NLP
Work with tools like Python, Jupyter Notebook, scikit-learn, TensorFlow, and Pandas
Conduct exploratory data analysis (EDA) to discover trends and insights
Visualize data using Matplotlib, Seaborn, or Power BI/Tableau
Collaborate with other interns and mentors in regular review and feedback sessions
Document your work clearly and present findings to the team

✅ Qualifications
Pursuing or recently completed a degree in Data Science, Computer Science, Statistics, or a related field
Proficiency in Python and understanding of libraries such as Pandas, NumPy, scikit-learn
Basic knowledge of machine learning algorithms and statistical concepts
Familiarity with data visualization tools and SQL
Problem-solving mindset and keen attention to detail
Enthusiastic about learning and applying data science to real-world problems

🎓 What You’ll Gain
Hands-on experience working with real datasets and ML models
A portfolio of projects that demonstrate your data science capabilities
Internship Certificate upon successful completion
Letter of Recommendation for top-performing interns
Opportunity for a Full-Time Offer based on performance
Exposure to industry-standard tools, workflows, and best practices
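The "collect, clean, and preprocess" responsibility in this internship usually looks like the Pandas idioms below: coercing types, imputing missing values, and normalizing text. The raw table is invented for illustration:

```python
import pandas as pd

# Invented raw data: numeric values stored as strings, one missing age,
# and inconsistently cased/padded city names.
raw = pd.DataFrame({
    "age":  ["25", "31", None, "40"],
    "city": ["  Delhi", "mumbai ", "Pune", "pune"],
})

clean = raw.copy()
clean["age"] = pd.to_numeric(clean["age"])                 # strings -> numbers (None -> NaN)
clean["age"] = clean["age"].fillna(clean["age"].median())  # impute missing with the median
clean["city"] = clean["city"].str.strip().str.title()      # trim whitespace, normalize case

print(clean["age"].tolist(), clean["city"].tolist())
```

A cleaned frame like this is then the input to the EDA and modeling steps listed above.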

Posted 1 month ago

Apply

8.0 - 10.0 years

0 Lacs

Thane, Maharashtra, India

On-site

Job Title: Senior AI/ML Developer
Experience: 8-10 Years
Location: Mumbai
Job Type: Full-Time

Key Responsibilities:
Lead the design and development of machine learning models using Python, TensorFlow, and other AI/ML frameworks.
Build, train, and optimize machine learning models to improve business processes and outcomes.
Work with large datasets in distributed environments using PySpark, Hadoop, and Hive.
Analyze and preprocess data, clean datasets, and implement feature engineering techniques.
Collaborate with data scientists, engineers, and product teams to deliver AI-powered solutions.
Conduct model analysis and performance evaluations, ensuring the accuracy and effectiveness of ML models.
Maintain and document machine learning workflows and processes in a collaborative environment.
Utilize Git for version control and JIRA for task management and project tracking.
Continuously monitor and improve model performance, ensuring scalability and efficiency.
Stay updated with the latest trends and advancements in AI/ML to enhance development capabilities.

Skills and Qualifications:
8-10 years of experience in AI/ML development with a strong focus on model building and analysis.
Strong proficiency in Python for developing machine learning algorithms and solutions.
Experience with PySpark, Hadoop, and Hive for working with large datasets in a distributed environment.
Hands-on experience with TensorFlow for deep learning model development and training.
Solid understanding of machine learning algorithms, techniques, and frameworks.
Experience with notebooks (e.g., Jupyter) for model development, analysis, and experimentation.
Strong skills in model evaluation, performance tuning, and optimization.
Proficiency with Git for version control and JIRA for project management.
Excellent problem-solving skills and attention to detail.
Strong communication skills and ability to collaborate with cross-functional teams.

Preferred Qualifications:
Experience with cloud platforms (AWS, Azure, GCP) for machine learning deployment.
Knowledge of additional AI/ML frameworks such as Keras, scikit-learn, or PyTorch.
Familiarity with CI/CD pipelines and DevOps practices for machine learning.
Experience with data visualization tools and libraries (e.g., Matplotlib, Seaborn).
Background in statistical analysis and data mining.
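The model-evaluation skill this role emphasizes reduces to a handful of counts from the confusion matrix. A pure-Python sketch computing accuracy, precision, and recall by hand (the label vectors are invented; in practice `sklearn.metrics` provides these):

```python
def confusion_counts(y_true, y_pred, positive=1):
    """Count TP/FP/FN/TN for a binary classification problem."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = len(y_true) - tp - fp - fn
    return tp, fp, fn, tn

def evaluate(y_true, y_pred):
    """Accuracy, precision, and recall from the confusion counts."""
    tp, fp, fn, tn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall}

# Invented labels for illustration.
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0]
print(evaluate(y_true, y_pred))
```

Tracking these numbers over time is also the basis of the continuous model-performance monitoring the responsibilities list.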

Posted 1 month ago

Apply

6.0 years

0 Lacs

India

Remote

About Firstsource Firstsource Solutions Limited, an RP-Sanjiv Goenka Group company (NSE: FSL, BSE: 532809, Reuters: FISO.BO, Bloomberg: FSOL:IN), is a specialized global business process services partner, providing transformational solutions and services spanning the customer lifecycle across Healthcare, Banking and Financial Services, Communications, Media and Technology, Retail, and other diverse industries. With an established presence in the US, the UK, India, Mexico, Australia, South Africa, and the Philippines, we make it happen for our clients, solving their biggest challenges with hyper-focused, domain-centered teams and cutting-edge tech, data, and analytics. Our real-world practitioners work collaboratively to deliver future-focused outcomes. Job Title: Lead Data Scientist Mode of work : Remote Responsibilities Design and implement data-driven solutions to optimize customer experience metrics, reduce churn, and enhance customer satisfaction using statistical analysis, machine learning, and predictive modeling. Collaborate with CX teams, contact center operations, customer success, and product teams to gather requirements, understand customer journey objectives, and translate them into actionable analytical solutions. Perform exploratory data analysis (EDA) on customer interaction data, contact center metrics, survey responses, and behavioral data to identify pain points and opportunities for CX improvement. Build, validate, and deploy machine learning models for customer sentiment analysis, churn prediction, next-best-action recommendations, contact center forecasting, and customer lifetime value optimization. Develop CX dashboards and reports using BI tools to track key metrics like NPS, CSAT, FCR, AHT, and customer journey analytics to support strategic decision-making. Optimize model performance for real-time customer experience applications through hyperparameter tuning, A/B testing, and continuous performance monitoring. 
Contribute to customer data architecture and pipeline development to ensure scalable and reliable customer data flows across touchpoints (voice, chat, email, social, web). Document CX analytics methodologies, customer segmentation strategies, and model outcomes to ensure reproducibility and enable knowledge sharing across CX transformation initiatives. Mentor junior data scientists and analysts on CX-specific use cases, and participate in code reviews to maintain high-quality standards for customer-facing analytics. Skill Requirements Proven experience (at least 6+ years) in data science, analytics, and statistical modeling with specific focus on customer experience, contact center analytics, or customer behavior analysis, including strong understanding of CX metrics, customer journey mapping, and voice-of-customer analytics. Proficiency in Python and/or R for customer data analysis, sentiment analysis, and CX modeling applications. Experience with data analytics libraries such as pandas, NumPy, scikit-learn, and visualization tools like matplotlib, seaborn, or Plotly for customer insights and CX reporting. Experience with machine learning frameworks such as Scikit-learn, XGBoost, LightGBM, and familiarity with deep learning libraries (TensorFlow, PyTorch) for NLP applications in customer feedback analysis and chatbot optimization. Solid understanding of SQL and experience working with customer databases, contact center data warehouses, and CRM systems (e.g., PostgreSQL, MySQL, SQL Server, Salesforce, ServiceNow). Familiarity with data engineering tools and frameworks (e.g., Apache Airflow, dbt, Spark, or similar) for building and orchestrating customer data ETL pipelines and real-time streaming analytics. (Good to have) Knowledge of data governance, data quality frameworks, and data lake architectures. 
(Good to have) Exposure to business intelligence (BI) tools such as Power BI, Tableau, or Looker for CX dashboarding, customer journey visualization, and executive reporting on customer experience metrics.
Working knowledge of version control systems (e.g., Git) and collaborative development workflows for customer analytics projects.
Strong problem-solving skills with customer-centric analytical thinking, and the ability to work independently and as part of cross-functional CX transformation teams.
Excellent communication and presentation skills, with the ability to explain complex customer analytics concepts to non-technical stakeholders including CX executives, contact center managers, and customer success teams.

Disclaimer: Firstsource follows a fair, transparent, and merit-based hiring process. We never ask for money at any stage. Beware of fraudulent offers and always verify through our official channels or @firstsource.com email addresses.
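The NPS tracking mentioned in this posting can be illustrated with a small sketch. The survey scores and the helper name are invented for the example, not taken from any real system:

```python
# Minimal sketch of computing Net Promoter Score (NPS), one of the CX
# metrics the role tracks. Scores and the helper name are illustrative.

def nps(scores):
    """NPS on a 0-10 survey scale: % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

responses = [10, 9, 8, 7, 6, 10, 9, 3, 8, 10]
print(nps(responses))  # 5 promoters, 2 detractors out of 10 -> 30.0
```

In practice the scores would come from a survey table rather than a literal list, but the promoter/detractor arithmetic is the same.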

Posted 1 month ago

Apply

1.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Red & White Education Pvt Ltd, founded in 2008, is Gujarat's leading educational institute. Accredited by NSDC and ISO, we focus on Integrity, Student-Centricity, Innovation, and Unity. Our goal is to equip students with industry-relevant skills and ensure they are employable globally. Join us for a successful career path.

Salary: 30K to 35K CTC

Job Description: Faculties guide students, deliver course materials, conduct lectures, assess performance, and provide mentorship. Strong communication skills and a commitment to supporting students are essential.

Key Responsibilities:
Deliver high-quality lectures on AI, Machine Learning, and Data Science.
Design and update course materials, assignments, and projects.
Guide students on hands-on projects, real-world applications, and research work.
Provide mentorship and support for student learning and career development.
Stay updated with the latest trends and advancements in AI/ML and Data Science.
Conduct assessments, evaluate student progress, and provide feedback.
Participate in curriculum development and improvements.

Skills & Tools:
Core Skills: ML, Deep Learning, NLP, Computer Vision, Business Intelligence, AI Model Development, Business Analysis.
Programming: Python, SQL (Must), Pandas, NumPy, Excel.
ML & AI Tools: Scikit-learn (Must), XGBoost, LightGBM, TensorFlow, PyTorch (Must), Keras, Hugging Face.
Data Visualization: Tableau, Power BI (Must), Matplotlib, Seaborn, Plotly.
NLP & CV: Transformers, BERT, GPT, OpenCV, YOLO, Detectron2.
Advanced AI: Transfer Learning, Generative AI, Business Case Studies.

Education & Experience Requirements:
Bachelor's/Master's/Ph.D. in Computer Science, AI, Data Science, or a related field.
At least 1 year of teaching or industry experience in AI/ML and Data Science.
Hands-on experience with Python, SQL, TensorFlow, PyTorch, and other AI/ML tools.
Practical exposure to real-world AI applications, model deployment, and business analytics.

Posted 1 month ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Greetings! One of our esteemed clients is a Japanese multinational information technology (IT) service and consulting company headquartered in Tokyo, Japan. The company acquired Italy-based Value Team S.p.A. and launched Global One Teams. Join this dynamic, high-impact firm where innovation meets opportunity, and take your career to new heights!

🔍 We Are Hiring: Python with Gen AI

Only Chennai-local candidates; a face-to-face interview is mandatory.

Skillset:
Python – 4+ years' experience
Gen AI – 2 years' experience
Experience in RAG, vector stores, and Azure OpenAI
Open positions: 4
Location: Chennai (Hybrid)
Interview: F2F at Chennai DLF

Technical skills:
• 5 to 6 years' experience developing with Python frameworks such as DL, ML, FastAPI, and Flask.
• At least 3 years of experience developing generative AI models using Python and relevant frameworks.
• Strong knowledge of machine learning, deep learning, and generative AI concepts and algorithms.
• Proficient in Python and common libraries such as NumPy, pandas, matplotlib, and scikit-learn.
• Familiar with version control, testing, debugging, and deployment tools.
• Excellent communication and problem-solving skills.
• Curious and eager to learn new technologies and domains.

Interested candidates, please share your updated resume along with the following details:
Total Experience:
Relevant Experience in Python with Gen AI:
Current Location:
Current CTC:
Expected CTC:
Notice Period:

🔒 We assure you that your profile will be handled with strict confidentiality.
📩 Apply now and be part of this incredible journey.

Thanks,
Syed Mohammad!!
syed.m@anlage.co.in

Posted 1 month ago

Apply

7.0 years

0 Lacs

Delhi, India

On-site

Role Expectations:

Data Collection and Cleaning:
Collect, organize, and clean large datasets from various sources (internal databases, external APIs, spreadsheets, etc.).
Ensure data accuracy, completeness, and consistency by cleaning and transforming raw data into usable formats.

Data Analysis:
Perform exploratory data analysis (EDA) to identify trends, patterns, and anomalies.
Conduct statistical analysis to support decision-making and uncover insights.
Use analytical methods to identify opportunities for process improvements, cost reductions, and efficiency enhancements.

Reporting and Visualization:
Create and maintain clear, actionable, and accurate reports and dashboards for both technical and non-technical stakeholders.
Design data visualizations (charts, graphs, and tables) that communicate findings effectively to decision-makers.
Hands-on experience with Power BI, Tableau, and Python visualization libraries such as matplotlib (including pyplot), seaborn, plotly, and pandas.
Experience in generating descriptive, predictive, and prescriptive insights with Gen AI using MS Copilot in Power BI.
Experience in prompt engineering and RAG architectures.
Prepare reports for upper management and other departments, presenting key findings and recommendations.

Collaboration:
Work closely with cross-functional teams (marketing, finance, operations, etc.) to understand their data needs and provide actionable insights.
Collaborate with IT and database administrators to ensure data is accessible and well-structured.
Provide support and guidance to other teams regarding data-related questions or issues.

Data Integrity and Security:
Ensure compliance with data privacy and security policies and practices.
Maintain data integrity and assist with implementing best practices for data storage and access.

Continuous Improvement:
Stay current with emerging data analysis techniques, tools, and industry trends.
Recommend improvements to data collection, processing, and analysis procedures to enhance operational efficiency.

Qualifications:

Education: Bachelor's degree in Data Science, Statistics, Computer Science, Mathematics, or a related field. A Master's degree or relevant certifications (e.g., in data analysis or business intelligence) is a plus.

Experience:
Proven experience as a Data Analyst or in a similar analytical role (typically 7+ years).
Experience with data visualization tools (e.g., Tableau, Power BI, Looker).
Strong knowledge of SQL and experience with relational databases.
Familiarity with data manipulation and analysis tools (e.g., Python, R, Excel, SPSS).
Hands-on experience with Power BI, Tableau, and Python visualization libraries such as matplotlib (including pyplot), seaborn, plotly, and pandas.
Experience with big data technologies (e.g., Hadoop, Spark) is a plus.

Technical Skills:
Proficiency in SQL and data query languages.
Knowledge of statistical analysis and methodologies.
Experience with data visualization and reporting tools.
Knowledge of data cleaning and transformation techniques.
Familiarity with machine learning and AI concepts is an advantage (for more advanced roles).

Soft Skills:
Strong analytical and problem-solving abilities.
Excellent attention to detail and ability to identify trends in complex data sets.
Good communication skills to present data insights clearly to both technical and non-technical audiences.
Ability to work independently and as part of a team.
Strong time management and organizational skills, with the ability to prioritize tasks effectively.
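The anomaly-detection part of the EDA work described above can be sketched with a simple z-score filter. The series name, values, and threshold below are invented for illustration:

```python
import statistics

# Sketch: flag anomalies in a numeric series using a z-score threshold,
# the kind of EDA check described above. Data values are invented.

def zscore_anomalies(values, threshold=2.0):
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)  # sample standard deviation
    return [v for v in values if abs(v - mean) / stdev > threshold]

daily_orders = [102, 98, 105, 99, 101, 97, 250, 103]  # 250 is the spike
print(zscore_anomalies(daily_orders))  # [250]
```

Real EDA would typically use pandas and visual checks alongside a rule like this, but the underlying test (distance from the mean in units of standard deviation) is the same.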

Posted 1 month ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Summary
We are seeking a highly skilled and motivated Data Analyst with strong Python programming skills to join our growing team. The ideal candidate will be passionate about uncovering insights from data and using those insights to drive business decisions. You will be responsible for collecting, analyzing, and interpreting complex datasets, developing data-driven solutions, and communicating findings to stakeholders.

Responsibilities:
Collect data from various sources, including databases, APIs, and other data repositories.
Perform data cleaning, transformation, and manipulation using Python libraries such as Pandas and NumPy.
Conduct exploratory data analysis (EDA) to identify trends, patterns, and anomalies in the data.
Develop and implement statistical models and machine learning algorithms using Python libraries like Scikit-learn to solve business problems.
Create data visualizations using Python libraries such as Matplotlib and Seaborn to communicate insights effectively.
Build and maintain data pipelines to automate data extraction, transformation, and loading (ETL) processes.
Collaborate with cross-functional teams, including product, engineering, and marketing, to understand their data needs and provide actionable insights.
Develop and maintain documentation of data analysis processes and results.
Stay up-to-date with the latest trends and technologies in data analysis and Python programming.
Present data findings and recommendations to stakeholders in a clear and concise manner.

Skills And Qualifications
Bachelor's degree in a quantitative field such as Statistics, Mathematics, Economics, Computer Science, or a related field.
Proven experience (5+ years) as a Data Analyst.
Strong proficiency in Python programming, including experience with data analysis libraries such as Pandas, NumPy, and Scikit-learn.
Experience with data visualization libraries such as Matplotlib and Seaborn.
Solid understanding of SQL and relational databases.
Experience with data warehousing and ETL processes is a plus.
Strong analytical and problem-solving skills.
Excellent communication and presentation skills.
Ability to work independently and as part of a team.
Strong business acumen and the ability to translate data insights into business value.

Preferred Qualifications:
Master's degree in a relevant field.
Experience with cloud-based data platforms such as AWS, Azure, or GCP.
Experience with big data technologies such as Spark or Hadoop.
Knowledge of statistical modeling techniques.
Experience with machine learning algorithms.

(ref:hirist.tech)
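The cleaning-and-transformation step this role describes can be sketched without any particular library. The record layout, field names, and values here are hypothetical:

```python
# Sketch of a cleaning/transformation pass like the ETL step described
# above: drop incomplete records, normalize types and casing, and
# return analysis-ready rows. Field names and rows are hypothetical.

raw_rows = [
    {"user_id": "1", "spend": "120.50", "region": "south"},
    {"user_id": "2", "spend": "", "region": "north"},   # missing spend
    {"user_id": "3", "spend": "80.00", "region": "SOUTH"},
]

def clean(rows):
    out = []
    for row in rows:
        if not row["spend"]:          # drop incomplete records
            continue
        out.append({
            "user_id": int(row["user_id"]),
            "spend": float(row["spend"]),
            "region": row["region"].lower(),  # normalize casing
        })
    return out

print(clean(raw_rows))
```

With pandas the same pass would typically be `dropna` plus `astype` and a string method, but the logic is identical.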

Posted 1 month ago

Apply

6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Data (AI/ML)
Qualification: Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, or a related field
Experience: 6 to 10 years

Skill set:
Artificial Intelligence / Machine Learning domain knowledge.
Strong working knowledge of the Google Cloud Platform (GCP) and its AI/ML services.
Proven experience in chatbot creation and development using relevant frameworks.
Proven experience in developing and implementing machine learning models.
Strong programming skills in Python, with expertise in Pandas, NumPy, Scikit-learn, TensorFlow, PyTorch, Keras, Matplotlib, and Seaborn.
Proficiency in SQL querying and database management.
Experience with front-end frameworks such as React or Angular and CSS.
Experience with back-end frameworks such as Django, Flask, or FastAPI.
Experience in prompt engineering for large language models (LLMs), including prompt design, optimization, and evaluation.
Strong problem-solving skills and the ability to translate business requirements into technical solutions.
Excellent communication and presentation skills, with the ability to explain complex concepts to non-technical audiences.

Job Description
Deploy and manage AI/ML applications on the Google Cloud Platform (GCP).
Design, develop, and implement conversational AI solutions using various chatbot frameworks and platforms.
Design, develop, and optimize prompts for large language models (LLMs) to achieve desired outputs.
Develop and implement machine learning models using supervised, unsupervised, and reinforcement learning algorithms.
Utilize Python with Pandas, NumPy, Scikit-learn, TensorFlow, PyTorch, and Keras to build and deploy machine learning solutions.
Create visualizations using Matplotlib and Seaborn to communicate insights.
Write and optimize SQL queries to extract and manipulate data from various databases.
Develop and maintain web applications and APIs using Python frameworks such as Django, Flask, or FastAPI.
Build user interfaces using JavaScript frameworks such as React or Angular, along with CSS.
Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
Communicate complex technical concepts and findings to non-technical stakeholders through presentations and reports.

(ref:hirist.tech)
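Much of the prompt-design work this posting mentions amounts to templating and validating inputs before they reach an LLM. A minimal, library-free sketch, where the template wording and placeholder names are assumptions for illustration:

```python
# Minimal sketch of prompt templating for an LLM, the kind of prompt
# design/optimization work described above. The template wording and
# placeholders are illustrative assumptions, not any vendor's API.

PROMPT_TEMPLATE = (
    "You are a support assistant for {product}.\n"
    "Answer the question below in at most {max_sentences} sentences.\n"
    "Question: {question}\n"
)

def build_prompt(product, question, max_sentences=3):
    if not question.strip():
        raise ValueError("question must not be empty")
    return PROMPT_TEMPLATE.format(
        product=product, question=question.strip(), max_sentences=max_sentences
    )

prompt = build_prompt("AcmeCloud", "How do I reset my API key?")
print(prompt)
```

Keeping the template as data, separate from the code that fills it, is what makes systematic prompt evaluation (varying wording, measuring output quality) practical.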

Posted 1 month ago

Apply

2.0 - 4.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

Skills: Python, PyTorch, AWS, Data Visualization, Machine Learning, ETL
Experience: 2-4 Years
Location: Bangalore (In-office)
Employment Type: Full-Time

About The Role
We are hiring a Junior Data Scientist to join our growing data team in Bangalore. You'll work alongside experienced data professionals to build models, generate insights, and support analytical solutions that solve real business problems.

Responsibilities
Assist in data cleaning, transformation, and exploratory data analysis (EDA).
Develop and test predictive models under guidance from senior team members.
Build dashboards and reports to communicate insights to stakeholders.
Work with cross-functional teams to implement data-driven initiatives.
Stay updated with modern data tools, algorithms, and techniques.

Requirements
2-4 years of experience in a data science or analytics role.
Proficiency in Python or R, SQL, and key data libraries (Pandas, NumPy, Scikit-learn).
Experience with data visualization tools (Matplotlib, Seaborn, Tableau, Power BI).
Basic understanding of machine learning algorithms and model evaluation.
Strong problem-solving ability and eagerness to learn.
Good communication and teamwork skills.

Posted 1 month ago

Apply

7.0 - 12.0 years

20 - 35 Lacs

Chennai

Work from Office

Data Engineer
Experience Range: 05 - 12 years
Location of Requirement: Chennai

Desired Candidate Profile:
Languages: Python, R, SQL, T-SQL
Visualisation: Tableau, Power BI, Matplotlib, Looker
Big Data: Hadoop, Spark
Skills: Database performance, query tuning, schema design, dataset aggregation, query optimization

Posted 1 month ago

Apply

5.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Data Engineer
Experience Range: 05 - 12 years
Location of Requirement: Chennai

Desired Candidate Profile:
Languages: Python, R, SQL, T-SQL
Visualisation: Tableau, Power BI, Matplotlib, Looker
Big Data: Hadoop, Spark
Skills: Database performance, query tuning, schema design, dataset aggregation, query optimization

Posted 1 month ago

Apply

1.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Title: Data Analyst
Location: Marol, Mumbai (Work from Office)
Company: India On Track
Industry: Sports
Job Type: Full-time
Experience Level: 1-2 Years
Joining Date: Within a month

About India on Track:
India on Track (IOT) has been set up to inculcate a culture of sport amongst the youth and create a platform to learn and participate in various disciplines. Working in a sector that is driven by passion, we are unwavering in our commitment to help India recognize and celebrate the power of sports, be it recreational or professional. IOT wants to equip India's next generation with the tools to help build a healthier, stronger nation that would have its roots as much in sports as other disciplines. We make this simple idea come to life through our grassroots initiatives, executed in a secure environment using world-class training and conditioning techniques. To ensure this, IOT partners with top international sporting entities (like NBA Basketball Schools and LaLiga Academy School, amongst others) to bring best-in-class sports thinking and philosophy to India. Each partnership focuses on the amalgamation of the technical expertise of these leaders in world sport and IOT's management experience and vision for India. In India, IOT runs 85+ centres across 14 cities, has over 80 coaches, and trains over 20K kids. IOT also runs Residential International Development Programs in Portugal and Spain, and it is fast expanding into other regions.

About The Role:
The Data Analyst is responsible for analyzing and interpreting complex sports data to provide actionable insights. This includes working with athlete performance data, coach analytics, and business metrics to support decision-making across various sports-related operations. The role involves using tools such as SQL, Excel, Tableau, and Power BI to query databases, create dashboards, and generate reports that provide insights into player performance, team dynamics, and business outcomes.
Python serves as an additional tool for automating data processes and performing advanced statistical analyses to uncover deeper insights from the data.

Responsibilities

1. Data Collection & Querying (SQL)
· Write efficient SQL queries to extract data from relational databases.
· Create and optimize complex queries, including joins, subqueries, CTEs, and aggregations.
· Ensure accurate data extraction and troubleshoot any issues in data queries.
· Work with large datasets and manage database connections to maintain data integrity.

2. Data Cleaning & Transformation (Excel)
· Use Excel for data preparation, including cleaning, transforming, and organizing data.
· Apply advanced Excel functions (e.g., VLOOKUP, INDEX-MATCH, IF statements) to manipulate and analyze data.
· Build and maintain PivotTables and PivotCharts to summarize and visualize data trends.
· Use Power Query and Power Pivot for more advanced data manipulation tasks in Excel.

3. Reporting & Visualization (Tableau & Power BI)
· Develop interactive dashboards, reports, and visualizations using Tableau and Power BI.
· Design reports that highlight key business metrics, trends, and insights.
· Use advanced visualization features like calculated fields (Tableau), DAX measures (Power BI), and dynamic filters.
· Automate report refresh cycles to ensure real-time data access in Tableau and Power BI.

4. Automation & Data Integration (Python - Optional but Valuable)
· Utilize Python (primarily pandas and matplotlib) for data automation and visualization.
· Write scripts to automate routine data processing tasks and data extraction from APIs.
· Enhance data transformation processes that are not feasible directly in Excel or SQL.
· Implement Python to clean and prepare data prior to creating reports and visualizations.

5. Collaboration & Communication
· Partner with business teams to understand reporting needs and translate them into actionable data insights.
· Present findings through clear visualizations and data-driven recommendations.
· Provide ad-hoc data analysis and reporting to support business decisions.

Compensation
Subject to experience and performance, the expected compensation will fall in the range of 5-8 Lacs Per Annum.
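The join + CTE + aggregation pattern the querying section describes can be sketched with Python's standard-library sqlite3 module. The tables, names, and figures below are invented for illustration:

```python
import sqlite3

# Sketch of the join + CTE + aggregation pattern described above,
# using an in-memory SQLite database. Tables and numbers are invented.

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE players (id INTEGER PRIMARY KEY, name TEXT, centre TEXT);
    CREATE TABLE sessions (player_id INTEGER, minutes INTEGER);
    INSERT INTO players VALUES (1, 'Asha', 'Mumbai'), (2, 'Ravi', 'Pune');
    INSERT INTO sessions VALUES (1, 60), (1, 45), (2, 30);
""")

query = """
WITH totals AS (
    SELECT player_id, SUM(minutes) AS total_minutes
    FROM sessions
    GROUP BY player_id
)
SELECT p.name, t.total_minutes
FROM players p
JOIN totals t ON t.player_id = p.id
ORDER BY t.total_minutes DESC;
"""
rows = conn.execute(query).fetchall()
print(rows)  # [('Asha', 105), ('Ravi', 30)]
conn.close()
```

Against a production database the connection and dialect would differ, but the CTE-then-join shape of the query carries over directly.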

Posted 1 month ago

Apply