Home
Jobs

2323 Numpy Jobs - Page 39

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

30.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Schrödinger is a science and technology leader with over 30 years of experience developing software solutions for physics-based and machine learning-based chemical simulations and predictive analyses. We're seeking an application-focused Materials Informatics & Optoelectronics Scientist to join us in our mission to improve human health and quality of life through the development, distribution, and application of advanced computational methods. As a member of our Materials Science team, you'll have the opportunity to work on diverse projects in optoelectronics, catalysis, energy storage, semiconductors, aerospace, and specialty chemicals.

Who Will Love This Job
- A statistical and machine learning expert with robust problem-solving skills
- A materials science enthusiast who's familiar with RDKit, MatMiner, DScribe, or other informatics packages
- A proficient Python programmer and debugger who's familiar with machine learning packages like Scikit-learn, Pandas, NumPy, SciPy, and PyTorch
- An experienced researcher with hands-on experience in extracting datasets using large language models (LLMs) or Optical Character Recognition (OCR) technologies
- A specialist in quantum chemistry or materials science who enjoys collaborating with an interdisciplinary team in a fast-paced environment

What You'll Do
- Research, curate, and analyze datasets from literature and other sources using advanced techniques such as LLMs and OCR
- Work with domain experts to ensure the accuracy and quality of data (such as molecular structures, SMILES strings, and experimental measurements)
- Develop and validate predictive machine learning models for OLED devices and other optoelectronic applications
- Communicate results and present ideas to the team
- Develop tools and workflows that can be integrated into commercial software products
- Validate existing Schrödinger machine learning products using public datasets or internally generated datasets

What You Should Have
- A PhD in Chemistry or Materials Science
- Hands-on experience with the application of machine learning, neural networks, deep learning, data analysis, or chemical informatics to materials and complex chemicals
- Experience with LLM and OCR technologies and the extraction of datasets for ML model development

As an equal opportunity employer, Schrödinger hires outstanding individuals into every position in the company. People who work with us have a high degree of engagement, a commitment to working effectively in teams, and a passion for the company's mission. We place the highest value on creating a safe environment where our employees can grow and contribute, and refuse to discriminate on the basis of race, color, religious belief, sex, age, disability, national origin, alienage or citizenship status, marital status, partnership status, caregiver status, sexual and reproductive health decisions, gender identity or expression, or sexual orientation. To us, "diversity" isn't just a buzzword, but an important element of our core principles and key business practices. We believe that diverse companies innovate better and think more creatively than homogeneous ones because they take into account a wide range of viewpoints. For us, greater diversity doesn't mean better headlines or public images - it means increased adaptability and profitability.

Posted 1 week ago

Apply

2.0 - 7.0 years

6 - 9 Lacs

Noida

Work from Office


We are looking for an experienced and passionate Data Science Trainer who can deliver high-quality training to students, working professionals, and corporate clients. The ideal candidate should be proficient in the latest tools and technologies in the data science field and have a passion for teaching and mentoring.

Key Responsibilities:
- Deliver Engaging Training Sessions: Conduct in-depth, interactive training sessions (online/offline) on core data science topics, including Python programming, statistics, machine learning, and data visualization.
- Curriculum Development: Design, structure, and regularly update course content, projects, and assessments based on current industry standards and student needs.
- Hands-On Project Guidance: Mentor students on capstone and real-time projects using real-world datasets to strengthen their practical skills and portfolios.
- Technical Evaluation: Develop and assess assignments, quizzes, and case studies to measure students' progress and provide constructive feedback.
- Interview Preparation: Conduct mock interviews, technical tests, and soft-skills sessions to prepare students for job placements in data-related roles.
- Technology Upgradation: Stay updated with evolving tools and technologies such as TensorFlow, Power BI, Tableau, and Spark, and integrate them into training modules as needed.
- Mentorship & Support: Provide personalized mentorship and career guidance to help learners overcome challenges and reach their goals.
- Corporate & Workshop Training (Optional): Conduct specialized workshops, webinars, and corporate training sessions based on demand.

Key Skills Required:
- Strong knowledge of Python, NumPy, Pandas, Matplotlib, Seaborn
- Proficiency in machine learning (supervised and unsupervised algorithms)
- Hands-on experience with Scikit-learn, TensorFlow, or Keras
- Familiarity with SQL, statistics, data wrangling, and data visualization
- Exposure to deep learning, NLP, and big data tools is a plus
- Experience with platforms like Jupyter Notebook and Google Colab
- Strong communication and presentation skills

Why Join Uncodemy?
- Work with a passionate and skilled team of trainers.
- Opportunity to shape the careers of aspiring professionals.
- Flexible working hours and a positive work environment.
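To illustrate the core skill list above (Python, NumPy, Pandas, scikit-learn), here is a minimal sketch of the kind of hands-on exercise such a training session might walk through. The dataset, column names, and threshold are all invented for illustration, not from any real course:

```python
# Hypothetical training exercise: a tiny supervised-classification workflow.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "hours_studied": rng.uniform(0, 10, 200),
    "prior_score": rng.uniform(40, 100, 200),
})
# Synthetic label: "passed" when a noisy feature combination crosses a threshold
df["passed"] = ((0.5 * df["hours_studied"] + 0.05 * df["prior_score"]
                 + rng.normal(0, 0.5, 200)) > 5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df[["hours_studied", "prior_score"]], df["passed"], random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

A session like this would typically continue with a Matplotlib scatter plot of the decision boundary and a discussion of train/test splitting.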

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Gurugram, Haryana, India

On-site


What You'll Do
- Support business decision making by deriving actionable insights from structured and unstructured data
- Conduct end-to-end machine learning and data analytics activities, including data wrangling, feature engineering, and building and evaluating ML models
- Research and implement novel machine learning approaches to enable informed business decisions
- Collaborate with business and engineering teams to analyze, extract, normalize, and label relevant data
- Perform data engineering, feature engineering, and data preprocessing to train and tune ML models
- Utilize BI tools to prepare reports and to analyze and visualize data
- Implement MLOps practices such as CI/CD and continuous training (CT) for deploying and maintaining ML models in production environments

Your Superpowers:
- Experience
  - 3 to 5 years of experience in data analytics and machine learning
  - Hands-on experience with ML frameworks and libraries such as TensorFlow, PyTorch, Scikit-learn, Pandas, and NumPy
  - Proficiency in writing production-quality code in Python, with a strong understanding of object-oriented programming principles
  - Proven experience in analyzing and working with large datasets
  - Hands-on experience with various BI tools
  - Expertise in applying both supervised and unsupervised learning techniques
- Expertise & Learning Agility
  - You are open to new ways of thinking and committed to acquiring new skills to retain a competitive advantage
  - You employ a thoughtful process of analyzing data and problem solving to reach well-reasoned solutions
- Strong collaboration, communication, and creative thinking skills

Join Electrifi
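The feature-engineering plus supervised/unsupervised pairing this role describes can be sketched in a few lines. Everything here is synthetic and the column names are assumptions, purely to show the shape of the workflow:

```python
# Illustrative sketch: engineer a feature with Pandas, then fit one
# unsupervised and one supervised model on synthetic data.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
df = pd.DataFrame({"revenue": rng.gamma(2.0, 100.0, 300),
                   "visits": rng.poisson(20, 300)})
# Feature engineering: a derived ratio feature
df["revenue_per_visit"] = df["revenue"] / df["visits"].clip(lower=1)

X = StandardScaler().fit_transform(df)
# Unsupervised: segment records into clusters
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
# Supervised: learn to predict the segment from the raw features (toy target)
clf = RandomForestClassifier(random_state=0).fit(df, clusters)
print("cluster sizes:", np.bincount(clusters))
```

In practice the supervised target would be a real business label rather than the cluster assignment; the pattern of scale-cluster-then-model is the point here.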

Posted 1 week ago

Apply

3.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Let's be #BrilliantTogether

ISS STOXX is actively hiring a Data Analyst in C#/.NET and SQL for our Mumbai (Goregaon East) location.

Overview
Do you have a passion for using technology to turn raw data into polished insights? Are you a wiz at transforming information into compelling visualizations? Do you have a knack for automating the production of reports? If so, this role is for you. The successful candidate will write complex queries using Microsoft SQL Server and Python to distill terabytes of proprietary data, automating meaningful analytical insights into finished PowerPoint client deliverables through new and existing .NET code. Additionally, you will help us generate custom reports using client-supplied proprietary data, assist the advisory team with content creation, and fulfill ad-hoc requests from the sales team and the media. Over time, you will help us identify and implement new ways to analyze and present information for new deliverables that leverage our extensive repository of compensation, governance, and sustainability data.

Shift hours: 12 PM to 9 PM IST

Responsibilities
- Maintain and support a growing suite of reports.
- Help create new reports, tools, and deliverables to support new and existing lines of business.
- Produce tailored reports by request of a specific client or internal customer.
- Extract data from one of the world's most robust collections of executive compensation, corporate governance, and corporate sustainability datasets.
- Use Python to create data analyses and data visualizations to integrate into our existing workflow.
Qualifications
- Bachelor's degree or associate's degree with at least 3 years of relevant professional experience; at least 1 year of experience in a similar role or function
- Microsoft SQL querying skills and the ability to write MSSQL queries from scratch
- Interest in modifying existing code and writing new code in C#.NET in Visual Studio
- Experience with Python, especially NumPy, Pandas, and Jupyter Notebook
- Familiarity with or interest in the Microsoft PowerPoint primary interop assembly
- Comfort with data manipulation in Excel, including functions such as VLOOKUP, HLOOKUP, INDIRECT, INDEX, MATCH, FVSCHEDULE, SUMIFS, COUNTIFS, and more
- Experience working with GitLab or similar source code management systems
- Ability to build knowledge in and work with different programming languages; for example: writing T-SQL queries in SQL Server Management Studio to gather, organize, and manipulate data, and transforming SQL output into PowerPoint deliverables using C#.NET in Visual Studio
- T-SQL skills; C#.NET, Python, or similar OOP language experience
- Understanding of relational database concepts
- Strong written and oral communication skills
- Aptitude for transforming quantitative data into compelling visualizations
- Ability to break down issues into component parts and to learn technical subject matter quickly
- Intellectually curious and dedicated to mastering complex concepts, while willing to turn to others for assistance when necessary
- Proficiency in distilling massive amounts of data
- Commitment to documenting and commenting all code
- Collaborative, team-oriented mindset
- Effective at managing time and meeting deadlines while working independently
- Persistent when faced with debugging code and resolving issues
- Fluent in English

#ASSOCIATE #ICS

What You Can Expect From Us
At ISS STOXX, our people are our driving force. We are committed to building a culture that values diverse skills, perspectives, and experiences.
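The Excel-to-Python crossover in the qualifications above is worth making concrete: in Pandas, a `merge` plays the role Excel's VLOOKUP plays in a spreadsheet. A minimal sketch, with entirely hypothetical tables and column names:

```python
# Joining a lookup table onto a fact table: the pandas equivalent of
# an exact-match VLOOKUP. Tickers and figures below are made up.
import pandas as pd

pay = pd.DataFrame({"ticker": ["AAA", "BBB", "CCC"],
                    "ceo_pay_musd": [12.4, 8.1, 20.3]})
sectors = pd.DataFrame({"ticker": ["AAA", "BBB", "CCC"],
                        "sector": ["Tech", "Energy", "Health"]})

# Left join on the shared key; unmatched rows would get NaN, much like
# VLOOKUP returning #N/A.
merged = pay.merge(sectors, on="ticker", how="left")
print(merged)
```

The same join expressed in T-SQL would be a `LEFT JOIN ... ON pay.ticker = sectors.ticker`, which is why comfort moving between the two tools matters in this role.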
We hire the best talent in our industry and empower them with the resources, support, and opportunities to grow—professionally and personally. Together, we foster an environment that fuels creativity, drives innovation, and shapes our future success. Let's empower, collaborate, and inspire. Let's be #BrilliantTogether.

About ISS STOXX
ISS STOXX GmbH is a leading provider of research and technology solutions for the financial market. Established in 1985, we offer top-notch benchmark and custom indices globally, helping clients identify investment opportunities and manage portfolio risks. Our services cover corporate governance, sustainability, cyber risk, and fund intelligence. Majority-owned by Deutsche Börse Group, ISS STOXX has over 3,400 professionals in 33 locations worldwide, serving around 6,400 clients, including institutional investors and companies focused on ESG, cyber, and governance risk. Clients trust our expertise to make informed decisions for their stakeholders' benefit.

ISS Corporate Solutions, Inc. ("ISS-Corporate") is a leading provider of cutting-edge SaaS and high-touch advisory services to companies, globally. Companies turn to ISS-Corporate for expertise in designing and managing governance, compensation, sustainability, and cyber risk programs that align with company goals, reduce risk, and manage the needs of a diverse shareholder base by delivering data, tools, and advisory services. ISS-Corporate's global client base extends across North America, Europe, and Asia, as well as other established and emerging markets worldwide.

Visit our website: https://www.issgovernance.com
View additional open roles: https://www.issgovernance.com/join-the-iss-team/

Institutional Shareholder Services ("ISS") is committed to fostering, cultivating, and preserving a culture of diversity and inclusion.
It is our policy to prohibit discrimination or harassment against any applicant or employee on the basis of race, color, ethnicity, creed, religion, sex, age, height, weight, citizenship status, national origin, social origin, sexual orientation, gender identity or gender expression, pregnancy status, marital status, familial status, mental or physical disability, veteran status, military service or status, genetic information, or any other characteristic protected by law (referred to as "protected status"). All activities including, but not limited to, recruiting and hiring, recruitment advertising, promotions, performance appraisals, training, job assignments, compensation, demotions, transfers, terminations (including layoffs), benefits, and other terms, conditions, and privileges of employment, are and will be administered on a non-discriminatory basis, consistent with all applicable federal, state, and local requirements.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site


About The Role
Grade Level (for internal use): 09

S&P Global Mobility

The Role: Data Engineer

The Team: We are the Research and Modeling team, driving innovation by building robust models and tools to support the Vehicle & Powertrain Forecast team. Our work includes all aspects of development of, and ongoing support for, our business line's data flows, analyst modelling solutions and forecasts, new apps, new client-facing products, and many other work areas besides. We value ownership, adaptability, and a passion for learning, while fostering an environment where diverse perspectives and mentorship fuel continuous growth.

The Impact: We are seeking a motivated and talented Data Engineer to be a key player in building the robust data infrastructure and flows that support our advanced forecasting models. Your initial focus will be to create a robust data factory to ensure smooth collection and refresh of actual data, a critical component that feeds our forecast. Additionally, you will assist in developing mathematical models and support the work of ML engineers and data scientists. Your work will significantly impact our ability to deliver timely and insightful forecasts to our clients.

What's In It For You
- Opportunity to build foundational data infrastructure that directly impacts advanced forecasting models and client delivery.
- Gain exposure to and support the development of sophisticated mathematical models, machine learning, and data science applications.
- Contribute significantly to delivering timely and insightful forecasts, influencing client decisions in the automotive sector.
- Work in a collaborative environment that fosters continuous learning, mentorship, and professional growth in data engineering and related analytical fields.
Responsibilities
- Data Pipeline Development: Design, build, and maintain scalable and reliable data pipelines for efficient data ingestion, processing, and storage, primarily focusing on creating a data factory for our core forecasting data.
- Data Quality and Integrity: Implement robust data quality checks and validation processes to ensure the accuracy and consistency of data used in our forecasting models.
- Mathematical Model Support: Collaborate with other data engineers to develop and refine the mathematical logic and models that underpin our forecasting methodologies.
- ML and Data Science Support: Provide data support to our Machine Learning Engineers and Data Scientists.
- Collaboration and Communication: Work closely with analysts, developers, and other stakeholders to understand data requirements and deliver effective solutions.
- Innovation and Improvement: Continuously explore and evaluate new technologies and methodologies to enhance our data infrastructure and forecasting capabilities.

What We're Looking For
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Minimum of 3 years of experience in data engineering, with a proven track record of building and maintaining data pipelines.
- Strong proficiency in SQL and experience with relational and non-relational databases.
- Strong Python programming skills, with experience in data manipulation and processing libraries (e.g., Pandas, NumPy).
- Experience with mathematical modelling and supporting ML and data science teams.
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and cloud-based data services.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.
- Experience in the automotive sector is a plus.

About Company Statement
S&P Global delivers essential intelligence that powers decision making. We provide the world's leading organizations with the right data, connected technologies and expertise they need to move ahead.
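The data-quality responsibility described above often reduces to a small, explicit validation gate between ingestion and modelling. A minimal sketch in Pandas; the column names, thresholds, and sample batch are all illustrative assumptions:

```python
# Sketch of a simple data-quality gate in a pandas-based pipeline.
import pandas as pd

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality issues; an empty list means the batch passes."""
    issues = []
    if df["units"].lt(0).any():
        issues.append("negative unit counts")
    if df["country"].isna().any():
        issues.append("missing country codes")
    if df.duplicated(subset=["country", "month"]).any():
        issues.append("duplicate (country, month) rows")
    return issues

batch = pd.DataFrame({"country": ["DE", "DE", None],
                      "month": ["2024-01", "2024-02", "2024-02"],
                      "units": [1200, 1350, -5]})
print(validate(batch))  # → ['negative unit counts', 'missing country codes']
```

A real data factory would run checks like these per refresh and fail (or quarantine) the batch before it reaches the forecast models.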
As part of our team, you'll help solve complex challenges that equip businesses, governments and individuals with the knowledge to adapt to a changing economic landscape. S&P Global Mobility turns unmatched automotive data into invaluable insights that help our clients understand today's market, reach more customers, and shape the future of automotive mobility.

About S&P Global Mobility
At S&P Global Mobility, we provide invaluable insights derived from unmatched automotive data, enabling our customers to anticipate change and make decisions with conviction. Our expertise helps them to optimize their businesses, reach the right consumers, and shape the future of mobility. We open the door to automotive innovation, revealing the buying patterns of today and helping customers plan for the emerging technologies of tomorrow. For more information, visit www.spglobal.com/mobility.

What's In It For You?

Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People
We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. That belief shows in our work: finding new ways to measure sustainability, analyzing the energy transition across the supply chain, and building workflow solutions that make it easy to tap into insight and apply it.
We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.
For more information on benefits by country, visit: https://spgbenefits.com/benefit-summaries

Global Hiring And Opportunity At S&P Global
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Job ID: 314772
Posted On: 2025-05-13
Location: Gurgaon, Haryana, India

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


Primary skills: Python > Django, Flask, Pandas, NumPy, Pyramid

Roles & Responsibilities
A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to ensure development, validation, and support activities, to assure that our clients are satisfied with high levels of service in the technology domain. You will gather the requirements and specifications to understand client requirements in detail and translate them into system requirements. You would be a key contributor to building efficient programs/systems: writing efficient, reusable, testable, and scalable code; integrating user-oriented elements into different applications and data storage solutions; and keeping abreast of the latest technology and trends. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

- Knowledge of design principles and fundamentals of architecture
- Basic understanding of the project domain
- Writing scalable code using the Python programming language
- Ability to translate functional/nonfunctional requirements into system requirements
- Ability to design and code complex programs
- Ability to write test cases and scenarios based on the specifications
- Good understanding of agile methodologies
- Awareness of the latest technologies and trends
- Logical thinking and problem-solving skills, along with an ability to collaborate

Posted 1 week ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Designation: ML / MLOps Engineer
Location: Noida (Sector 132)

Key Responsibilities:
• Model Development & Algorithm Optimization: Design, implement, and optimize ML models and algorithms using libraries and frameworks such as TensorFlow, PyTorch, and scikit-learn to solve complex business problems.
• Training & Evaluation: Train and evaluate models using historical data, ensuring accuracy, scalability, and efficiency while fine-tuning hyperparameters.
• Data Preprocessing & Cleaning: Clean, preprocess, and transform raw data into a suitable format for model training and evaluation, applying industry best practices to ensure data quality.
• Feature Engineering: Conduct feature engineering to extract meaningful features from data that enhance model performance and improve predictive capabilities.
• Model Deployment & Pipelines: Build end-to-end pipelines and workflows for deploying machine learning models into production environments, leveraging Azure Machine Learning and containerization technologies like Docker and Kubernetes.
• Production Deployment: Develop and deploy machine learning models to production environments, ensuring scalability and reliability using tools such as Azure Kubernetes Service (AKS).
• End-to-End ML Lifecycle Automation: Automate the end-to-end machine learning lifecycle, including data ingestion, model training, deployment, and monitoring, ensuring seamless operations and faster model iteration.
• Performance Optimization: Monitor and improve inference speed and latency to meet real-time processing requirements, ensuring efficient and scalable solutions.
• NLP, CV, GenAI Programming: Work on machine learning projects involving Natural Language Processing (NLP), Computer Vision (CV), and Generative AI (GenAI), applying state-of-the-art techniques and frameworks to improve model performance.
• Collaboration & CI/CD Integration: Collaborate with data scientists and engineers to integrate ML models into production workflows, building and maintaining continuous integration/continuous deployment (CI/CD) pipelines using tools like Azure DevOps, Git, and Jenkins.
• Monitoring & Optimization: Continuously monitor the performance of deployed models, adjusting parameters and optimizing algorithms to improve accuracy and efficiency.
• Security & Compliance: Ensure all machine learning models and processes adhere to industry security standards and compliance protocols, such as GDPR and HIPAA.
• Documentation & Reporting: Document machine learning processes, models, and results to ensure reproducibility and effective communication with stakeholders.

Required Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
• 3+ years of experience in machine learning operations (MLOps), cloud engineering, or similar roles.
• Proficiency in Python, with hands-on experience using libraries such as TensorFlow, PyTorch, scikit-learn, Pandas, and NumPy.
• Strong experience with Azure Machine Learning services, including Azure ML Studio, Azure Databricks, and Azure Kubernetes Service (AKS).
• Knowledge and experience in building end-to-end ML pipelines, deploying models, and automating the machine learning lifecycle.
• Expertise in Docker, Kubernetes, and container orchestration for deploying machine learning models at scale.
• Experience in data engineering practices and familiarity with cloud storage solutions like Azure Blob Storage and Azure Data Lake.
• Strong understanding of NLP, CV, or GenAI programming, along with the ability to apply these techniques to real-world business problems.
• Experience with Git, Azure DevOps, or similar tools to manage version control and CI/CD pipelines.
• Solid experience in machine learning algorithms, model training, evaluation, and hyperparameter tuning.
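One small but central step in the lifecycle this role describes is turning a trained model into a deployable artifact. A minimal sketch with scikit-learn and joblib; the file name is an assumption, and a real pipeline would push the artifact to a registry (e.g., Azure ML) rather than the local disk:

```python
# Train a model and serialize it as an artifact a CD step could ship.
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
pipeline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
pipeline.fit(X, y)

joblib.dump(pipeline, "model.joblib")   # the deployable artifact
restored = joblib.load("model.joblib")  # what the serving container would load
print("restored accuracy:", round(restored.score(X, y), 3))
```

Bundling the scaler and the model in one pipeline object matters here: the serving side then applies exactly the same preprocessing as training, which is one of the classic failure points MLOps tooling exists to prevent.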

Posted 1 week ago

Apply

5.0 - 8.0 years

10 - 14 Lacs

Chennai

Work from Office


What you’ll do: Utilize advanced mathematical, statistical, and analytical expertise to research, collect, analyze, and interpret large datasets from internal and external sources to provide insight and develop data driven solutions across the company Build and test predictive models including but not limited to credit risk, fraud, response, and offer acceptance propensity Responsible for the development, testing, validation, tracking, and performance enhancement of statistical models and other BI reporting tools leading to new innovative origination strategies within marketing, sales, finance, and underwriting Leverage advanced analytics to develop innovative portfolio surveillance solutions to track and forecast loan losses, that influence key business decisions related to pricing optimization, credit policy and overall profitability strategy Use decision science methodologies and advanced data visualization techniques to implement creative automation solutions within the organization Initiate and lead analysis to bring actionable insights to all areas of the business including marketing, sales, collections, and credit decisioning Develop and refine unit economics models to enable marketing and credit decisions What you’ll need: 5 to 8 years of experience in data science or a related role with a focus on Python programming and ML models. Proficient in Python programming and libraries such as NumPy, Pandas, Scikit-learn, TensorFlow, Keras, PyTorch. Strong experience with ETL and Data Warehouse. Good experience with IOT is required. Strong understanding of machine learning algorithms, deep learning techniques, and natural language processing methodologies. Strong Python/Pyspark knowledge enabling data preprocessing and historical feedback loop for context loading for LLM. Familiarity with SQL databases (MySQL/ SQL Server) and vector databases like (Qdrant, Faiss). 
Proven track record of delivering successful AI projects with measurable impact on business outcomes
Strong analytical skills, with the ability to interpret complex data sets
Master’s or PhD in Computer Science, Data Science, or a related field preferred
Life at Next:
At our core, we're driven by the mission of tailoring growth for our customers by enabling them to transform their aspirations into tangible outcomes. We're dedicated to empowering them to shape their futures and achieve ambitious goals. To fulfil this commitment, we foster a culture defined by agility, innovation, and an unwavering commitment to progress. Our organizational framework is both streamlined and vibrant, characterized by a hands-on leadership style that prioritizes results and fosters growth.
Perks of working with us:
Clear objectives to ensure alignment with our mission, fostering your meaningful contribution
Abundant opportunities for engagement with customers, product managers, and leadership
Progressive career paths with insightful guidance from managers through ongoing feedforward sessions
Robust connections within diverse communities of interest
A mentor of your choice to navigate current endeavors and steer your future trajectory
Continuous learning and upskilling opportunities through Nexversity
Flexibility to explore various functions, develop new skills, and adapt to emerging technologies
A hybrid work model promoting work-life balance
Comprehensive family health insurance coverage, prioritizing the well-being of your loved ones
Accelerated career paths to actualize your professional aspirations
Who we are:
We enable high-growth enterprises to build hyper-personalized solutions that transform their vision into reality. With a keen eye for detail, we apply creativity, embrace new technology, and harness the power of data and AI to co-create solutions tailor-made to meet our customers' unique needs. Join our passionate team and tailor your growth with us!
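The credit-risk modelling work described in this posting can be sketched in a few lines. This is a minimal, illustrative probability-of-default model on synthetic data; the features (income, utilization, delinquencies) and every number below are invented for the example, not a real scorecard.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a credit-risk dataset; nothing here is real data.
rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.normal(50_000, 15_000, n),  # annual income (hypothetical feature)
    rng.uniform(0, 1, n),           # credit utilization
    rng.poisson(0.5, n),            # past delinquencies
])
# Make defaults more likely with high utilization and delinquencies.
logits = -2.0 + 2.5 * X[:, 1] + 0.8 * X[:, 2]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Probability of default: the score a credit policy would threshold on.
pd_scores = model.predict_proba(X_test)[:, 1]
```

In production such a model would additionally be validated, tracked, and tested against realized losses, as the posting's model-governance duties describe.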

Posted 1 week ago


2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Description:
We are seeking a dynamic and results-driven AI Engineer with 2 years of hands-on experience in developing and deploying AI/ML models. The ideal candidate will have a B.Tech/BE in Artificial Intelligence/Machine Learning, strong programming skills in Python, and a deep understanding of AI algorithms, data structures, and real-world applications.
Key Responsibilities:
Design, develop, and implement AI/ML models and algorithms for various use cases
Work on the end-to-end model lifecycle, including data preprocessing, training, tuning, validation, and deployment
Collaborate with data engineers, product managers, and software developers to integrate AI solutions into production
Apply machine learning techniques to solve classification, prediction, and optimization problems
Utilize Python and relevant ML libraries (e.g., TensorFlow, PyTorch, scikit-learn, Pandas, NumPy) for model development
Monitor and improve model performance using evaluation metrics and feedback loops
Qualifications:
Bachelor’s degree (B.Tech/BE) in Artificial Intelligence, Machine Learning, Computer Science, or a related field
Minimum 2 years of professional experience in AI/ML model development
Proficiency in Python and its AI/ML ecosystem (TensorFlow, PyTorch, scikit-learn, etc.)
Good knowledge of supervised and unsupervised learning, deep learning, and NLP techniques
Strong analytical thinking and problem-solving abilities
If you're an AI Engineer who thrives on transforming data into actionable intelligence, this is your chance to make an impact. Please apply now.
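The "evaluation metrics and feedback loops" responsibility above can be made concrete with a small, self-contained sketch computing precision, recall, and F1 from first principles (the labels below are made up):

```python
def binary_classification_report(y_true, y_pred):
    """Precision, recall, and F1 computed from the confusion-matrix counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Toy labels: the model misses one of the three positives.
report = binary_classification_report([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
```

In a real feedback loop, these metrics would be recomputed on fresh labelled data after each deployment and used to trigger retraining.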

Posted 1 week ago


8.0 - 13.0 years

10 - 20 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid


Solugenix is a leader in IT services, delivering cutting-edge technology solutions, exceptional talent, and managed services to global enterprises. With extensive expertise in highly regulated and complex industries, we are a trusted partner for integrating advanced technologies with streamlined processes. Our solutions drive growth, foster innovation, and ensure compliance, providing clients with reliability and a strong competitive edge. Recognized as a 2024 Top Workplace, Solugenix is proud of its inclusive culture and unwavering commitment to excellence. Our recent expansion, with new offices in the Dominican Republic, Jakarta, and the Philippines, underscores our growing global presence and ability to offer world-class technology solutions. Partnering with Solugenix means more than just business: it means having a dedicated partner focused on your success in today's fast-evolving digital world.
Position Title: Senior Python Developer
Experience: 8+ Years
Location: Chennai/Hyderabad/Bengaluru/Indore (Hybrid)
Work Timings: 11:30 AM to 8:30 PM IST
Job Description (Job Summary/Roles & Responsibilities):
We are seeking a highly skilled and experienced Python Developer to join our team. In this role, you will be responsible for designing, developing, and maintaining high-quality Python applications. You will work closely with cross-functional teams to deliver innovative solutions that meet business requirements.
Responsibilities:
* Design, develop, and maintain robust and scalable Python applications.
* Write clean, well-documented, and efficient code.
* Participate in all phases of the software development lifecycle, including requirements gathering, design, development, testing, and deployment.
* Troubleshoot and debug complex issues.
* Collaborate with cross-functional teams (e.g., product managers, designers, QA engineers) to deliver high-quality software.
* Stay up-to-date with the latest technologies and industry best practices.
* Contribute to the improvement of our development processes and tools.
* Mentor junior developers and share knowledge within the team.
Requirements:
* Bachelor's degree in Computer Science or a related field.
* 8+ years of professional experience in Python development.
* Strong understanding of object-oriented programming principles.
* Experience with common Python libraries and frameworks (e.g., Django, Flask, NumPy, Pandas).
* Experience with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB).
* Experience with cloud platforms (e.g., AWS, Azure, GCP) is a plus.
* Experience with Agile development methodologies (e.g., Scrum, Kanban).
* Excellent communication and collaboration skills.
* Strong problem-solving and analytical skills.
Education & Certifications: B.Tech/M.Tech/MCA

Posted 1 week ago


2.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

On-site


Sr. Product Engineer - AI/ML:
We are seeking a highly skilled and experienced Sr. Product Engineer - AI/ML with 2+ years of experience to join our dynamic team. As a Sr. Product Engineer, you will be responsible for designing, developing, and implementing AI/ML solutions that drive the success of our products. This is a challenging and rewarding role that requires a strong understanding of AI/ML technologies, as well as excellent problem-solving and communication skills.
Duties and Responsibilities:
Collaborate with cross-functional teams to define product requirements and develop AI/ML solutions
Design and implement machine learning algorithms and models to solve complex business problems
Conduct data analysis and visualization to identify patterns and trends in large datasets
Build and maintain scalable data pipelines for data ingestion, processing, and storage
Research and evaluate new AI/ML technologies and tools to improve product performance and efficiency
Work closely with product managers to prioritize and plan the product roadmap based on market trends and customer needs
Collaborate with software engineers to integrate AI/ML models into production systems
Provide technical guidance and mentorship to junior team members
Stay updated with the latest advancements and best practices in AI/ML and apply them to improve product offerings
Ensure compliance with data privacy and security regulations in all AI/ML solutions
Skills and Qualifications:
Strong understanding of AI/ML concepts and algorithms
Proficiency in programming languages such as Python, Java, or C++
Experience with machine learning frameworks such as TensorFlow, PyTorch, or Keras
Familiarity with data analysis and visualization tools like Pandas, NumPy, and Matplotlib
Knowledge of cloud computing platforms like AWS, Azure, or Google Cloud
Experience with natural language processing (NLP) and computer vision
Ability to design and implement AI/ML models from scratch
Strong problem-solving and critical thinking skills
Excellent communication and collaboration abilities
Experience with agile development methodologies
Ability to work independently and in a team environment
Knowledge of the software development lifecycle (SDLC)
Experience with version control systems like Git or SVN
Understanding of software testing and debugging processes
Ability to adapt to new technologies and learn quickly
Notice Period: Immediate to 15 days
Locations: Kochi, Trivandrum, or Kozhikode.

Posted 1 week ago


0.0 - 1.0 years

0 Lacs

Pune, Maharashtra

On-site


Role: Data Scientist – Pune (Hybrid)
Location: Pune, Maharashtra
Experience: 1+ Years
Job Type: Full-Time
Salary: ₹9–10 LPA
Mode: Hybrid
Position Overview:
We’re looking for a motivated Data Scientist with 1+ years of hands-on experience in Machine Learning, NLP, Generative AI, and RAG. You’ll be part of our AI Center of Excellence, building cutting-edge solutions that power innovation and create real business impact. Ideal for candidates who are passionate about AI-enabled products and solving complex data problems.
Key Responsibilities:
Develop, test, and deploy ML models for business and telecom use cases
Perform data preprocessing, feature engineering, and model evaluation
Optimize ML/DL models for performance and scalability
Work on NLP tasks: entity recognition, text classification, and large language models (e.g., GPT, LLaMA)
Design and integrate RAG-based solutions into real-world applications
Collaborate with software engineers, product managers, and domain experts
Document processes and communicate clearly with stakeholders
Required Skills:
Strong foundation in statistics, probability, and ML/DL concepts
Proficiency in Python, SQL, and frameworks like PyTorch, Scikit-learn, and NumPy
Hands-on experience with Generative AI tools: LangChain, LlamaIndex, etc.
Understanding of MLOps and experience with batch/stream data processing
Strong problem-solving skills and ability to manage multiple projects
Effective communicator and collaborative team player
Job Types: Full-time, Permanent
Pay: ₹900,000.00 - ₹1,000,000.00 per year
Schedule: Day shift, Monday to Friday
Ability to commute/relocate: Pune, Maharashtra: reliably commute or plan to relocate before starting work (Required)
Education: Bachelor's (Required)
Experience: Data science: 2 years (Preferred); Python: 1 year (Required); Machine learning: 1 year (Required)
Work Location: In person
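The retrieval step at the heart of the RAG work this posting describes can be sketched with NumPy alone. This is a toy illustration only: in a real system the vectors would come from an embedding model and the store would be a vector database; the document names and 4-dimensional "embeddings" below are invented.

```python
import numpy as np

# Toy document "embeddings" (hypothetical; a real system embeds text
# with a model and stores the vectors in e.g. a vector database).
docs = {
    "refund policy": np.array([0.9, 0.1, 0.0, 0.2]),
    "shipping times": np.array([0.1, 0.8, 0.3, 0.0]),
    "account security": np.array([0.0, 0.2, 0.9, 0.4]),
}

def top_k(query_vec, k=2):
    """Rank documents by cosine similarity to the query vector."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scored = sorted(((cosine(query_vec, v), name) for name, v in docs.items()),
                    reverse=True)
    return [name for _, name in scored[:k]]

# A query vector "near" the refund-policy document retrieves it first; the
# retrieved text would then go into the LLM prompt as grounding context.
hits = top_k(np.array([0.8, 0.2, 0.1, 0.1]))
```

Retrieval followed by prompt construction is the "R" and "G" of RAG; the sketch shows only the ranking step.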

Posted 1 week ago


5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Job Description: Python (ETL) - Mumbai
Candidate Specifications:
Education: Minimum qualification: Graduate/Post-graduate in any specialization
Minimum 5+ years of relevant experience
Required Qualifications:
Oracle/T-SQL & PL/SQL: very good understanding of relational databases and ability to use SQL efficiently
Extensive hands-on Python and data-related libraries (NumPy and Pandas)
Data mining, analysis, and a basic understanding of the ETL process
DWH reporting tools such as Tableau and Power BI
ELK stack
ML libraries
Statistical/machine learning modelling skills
Role: Python (ETL)
Industry Type: IT/Computers - Software
Functional Area: Consulting
Required Education: Any Graduates
Employment Type: Full Time, Permanent
Key Skills: Python, ELK Stack, NumPy, Pandas, T-SQL, PL/SQL
Other Information:
Job Code: GO/JC/21498/2025
Recruiter Name: Sheena
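The hands-on Pandas/ETL work this role asks for can be sketched as a tiny extract-transform-load pass; the CSV content, column names, and amounts below are invented for the example.

```python
import io

import pandas as pd

# Extract: a small in-memory CSV standing in for a source-system extract.
raw = io.StringIO(
    "trade_id,amount,currency\n"
    "1,100.0,USD\n"
    "2,,USD\n"
    "3,250.5,EUR\n"
)
df = pd.read_csv(raw)

# Transform: drop rows with missing amounts and derive an integer-cents column.
df = df.dropna(subset=["amount"]).assign(
    amount_cents=lambda d: (d["amount"] * 100).astype(int)
)

# Load: here just an aggregate per currency, the kind of summary table
# that would land in a warehouse or feed a Tableau/Power BI report.
summary = df.groupby("currency", as_index=False)["amount"].sum()
```

A production pipeline would read from Oracle/SQL Server and write back to the warehouse, but the extract-transform-load shape is the same.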

Posted 1 week ago


1.0 - 3.0 years

35 Lacs

Mumbai

Work from Office


Job Insights:
1. Develop and maintain AI models on time-series and financial data for predictive modelling, including data collection, analysis, feature engineering, model development, evaluation, backtesting, and monitoring.
2. Identify areas for model improvement through independent research and analysis, and develop recommendations for updates and enhancements.
3. Work with expert colleagues, Quant, and business representatives to examine results and keep models grounded in reality.
4. Document each step of development and inform decision makers by presenting them options and results.
5. Ensure the integrity and security of data.
6. Provide support for production models delivered by the Mumbai team, and potentially for other models, across the Asian/EU/US time zones.
Qualifications:
1. Bachelor's or Master's degree in a numerate subject with an understanding of economics and markets (e.g., Economics with a speciality in Econometrics, Finance, Computer Science, Applied Maths, Engineering, Physics).
2. Knowledge of key concepts in Statistics and Mathematics, such as statistical methods for machine learning, Probability Theory, and Linear Algebra.
3. Knowledge of Monte Carlo simulations, Bayesian modelling, and Causal Inference.
4. Experience with Machine Learning and Deep Learning concepts, including data representations, neural network architectures, and custom loss functions.
5. Proven track record of building AI models on time-series and financial data.
6. Programming skills in Python and knowledge of common numerical and machine-learning packages (such as NumPy, scikit-learn, pandas, PyTorch, PyMC, statsmodels).
7. Ability to write clear and concise Python code.
8. Intellectually curious and willing to learn challenging concepts daily.
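The backtesting step mentioned above can be sketched in a few lines of pandas: a toy moving-average signal on made-up prices, with the signal lagged one day so that today's position only uses yesterday's information (avoiding look-ahead bias).

```python
import pandas as pd

# Synthetic daily closes (invented numbers) standing in for a price series.
prices = pd.Series([100, 101, 103, 102, 105, 107, 106, 108, 110, 109],
                   dtype=float)

# Signal: long (1) when price is above its 3-day moving average, else flat (0).
ma = prices.rolling(3).mean()
signal = (prices > ma).astype(int)

# Backtest: apply yesterday's signal to today's return (no look-ahead).
returns = prices.pct_change()
strategy_returns = (signal.shift(1) * returns).fillna(0.0)
cumulative = (1 + strategy_returns).prod() - 1
```

A real backtest would also account for transaction costs, slippage, and out-of-sample evaluation; this only shows the signal-lagging mechanics.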

Posted 1 week ago


1.0 - 3.0 years

35 Lacs

Mumbai

Work from Office


Job Insights:
1. Develop and maintain AI models from inception to deployment, including data collection, analysis, feature engineering, model development, evaluation, and monitoring.
2. Identify areas for model improvement through independent research and analysis, and develop recommendations for updates and enhancements.
3. Work with expert colleagues and business representatives to examine results and keep models grounded in reality.
4. Document each step of development and inform decision makers by presenting them options and results.
5. Ensure the integrity and security of data.
6. Provide support for production models delivered by the Mumbai team, and potentially for other models, across the Asian/EU/US time zones.
Qualifications:
1. Bachelor's, Master's, or PhD degree in Computer Science, Data Science, Mathematics, Statistics, or a relevant STEM field.
2. Knowledge of key concepts in Statistics and Mathematics, such as statistical methods for machine learning, Probability Theory, and Linear Algebra.
3. Experience with Machine Learning and Deep Learning concepts, including data representations, neural network architectures, and custom loss functions.
4. Proven track record of building AI models from scratch or fine-tuning large models for tabular and/or textual data.
5. Programming skills in Python and knowledge of common numerical and machine-learning packages (such as NumPy, scikit-learn, pandas, PyTorch, transformers, langchain).
6. Ability to write clear and concise Python code.
7. Intellectually curious and willing to learn challenging concepts daily.
8. Knowledge of current Machine Learning/Artificial Intelligence literature.
Notice period: open

Posted 1 week ago


0 years

0 Lacs

Gurugram, Haryana, India

On-site


Company Overview: CashKaro is India’s #1 cashback platform, trusted by over 25 million users! We drive more sales for Amazon, Flipkart, Myntra, and Ajio than any other paid channel, including Google and Meta. Backed by legendary investor Ratan Tata and a recent $16 million boost from Affle, we’re on a rocket ship journey—already surpassing ₹300 crore in revenue and racing towards ₹500 crore. EarnKaro, our influencer referral platform, is trusted by over 500,000 influencers and sends more traffic to leading online retailers than any other platform. Whether it’s micro-influencers or top-tier creators, they choose EarnKaro to monetize their networks. BankKaro, our latest venture, is rapidly becoming India’s go-to FinTech aggregator. Join our dynamic team and help shape the future of online shopping, influencer marketing, and financial technology in India! Role Overview: As a Product Analyst, you will play a pivotal role in enabling data-driven product decisions. You will be responsible for deep-diving into product usage data, building dashboards and reports, optimizing complex queries, and driving feature-level insights that directly influence user engagement, retention, and experience. Key Responsibilities: Feature Usage & Adoption Analysis - Analyze event data to understand feature usage, retention trends, and product interaction patterns across web and app. User Journey & Funnel Analysis - Build funnel views and dashboards to identify drop-offs, friction points, and opportunities for UX or product improvements. Product Usage & Retention Analytics - Analyze user behavior, cohort trends, and retention using Redshift and BigQuery datasets. Partner with Product Managers to design and track core product KPIs. SQL Development & Optimization - Write and optimize complex SQL queries across Redshift and BigQuery. Build and maintain views, stored procedures, and data models for scalable analytics. 
Dashboarding & BI Reporting - Create and maintain high-quality Power BI dashboards to track DAU/WAU/MAU, feature adoption, engagement %, and drop-off trends. Light Data Engineering - Use Python (Pandas/NumPy) for data cleaning, transformation, and quick exploratory analysis. Business Insight Generation - Translate business questions into structured analyses and insights that inform product and business strategy.
Must-Have Skills:
Expert-level SQL across Redshift and BigQuery, including performance tuning, window functions, and procedure creation
Strong skills in Power BI (or Tableau), with the ability to build actionable, intuitive dashboards
Working knowledge of Python (Pandas) for quick data manipulation and ad-hoc analytics
Deep understanding of product metrics: DAU, retention, feature usage, funnel performance
Strong business acumen: the ability to connect data with user behavior and product outcomes
Clear communication and storytelling skills to present data insights to cross-functional teams
Good to Have:
Experience with mobile product analytics (Android & iOS)
Understanding of funnel, cohort, engagement, and retention metrics
Familiarity with A/B testing tools and frameworks
Experience working with Redshift, BigQuery, or cloud-based data pipelines
Certifications in Google Analytics, Firebase, or other analytics platforms
Why Join Us?
High Ownership: Drive key metrics for products used by millions.
Collaborative Culture: Work closely with founders, product, and tech teams.
Competitive Package: Best-in-class compensation, ESOPs, and perks.
Great Environment: Hybrid work, medical insurance, lunches, and learning budgets.
Ensuring a diverse and inclusive workplace where we learn from each other is core to CK's values. CashKaro.com and EarnKaro.com are Equal Employment Opportunity and Affirmative Action Employers. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity or expression, national origin, age, marital status, protected veteran status, or disability status. CashKaro.com, EarnKaro.com, and Pouring Pounds India Pvt. Ltd. will not pay any third-party agency or company that does not have a signed agreement with CashKaro.com and EarnKaro.com. Visit our Career Page at https://cashkaro.com/page/careers
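The retention analytics this role centers on (for example, D1 retention from an event log) can be sketched with Pandas; the user IDs and dates below are invented.

```python
import pandas as pd

# Toy event log standing in for app/web tracking events.
events = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3, 4, 4],
    "date": pd.to_datetime([
        "2024-05-01", "2024-05-02",  # user 1 returns the next day
        "2024-05-01", "2024-05-03",  # user 2 skips a day
        "2024-05-01",                # user 3 never returns
        "2024-05-02", "2024-05-03",  # user 4 starts a day later, returns
    ]),
})

# Each user's first active date defines their cohort anchor.
first_seen = (events.groupby("user_id")["date"].min()
              .rename("first_date").reset_index())
joined = events.merge(first_seen, on="user_id")
joined["day_n"] = (joined["date"] - joined["first_date"]).dt.days

# D1 retention: share of users active exactly one day after their first visit.
cohort_size = joined["user_id"].nunique()
d1_users = joined.loc[joined["day_n"] == 1, "user_id"].nunique()
d1_retention = d1_users / cohort_size
```

In practice the same day-offset logic, grouped additionally by first-visit date, yields the full cohort retention matrix behind a DAU/retention dashboard.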

Posted 1 week ago


0 years

0 Lacs

India

Remote


Job Title: Data Analyst Trainee
Location: Remote
Job Type: Internship (Full-Time)
Duration: 1–3 Months
Stipend: ₹25,000/month
Department: Data & Analytics
Job Summary:
We are seeking a motivated and analytical Data Analyst Trainee to join our remote analytics team. This internship is perfect for individuals eager to apply their data skills in real-world projects, generate insights, and support business decision-making through analysis, reporting, and visualization.
Key Responsibilities:
Collect, clean, and analyze large datasets from various sources
Perform exploratory data analysis (EDA) and generate actionable insights
Build interactive dashboards and reports using Excel, Power BI, or Tableau
Write and optimize SQL queries for data extraction and manipulation
Collaborate with cross-functional teams to understand data needs
Document analytical methodologies, insights, and recommendations
Qualifications:
Bachelor’s degree (or final-year student) in Data Science, Statistics, Computer Science, Mathematics, or a related field
Proficiency in Excel and SQL
Working knowledge of Python (Pandas, NumPy, Matplotlib) or R
Understanding of basic statistics and analytical methods
Strong attention to detail and problem-solving ability
Ability to work independently and communicate effectively in a remote setting
Preferred Skills (Nice to Have):
Experience with BI tools like Power BI, Tableau, or Google Data Studio
Familiarity with cloud data platforms (e.g., BigQuery, AWS Redshift)
Knowledge of data storytelling and KPI measurement
Previous academic or personal projects in analytics
What We Offer:
Monthly stipend of ₹25,000
Fully remote internship
Mentorship from experienced data analysts and domain experts
Hands-on experience with real business data and live projects
Certificate of Completion
Opportunity for a full-time role based on performance

Posted 1 week ago


6.0 years

0 Lacs

Pune, Maharashtra, India

On-site


About Calfus
Calfus is a Silicon Valley-headquartered software engineering and platforms company. The name Calfus finds its roots and ethos in the Olympic motto “Citius, Altius, Fortius – Communiter". Calfus seeks to inspire our team to rise faster, higher, stronger, and work together to build software at speed and scale. Our core focus lies in creating engineered digital solutions that bring about a tangible and positive impact on business outcomes. We stand for #Equity and #Diversity in our ecosystem and society at large. Connect with us at #Calfus and be a part of our extraordinary journey!
Position Overview:
As a Data Engineer – BI Analytics & DWH, you will play a pivotal role in designing and implementing comprehensive business intelligence solutions that empower our organization to make data-driven decisions. You will leverage your expertise in Power BI, Tableau, and ETL processes to create scalable architectures and interactive visualizations. This position requires a strategic thinker with strong technical skills and the ability to collaborate effectively with stakeholders at all levels.
Key Responsibilities:
BI Architecture & DWH Solution Design: Develop and design scalable BI analytics and DWH solutions that meet business requirements, leveraging tools such as Power BI and Tableau.
Data Integration: Oversee ETL processes using SSIS to ensure efficient data extraction, transformation, and loading into data warehouses.
Data Modelling: Create and maintain data models that support analytical reporting and data visualization initiatives.
Database Management: Utilize SQL to write complex queries and stored procedures, and manage data transformations using joins and cursors.
Visualization Development: Lead the design of interactive dashboards and reports in Power BI and Tableau, adhering to best practices in data visualization.
Collaboration: Work closely with stakeholders to gather requirements and translate them into technical specifications and architecture designs.
Performance Optimization: Analyse and optimize BI solutions for performance, scalability, and reliability.
Data Governance: Implement best practices for data quality and governance to ensure accurate reporting and compliance.
Team Leadership: Mentor and guide junior BI developers and analysts, fostering a culture of continuous learning and improvement.
Azure Databricks: Leverage Azure Databricks for data processing and analytics, ensuring seamless integration with existing BI solutions.
Qualifications:
Bachelor’s degree in Computer Science, Information Systems, Data Science, or a related field.
6-15+ years of experience in BI architecture and development, with a strong focus on Power BI and Tableau.
Proven experience with ETL processes and tools, especially SSIS.
Strong proficiency in SQL Server, including advanced query writing and database management.
Exploratory data analysis with Python.
Familiarity with the CRISP-DM model.
Ability to work with different data models.
Familiarity with databases like Snowflake, Postgres, Redshift, and MongoDB.
Experience with visualization tools such as Power BI, QuickSight, Plotly, and/or Dash.
Strong programming foundation in Python, with versatility to handle:
Data manipulation and analysis using Pandas, NumPy, and PySpark
Data serialization and formats like JSON, CSV, Parquet, and Pickle
Database interaction to query cloud-based data warehouses
Data pipeline and ETL tools like Airflow for orchestrating workflows and managing ETL pipelines; scripting and automation
Cloud services and tools such as S3 and AWS Lambda to manage cloud infrastructure (Azure SDK is a plus)
Code quality and management using version control, and collaboration in data engineering projects
Ability to interact with REST APIs and perform web scraping tasks is a plus
Calfus Inc. is an Equal Opportunity Employer. That means we do not discriminate against any applicant for employment, or any employee, because of age, colour, sex, disability, national origin, race, religion, or veteran status. All employment is decided based on qualifications, merit, and business need.
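The complex-query side of this role (joins and aggregates against a warehouse) can be illustrated with an in-memory SQLite sketch. The star-schema table and column names are illustrative only; a real DWH here would be SQL Server, Snowflake, Redshift, or similar.

```python
import sqlite3

# In-memory warehouse sketch: a fact table joined to a small dimension.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales (product_id INTEGER, amount REAL);
    INSERT INTO dim_product VALUES (1, 'hardware'), (2, 'software');
    INSERT INTO fact_sales VALUES (1, 10.0), (1, 15.0), (2, 99.0);
""")

# The kind of join-plus-aggregate a BI dashboard query would run.
rows = conn.execute("""
    SELECT d.category, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_product d ON d.product_id = f.product_id
    GROUP BY d.category
    ORDER BY revenue DESC
""").fetchall()
conn.close()
```

The same fact/dimension shape is what Power BI or Tableau models sit on top of, with the aggregate either precomputed in the warehouse or pushed down at query time.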

Posted 1 week ago


0 years

0 Lacs

Kolkata, West Bengal, India

On-site


Hiring Now: Expert Teacher for MATLAB, Simulink, Java & Python
📅 Job Type: Part-Time / Full-Time / Freelance (Flexible Options)
🧠 Subject Expertise Required:
We are looking for a passionate and experienced educator who can teach the following:
MATLAB: advanced mathematical modeling, simulations, and engineering applications
Simulink: block-diagram environment, dynamic systems modeling
Java: OOP concepts, GUI programming, application development
Python: core Python, data structures, libraries (NumPy, Pandas, Matplotlib, etc.)
🎓 Eligibility & Qualifications:
Bachelor’s/Master’s/Ph.D. in Computer Science, Engineering, or related fields
Strong command over at least two of the mentioned subjects (all four is a plus)
Prior teaching experience (online or offline) will be an added advantage
Excellent communication skills and a passion for teaching
🧾 Roles & Responsibilities:
Deliver clear, concept-based lessons to students
Design assignments, quizzes, and project tasks
Conduct doubt-clearing and revision sessions
Guide students in project work and real-world problem-solving
Track student progress and provide feedback
💸 Salary & Perks:
Attractive pay package (based on subject and experience)
Flexible working hours
Opportunity to work with a reputed educational brand
Chance to impact the lives of aspiring engineers and programmers
📨 How to Apply:
Submit your CV via WhatsApp at 8981679014
📞 For queries or more information, call 8981679014
🔔 Limited vacancies. Apply soon to become a part of our dynamic teaching team!

Posted 1 week ago


7.0 years

0 Lacs

India

On-site


Coursera was launched in 2012 by Andrew Ng and Daphne Koller, with a mission to provide universal access to world-class learning. It is now one of the largest online learning platforms in the world, with 175 million registered learners as of March 31, 2025. Coursera partners with over 350 leading universities and industry leaders to offer a broad catalog of content and credentials, including courses, Specializations, Professional Certificates, and degrees. Coursera’s platform innovations enable instructors to deliver scalable, personalized, and verified learning experiences to their learners. Institutions worldwide rely on Coursera to upskill and reskill their employees, citizens, and students in high-demand fields such as GenAI, data science, technology, and business. Coursera is a Delaware public benefit corporation and a B Corp. Join us in our mission to create a world where anyone, anywhere can transform their life through access to education. We're seeking talented individuals who share our passion and drive to revolutionize the way the world learns. At Coursera, we are committed to building a globally diverse team and are thrilled to extend employment opportunities to individuals in any country where we have a legal entity. We require candidates to possess eligible working rights and have a compatible timezone overlap with their team to facilitate seamless collaboration. Coursera has a commitment to enabling flexibility and workspace choices for employees. Our interviews and onboarding are entirely virtual, providing a smooth and efficient experience for our candidates. As an employee, we enable you to select your main way of working, whether it's from home, one of our offices or hubs, or a co-working space near you. Job Overview: At Coursera, our Data Science team is helping to build the future of education through data-driven decision making and data-powered products. 
We drive product and business strategy through measurement, experimentation, and causal inference to help Coursera deliver effective content discovery and personalized learning at scale. We believe the next generation of teaching and learning should be personalized, accessible, and efficient. With our scale, data, technology, and talent, Coursera and its Data Science team are positioned to make that vision a reality. We are seeking a highly skilled and collaborative Senior Data Scientist to join our Data Science team. In this role, you will report directly to the Director of Data Science and play a pivotal role in shaping our product strategy through data-driven insights and analytics. You will leverage your expertise in user behavior tracking, instrumentation, A/B testing, and advanced analytics techniques to gain a deep understanding of how users interact with our platform. Your insights will directly inform product development, enhance user experience, and drive engagement across various segments. Our ideal candidate possesses strong analytical skills, business acumen, and the ability to translate analysis into actionable recommendations that drive product improvements and user engagement. You should have excellent written and verbal communication skills. Collaborating closely with cross-functional teams—including product managers, designers, and engineers—you will ensure that data informs every aspect of product decision-making. Responsibilities: Design and implement instrumentation strategies for accurate tracking of user interactions and data collection. Develop and maintain data pipelines to ensure seamless data flow and accessibility for analysis. Analyze user behavior to provide actionable insights that inform product enhancements and drive user engagement. Conduct A/B testing and experimentation to evaluate the impact of product features and optimize user experience. 
Perform advanced analytics to uncover trends and patterns in user data, guiding product development decisions. Collaborate with product managers, designers, and engineers to define key performance indicators (KPIs) and assess the impact of product changes. Analyze user feedback and survey data to gain insights into user satisfaction and identify areas for improvement. Create interactive dashboards and reports to visualize data and communicate findings effectively to stakeholders. Leverage statistical analysis and predictive modeling to inform product roadmap and strategic decisions. Basic Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, Statistics, or a related technical field. 7+ years of experience using data to advise product or business teams, with a focus on product analytics. Strong SQL skills and advanced proficiency in statistical programming languages such as Python, along with experience using data manipulation libraries (e.g., Pandas, NumPy). Knowledge of data pipeline development and best practices in data management. Strong applied statistics skills, including experience with statistical inference techniques, predictive modeling and A/B testing methodologies. Intermediate proficiency in data visualization tools (e.g., Tableau, Power BI, Looker) and a willingness to learn new tools as needed. Excellent business intuition and project management abilities. Effective communication and presentation skills, with experience presenting to diverse teams and stakeholders, from individual contributors to executives. 
Preferred Qualifications:
Familiarity with the educational technology sector, specifically with platforms like Coursera
Experience with Airflow, Databricks, and/or Looker
Experience with Amplitude
Coursera is an Equal Employment Opportunity Employer and considers all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, age, marital status, national origin, protected veteran status, disability, or any other legally protected class. If you are an individual with a disability and require a reasonable accommodation to complete any part of the application process, please contact us at accommodations@coursera.org. For California candidates, please review our CCPA Applicant Notice here. For our global candidates, please review our GDPR Recruitment Notice here.
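The A/B testing methodology this role calls for often reduces, for conversion-rate experiments, to a two-proportion z-test. Here is a stdlib-only sketch; the conversion counts are hypothetical.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal: P(|Z| > z) = erfc(|z|/sqrt(2)).
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical experiment: 200/2000 control vs 260/2000 variant conversions.
z, p_value = two_proportion_ztest(200, 2000, 260, 2000)
```

In practice one would fix the sample size and significance threshold before the experiment starts; the test statistic itself is the easy part.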

Posted 1 week ago


3.0 years

0 Lacs

Gurugram, Haryana, India

On-site


About the Company
Re:Sources is the backbone of Publicis Groupe, the world's third-largest communications group. Formed in 1998 as a small team to service a few Publicis Groupe firms, Re:Sources has grown to 5,000+ people servicing a global network of prestigious advertising, public relations, media, healthcare, and digital marketing agencies. We provide technology solutions and business services including finance, accounting, legal, benefits, procurement, tax, real estate, treasury, and risk management to help Publicis Groupe agencies do their best: create and innovate for their clients. In addition to providing essential, everyday services to our agencies, Re:Sources develops and implements platforms, applications, and tools to enhance productivity, encourage collaboration, and enable professional and personal development. We continually transform to keep pace with our ever-changing communications industry and thrive on a spirit of innovation felt around the globe. With our support, Publicis Groupe agencies continue to create and deliver award-winning campaigns for their clients.

About the Role
The main purpose of this role is to advance the application of business intelligence, advanced data analytics, and machine learning for Marcel. The Data Scientist will work with other data scientists, engineers, and product owners to ensure the delivery of all commitments on time and at high quality.

Responsibilities
- Design and develop advanced data science and machine learning algorithms, with a strong emphasis on Natural Language Processing (NLP) for personalized content, user understanding, and recommendation systems.
- Work on end-to-end LLM-driven features, including fine-tuning pre-trained models (e.g., BERT, GPT), prompt engineering, vector embeddings, and retrieval-augmented generation (RAG).
- Build robust models on diverse datasets to solve for semantic similarity, user intent detection, entity recognition, and content summarization/classification.
- Analyze user behaviour through data and derive actionable insights for platform feature improvements using experimentation (A/B testing, multivariate testing).
- Architect scalable solutions for deploying and monitoring language models within platform services, ensuring performance and interpretability.
- Collaborate cross-functionally with engineers, product managers, and designers to translate business needs into NLP/ML solutions.
- Regularly assess and maintain model accuracy and relevance through evaluation, retraining, and continuous improvement processes.
- Write clean, well-documented code in notebooks and scripts, following best practices for version control, testing, and deployment.
- Communicate findings and solutions effectively across stakeholders, from technical peers to executive leadership.
- Contribute to a culture of innovation and experimentation, continuously exploring new techniques in the rapidly evolving NLP/LLM space.

Qualifications
Minimum experience (relevant): 3 years. Maximum experience (relevant): 5 years.

Required Skills
- Proficiency in Python and NLP frameworks: spaCy, NLTK, Hugging Face Transformers, OpenAI, LangChain.
- Strong understanding of LLMs, embedding techniques (e.g., SBERT, FAISS), RAG architecture, prompt engineering, and model evaluation.
- Experience in text classification, summarization, topic modeling, named entity recognition, and intent detection.
- Experience deploying ML models in production and working with orchestration tools such as Airflow and MLflow.
- Comfortable working in cloud environments (Azure preferred) and with tools such as Docker, Kubernetes (AKS), and Git.
- Strong experience with data science/ML libraries in Python (SciPy, NumPy, TensorFlow, scikit-learn, etc.)
- Strong experience working in cloud development environments (especially Azure, ADF, PySpark, Databricks, SQL)
- Experience building data science models for use in front-end, user-facing applications, such as recommendation models
- Experience with REST APIs, JSON, and streaming datasets
- Understanding of graph data; Neo4j is a plus
- Strong understanding of RDBMS data structures, Azure Tables, Blob storage, and other data sources
- Understanding of Jenkins and CI/CD processes using Git for cloud configs and standard code repositories such as ADF configs and Databricks

Preferred Skills
- Bachelor's degree in engineering, computer science, statistics, mathematics, information systems, or a related field from an accredited college or university (Master's degree preferred), or equivalent work experience.
- Advanced knowledge of data science techniques, and experience building, maintaining, and documenting models.
- Advanced working SQL knowledge and experience with relational databases and query authoring, as well as working familiarity with a variety of databases, preferably graph DBs.
- Experience building and optimizing ADF- and PySpark-based data pipelines, architectures, and datasets on Graph and Azure Data Lake.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytical skills related to working with unstructured datasets.
- Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.

Posted 1 week ago


0 years

0 Lacs

India

Remote


CryptoChakra.com is a rapidly evolving cryptocurrency analytics and education platform dedicated to empowering users with data-driven insights and next-generation tools. Currently in its development phase, we are building scalable AI infrastructure to support advanced analytics, real-time market predictions, and robust educational resources. By integrating artificial intelligence with blockchain technology, our platform aims to transform raw data into actionable intelligence for crypto enthusiasts and investors worldwide. Our mission is to deliver secure, reliable, and scalable data solutions that fuel AI-driven innovations and foster financial literacy. As we expand, we remain committed to technical excellence, transparency, and democratizing access to cutting-edge crypto analytics.

Role Description
Position: Data Analyst (AI Infrastructure), Remote
Employment Type: Internship or Entry-Level (paid or unpaid, depending on suitability and project requirements)

Key Responsibilities:
- AI Data Pipeline Management: Assist in building, maintaining, and optimizing data pipelines that feed AI and machine learning models with clean, structured, and accurate data.
- Data Quality Assurance: Monitor, validate, and preprocess large datasets to ensure data integrity and reliability for AI training and inference.
- Infrastructure Collaboration: Work closely with data engineers and AI teams to translate computational needs into scalable infrastructure solutions, leveraging cloud platforms (AWS, Azure, Google Cloud) and automation tools.
- Data Transformation & Integration: Support the development and maintenance of data warehouses, data lakes, and integration frameworks to streamline data flow for analytics and AI workloads.
- Performance Monitoring: Track and report on the performance of data infrastructure, identifying bottlenecks and opportunities for optimization.
- Process Improvement: Document workflows, automate repetitive tasks, and implement best practices for data governance and observability.

Learning Outcomes:
- Hands-on experience with AI/ML data workflows, cloud infrastructure, and big data tools.
- Exposure to automated data validation, ETL processes, and data observability platforms.
- Mentorship from experts in AI infrastructure, data engineering, and blockchain analytics.

Qualifications
Core Requirements:
- Technical Proficiency: Strong foundation in Python, SQL, and data manipulation libraries (Pandas, NumPy).
- Data Management Skills: Experience with data cleaning, preprocessing, and quality assurance for AI/ML applications.
- Cloud & Automation: Familiarity with cloud platforms (AWS, Azure, Google Cloud) and automation tools (Terraform, Ansible, Docker, Kubernetes) is a plus.
- Analytical Mindset: Ability to troubleshoot data anomalies, optimize data flows, and collaborate with cross-functional teams.
- Remote Work Ethic: Self-motivated with strong time management and communication skills for remote collaboration.
- Academic Background: Pursuing or holding a degree in Data Science, Computer Science, Engineering, or a related field.

Preferred Assets:
- Knowledge of data validation and observability tools (Great Expectations, Monte Carlo, Apache Griffin).
- Experience with version control (Git) and agile development methodologies.
- Interest in blockchain technology, cryptocurrency markets, or decentralized applications.

Posted 1 week ago


1.0 years

0 Lacs

Mohali, Punjab

On-site


Job Information
Job Opening ID: ZR_254_JOB | Date Opened: 06/04/2025 | Job Type: Full time | Industry: IT Services | City: Mohali | State/Province: Punjab | Country: India | Zip/Postal Code: 160055

About Us
DigiMantra is a global IT service provider offering a comprehensive suite of solutions including Digital Transformation, Cloud Computing, Cybersecurity, AI, and Data Analytics. With a strong global presence, we have CoEs in the US, UAE, and India. In India, our development centres in Hyderabad, Mohali, and Ludhiana enable us to help businesses succeed in the digital age. Our inventive and bespoke solutions fuel development and success, allowing customers to stay ahead of the competition. As a trusted partner with knowledge and adaptability, DigiMantra delivers results that influence the future of business in a fast-changing world.

Job Description
Job Summary: We are seeking a motivated and skilled AI Engineer with 1+ years of experience to join our growing team. In this role, you will contribute to the development, optimization, and deployment of AI models to solve key business challenges. You will work closely with cross-functional teams to create cutting-edge AI solutions and have the opportunity to enhance your skills in a dynamic and innovative environment.

Responsibilities
- Develop and deploy AI models to solve real-world problems.
- Clean and preprocess data for machine learning.
- Optimize algorithms for accuracy and performance.
- Collaborate with cross-functional teams to integrate AI into products.
- Test and validate AI models for production readiness.

Qualifications
- 1+ years of experience in AI, ML, or data science.
- Proficiency in Python and ML frameworks (TensorFlow, PyTorch).
- Strong data manipulation skills (Pandas, NumPy).
- Bachelor's degree in Computer Science or a related field.

Preferred
- Experience with cloud platforms (AWS, GCP, Azure).
- Knowledge of advanced ML techniques (NLP, Computer Vision).

Posted 1 week ago


5.0 years

0 Lacs

Pune, Maharashtra

On-site


Job details
Employment Type: Full-Time | Location: Pune, Maharashtra, India | Job Category: Innovation & Technology | Job Number: WD30243101

Job Description: AI Data Scientist
Locations: Pune, India

Buildings are getting smarter with connected technologies. With more connectivity, there is access to more data from sensors installed in buildings. Johnson Controls is leading the way in providing AI-enabled enterprise solutions that contribute to optimized energy utilization, auto-generation of building insights, and predictive maintenance for installed devices. Our Data Strategy & Intelligence team is looking for a Data Scientist to join our growing team. You will play a critical role in developing and deploying machine learning/Generative AI and time series analysis models in production.

The Role
To be successful in this role, the Data Scientist should have deep knowledge of machine learning concepts and Large Language Models (LLMs), including their training, optimization, and deployment, as well as time series models, along with experience developing and deploying ML/Generative AI/time series models in production.

What you will do
As an AI Data Scientist at Johnson Controls, you will help develop and maintain the AI algorithms and capabilities within our digital products. These applications use data from commercial buildings and apply machine learning, GenAI, or other advanced algorithms to provide value in the following ways:
- Optimize building energy consumption and occupancy, reduce CO2 emissions, enhance users' comfort, etc.
- Generate actionable insights to improve building operations
- Translate data into direct recommendations for various stakeholders

Your efforts will ensure that our AI solutions deliver robust and repeatable outcomes through well-designed algorithms and well-written software. To be successful in this role, the AI Data Scientist should be comfortable applying machine-learning concepts to practical applications while handling the inherent challenges of real-world datasets.

How you will do it
- Contribute as a member of the AI team with assigned tasks
- Collaborate with product managers to design new AI capabilities
- Explore and analyze available datasets for potential applications
- Write Python code to develop ML/Generative AI/time series prediction solutions that address complex business requirements
- Research and implement state-of-the-art techniques in Generative AI solutions
- Pre-train and fine-tune models over CPU/GPU clusters while optimizing for trade-offs
- Follow code-quality standards and best practices in software development
- Develop and maintain test cases to validate algorithm correctness
- Assess failures to identify causes and plan fixes for bugs
- Communicate key results to stakeholders
- Leverage JIRA to plan work and track issues

What we look for
- Bachelor's/Master's degree in Computer Science, Statistics, Mathematics, or a related field.
- 5+ years of experience developing and deploying ML models, with a proven record of delivering production-ready ML models.
- Proficiency with Python and standard ML libraries, e.g., PyTorch, TensorFlow, Keras, NumPy, Pandas, scikit-learn, Matplotlib, Transformers.
- Strong understanding of ML algorithms and techniques, e.g., regression, classification, clustering, deep learning, NLP/Transformer models, LLMs, and time series prediction models.
- Experience with state-of-the-art LLM frameworks and models (Azure OpenAI, Meta Llama, etc.), advanced prompt engineering techniques, and LLM fine-tuning/training.
- Experience with cloud-based (AWS/GCP/Azure) ML/GenAI model development and deployment.
- Excellent verbal and written communication skills.

Preferred
- Prior domain experience in smart buildings and building operations optimization
- Experience working with Microsoft Azure Cloud.

Posted 1 week ago


0.0 years

0 Lacs

Gurugram, Haryana

On-site


- 3+ years of building machine learning models for business application experience
- Experience programming in Java, C++, Python or related language
- Experience with neural deep learning methods and machine learning

Amazon Shipping Team

Basic Qualifications:
- Btech/Mtech in Computer Science, Machine Learning, Operations Research, Statistics, or related technical field
- Experience applying ML techniques to solve complex business problems
- Programming skills in Python, R, or similar languages
- Experience with modern ML frameworks (PyTorch, TensorFlow, etc.)

About the Role: At Amazon Shipping, we're revolutionizing package delivery through machine learning. Our network handles packages daily with predictive monitoring, proactive failure detection, and intelligent redundancy, all while optimizing costs for our customers.

Key Responsibilities:
- Design and develop ML models for: transportation cost auditing and discrepancy detection; package-level shipping cost prediction; First Mile optimization through warehouse pickup forecasting; and delivery delay prediction using network signals and external factors
- Collaborate with cross-functional teams to implement ML solutions at scale
- Author scientific papers for ML conferences
- Mentor team members in ML best practices
- Provide ML consultation across organizations

Preferred Qualifications:
- PhD in a related technical field
- Experience with large-scale distributed systems
- Publication record in top-tier ML conferences
- Expertise in time series forecasting and anomaly detection
- Background in transportation/logistics optimization
- Communication skills with technical and non-technical stakeholders

Our Team: You'll join a diverse team of Applied Scientists, Software Engineers, and Business Intelligence Engineers working on edge ML solutions. We're passionate about solving complex problems and delivering customer value through innovation.

Key job responsibilities
Your role will require you to demonstrate Think Big and Invent and Simplify by refining and translating Transportation-domain business problems into one or more Machine Learning problems. You will use techniques from a wide array of machine learning paradigms, such as supervised, unsupervised, semi-supervised, and reinforcement learning. Your model choices will include, but not be limited to, linear/logistic models, tree-based models, deep learning models, ensemble models, and Q-learning models. You will use techniques such as LIME and SHAP to make your models interpretable for your customers. You will employ a family of reusable modelling solutions to ensure that your ML solution scales across multiple regions (such as North America, Europe, Asia) and package movement types (such as small parcel movements and truck movements). You will partner with Applied Scientists and Research Scientists from other teams in the US and India working on related business domains. Your models are expected to be of production quality, and will be used directly in production services.

- Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc.
- Experience with large-scale distributed systems such as Hadoop, Spark, etc.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Posted 1 week ago


Exploring numpy Jobs in India

Numpy is a widely used library in Python for numerical computing and data analysis. In India, there is a growing demand for professionals with expertise in numpy. Job seekers in this field can find exciting opportunities across various industries. Let's explore the numpy job market in India in more detail.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Gurgaon
  5. Chennai

Average Salary Range

The average salary range for numpy professionals in India varies based on experience level:

- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-20 lakhs per annum

Career Path

Typically, a career in numpy progresses as follows:

  1. Junior Developer
  2. Data Analyst
  3. Data Scientist
  4. Senior Data Scientist
  5. Tech Lead

Related Skills

In addition to numpy, professionals in this field are often expected to have knowledge of:

- Pandas
- Scikit-learn
- Matplotlib
- Data visualization
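These libraries are designed to interoperate: raw NumPy arrays typically flow into a Pandas DataFrame for labeling and exploration, and then into a modeling or visualization step. A minimal sketch of that workflow (the data and column names here are invented for illustration):

```python
import numpy as np
import pandas as pd

# Synthetic data as NumPy arrays: roughly y = 3x plus a little noise
rng = np.random.default_rng(seed=0)
x = rng.uniform(0, 10, size=50)
y = 3.0 * x + rng.normal(0.0, 0.1, size=50)

# Pandas wraps the same values with labels for exploration
df = pd.DataFrame({"feature": x, "target": y})
summary = df.describe()  # quick distribution summary per column

# NumPy itself can fit the trend line; scikit-learn would be the
# natural next step for richer models, and Matplotlib for plotting
slope, intercept = np.polyfit(df["feature"], df["target"], 1)
```

The recovered `slope` should land close to the true value of 3.0, which is the kind of sanity check interviewers often ask candidates to reason about.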

Interview Questions

  • What is numpy and why is it used? (basic)
  • Explain the difference between a Python list and a numpy array. (basic)
  • How can you create a numpy array with all zeros? (basic)
  • What is broadcasting in numpy? (medium)
  • How can you perform element-wise multiplication of two numpy arrays? (medium)
  • Explain the use of the np.where() function in numpy. (medium)
  • What is vectorization in numpy? (advanced)
  • How does memory management work in numpy arrays? (advanced)
  • Describe the difference between np.array and np.matrix in numpy. (advanced)
  • How can you speed up numpy operations? (advanced)
  • ...
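Several of the questions above can be answered with a few lines of runnable NumPy. A minimal sketch (array contents and sizes are chosen purely for illustration):

```python
import numpy as np

# Create an array of all zeros (basic)
zeros = np.zeros((2, 3))

# Unlike a Python list of generic objects, a numpy array is a typed,
# contiguous buffer that supports whole-array math (basic)
a = np.array([1, 2, 3])
b = np.array([10, 20, 30])

# Element-wise multiplication (medium): `*` multiplies element by element
prod = a * b  # array([10, 40, 90])

# Broadcasting (medium): shapes (3, 1) and (1, 4) stretch to (3, 4)
col = np.arange(3).reshape(3, 1)
row = np.arange(4).reshape(1, 4)
table = col + row  # a 3x4 addition table, no explicit loop

# np.where (medium): element-wise selection based on a condition
clipped = np.where(a > 1, a, 0)  # array([0, 2, 3])

# Vectorization (advanced): replace Python-level loops with whole-array
# operations that run in optimized C, typically orders of magnitude faster
x = np.arange(1_000_000)
total = np.sum(x * x)  # vs. sum(i * i for i in range(1_000_000))
```

Being able to explain *why* the vectorized version is faster (one C loop over a contiguous typed buffer instead of a million interpreted iterations) is usually what separates a medium answer from an advanced one.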

Closing Remark

As you explore job opportunities in the field of numpy in India, remember to keep honing your skills and stay updated with the latest developments in the industry. By preparing thoroughly and applying confidently, you can land the numpy job of your dreams!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies