
2323 Numpy Jobs - Page 37

JobPe aggregates results for easy access, but you apply directly on the original job portal.

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Software Engineer II

Are you interested in bringing your technical expertise to projects? Are you a detail-oriented paralegal with a 'can do' attitude?

About Our Team
LexisNexis Legal & Professional, which serves customers in more than 150 countries with 11,300 employees worldwide, is part of RELX, a global provider of information-based analytics and decision tools for professional and business customers.

About The Role
This position performs moderate research, design, and software development assignments within a specific software functional area or product line.

Responsibilities
- Write and review portions of detailed specifications for the development of system components of moderate complexity.
- Complete simple bug fixes.
- Work closely with other development team members to understand product requirements and translate them into software designs.
- Operate in various development environments (Agile, Waterfall, etc.) while collaborating with key stakeholders.
- Resolve technical issues as necessary.
- Keep abreast of new technological developments.
- All other duties as assigned.

Requirements
- 2+ years of experience as a Python developer.
- Working knowledge of API integration.
- In-depth understanding of Python software development stacks, ecosystems, frameworks, and tools such as NumPy, SciPy, Pandas, FastAPI, and Django.
- Experience with popular Python frameworks such as Django, Flask, or FastAPI.
- Experience with front-end development using HTML, CSS, and JavaScript.
- Familiarity with database technologies such as SQL and NoSQL.
- Excellent problem-solving ability with solid communication and collaboration skills.
- Knowledge of AWS, Docker, and Lambda functions.
- Strong communication and collaboration skills, including the ability to cooperate with multiple parties and align on priorities.
- Knowledge of Agile methodology.
- Ability to work with simple data models.
- Familiarity with industry best practices, e.g., code coverage.
Work in a way that works for you
We promote a healthy work/life balance across the organisation and offer an appealing working prospect for our people. With numerous wellbeing initiatives, shared parental leave, study assistance, and sabbaticals, we will help you meet your immediate responsibilities and your long-term goals.
- Working flexible hours: flexing the times when you work in the day to help you fit everything in and work when you are the most productive.

Working for you: Benefits
We know that your wellbeing and happiness are key to a long and successful career. These are some of the benefits we are delighted to offer:
- Comprehensive Health Insurance: covers you, your immediate family, and parents.
- Enhanced Health Insurance Options: competitive rates negotiated by the company.
- Group Life Insurance: ensuring financial security for your loved ones.
- Group Accident Insurance: extra protection for accidental death and permanent disablement.
- Flexible Working Arrangement: achieve a harmonious work-life balance.
- Employee Assistance Program: access support for personal and work-related challenges.
- Medical Screening: your well-being is a top priority.
- Modern Family Benefits: maternity, paternity, and adoption support.
- Long-Service Awards: recognizing dedication and commitment.
- New Baby Gift: celebrating the joy of parenthood.
- Subsidized Meals in Chennai: enjoy delicious meals at discounted rates.
- Various Paid Time Off: Casual Leave, Sick Leave, Privilege Leave, Compassionate Leave, Special Sick Leave, and Gazetted Public Holidays.
- Free transport: home-office-home pickup and drop (applies in Chennai).

About The Business
LexisNexis Legal & Professional® provides legal, regulatory, and business information and analytics that help customers increase their productivity, improve decision-making, achieve better outcomes, and advance the rule of law around the world. As a digital pioneer, the company was the first to bring legal and business information online with its Lexis® and Nexis® services.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site


AI Data Scientist
Location: Pune, India

Buildings are getting smarter with connected technologies. With more connectivity, there is access to more data from sensors installed in buildings. Johnson Controls is leading the way in providing AI-enabled enterprise solutions that contribute to optimized energy utilization, auto-generation of building insights, and predictive maintenance for installed devices. Our Data Strategy & Intelligence team is looking for a Data Scientist to join our growing team. You will play a critical role in developing and deploying machine learning, generative AI, and time series analysis models in production.

The Role
To be successful in this role, the Data Scientist should have deep knowledge of machine learning concepts and Large Language Models (LLMs), including their training, optimization, and deployment, as well as time series models, plus experience in developing and deploying ML, generative AI, and time series models in production.

What you will do
As an AI Data Scientist at Johnson Controls, you will help develop and maintain the AI algorithms and capabilities within our digital products. These applications will use data from commercial buildings and apply machine learning, GenAI, or other advanced algorithms to provide value in the following ways:
- Optimize building energy consumption and occupancy, reduce CO2 emissions, enhance users' comfort, etc.
- Generate actionable insights to improve building operations.
- Translate data into direct recommendations for various stakeholders.
Your efforts will ensure that our AI solutions deliver robust and repeatable outcomes through well-designed algorithms and well-written software. The AI Data Scientist should be comfortable applying machine-learning concepts to practical applications while handling the inherent challenges of real-world datasets.

How you will do it
- Contribute as a member of the AI team with assigned tasks.
- Collaborate with product managers to design new AI capabilities.
- Explore and analyze available datasets for potential applications.
- Write Python code to develop ML, generative AI, and time series prediction solutions that address complex business requirements.
- Research and implement state-of-the-art techniques in generative AI solutions.
- Pre-train and fine-tune models over CPU/GPU clusters while optimizing for trade-offs.
- Follow code-quality standards and best practices in software development.
- Develop and maintain test cases to validate algorithm correctness.
- Assess failures to identify causes and plan fixes for bugs.
- Communicate key results to stakeholders.
- Leverage JIRA to plan work and track issues.

What we look for
- Bachelor's/Master's degree in Computer Science, Statistics, Mathematics, or a related field.
- 5+ years of experience developing and deploying ML models, with a proven record of delivering production-ready ML models.
- Proficiency with Python and standard ML libraries, e.g., PyTorch, TensorFlow, Keras, NumPy, Pandas, scikit-learn, Matplotlib, Transformers.
- Strong understanding of ML algorithms and techniques, e.g., regression, classification, clustering, deep learning, NLP/Transformer models, LLMs, and time series prediction models.
- Experience in developing SOA LLM frameworks and models (Azure OpenAI, Meta Llama, etc.), advanced prompt engineering techniques, and LLM fine-tuning/training.
- Experience with cloud-based (AWS/GCP/Azure) ML/GenAI model development and deployment.
- Excellent verbal and written communication skills.

Preferred
- Prior domain experience in smart buildings and building operations optimization.
- Experience working with Microsoft Azure Cloud.
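The time series prediction work described above can be illustrated with a minimal NumPy sketch. This is a generic example, not code from the posting: it fits an AR(1) coefficient by least squares on a synthetic sensor-like series and iterates the model forward.

```python
import numpy as np

def fit_ar1(series: np.ndarray) -> float:
    """Least-squares estimate of phi in the AR(1) model x[t] ~ phi * x[t-1]."""
    x_prev, x_next = series[:-1], series[1:]
    return float(np.dot(x_prev, x_next) / np.dot(x_prev, x_prev))

def forecast_ar1(series: np.ndarray, steps: int) -> np.ndarray:
    """Iterate the fitted model forward from the last observation."""
    phi = fit_ar1(series)
    out, last = [], series[-1]
    for _ in range(steps):
        last = phi * last
        out.append(last)
    return np.array(out)

# Synthetic noiseless series with phi = 0.9, e.g. a decaying sensor reading
series = np.array([100 * 0.9**t for t in range(50)])
phi = fit_ar1(series)
print(round(phi, 3))  # 0.9 exactly, since the series is noiseless
```

On real building telemetry the coefficient would be estimated on noisy data, and a production model would add trend/seasonality terms; the point here is only the fit-then-iterate shape.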

Posted 1 week ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Description
The IT Technical Analyst - Senior provides technical expertise in applications or infrastructure, delivering detailed solution designs that translate business requirements into scalable IT solutions. This role supports the design, development, implementation, and maintenance of IT systems and infrastructure in alignment with Cummins' architectural standards and security practices. The position involves technical analysis, design review, solution configuration, and cross-functional collaboration to deliver optimized and reliable solutions.

Key Responsibilities
- Develop and manage technical specifications to guide application, infrastructure, or solution development.
- Analyze and evaluate solution options, including commercial off-the-shelf products vs. custom-built solutions.
- Deliver and document detailed solution designs that meet business, performance, security, scalability, and maintainability requirements.
- Collaborate with infrastructure teams and service providers to ensure solution delivery meets specifications and quality standards.
- Participate in design and code reviews to ensure adherence to standards and requirements.
- Drive reuse of components and efficiency in build and deployment processes using automation and CI/CD pipelines.
- Support the creation of runbooks and documentation for end-user support and knowledge transfer.
- Assist in testing strategy and execution of test plans for solution validation.
- Provide advanced (Level 3) support for critical technical issues and participate in system remediation.
- Analyze existing systems for improvements, optimizations, or compliance needs.

Experience
- 8-10 years of relevant technical experience in IT solution development and infrastructure design.
- Intermediate level of experience with demonstrated ability to design, implement, and support enterprise-level solutions.

Skills And Experience
Strong proficiency in:
- SQL, DAX, M Query, Python
- SSIS, SSAS, Azure Data Lake, Databricks
- Power BI service development and optimization
- CI/CD pipelines (GitHub)
- Agile tools (Jira, Confluence)
Familiarity with:
- C#, R, TensorFlow, PyTorch, NumPy
- Azure Analytics Services (AAS)
Experience in:
- Data pipeline design and optimization
- AI/ML model integration and LLM training within Databricks
- Data governance and data quality metrics
- Cybersecurity and secure solution design
- Technical documentation and stakeholder communication
- Advanced data modeling in SSAS, Azure, Databricks

Competencies
- Customer Focus: builds strong relationships and delivers customer-centric solutions.
- Global Perspective: approaches issues with a global lens.
- Manages Complexity: resolves complex and sometimes contradictory data-driven problems.
- Manages Conflict: handles challenging interpersonal situations effectively.
- Optimizes Work Processes: continuously improves efficiency and effectiveness of processes.
- Solution Configuration & Design: creates, configures, and validates COTS and custom-built solutions using industry-standard tools and practices.
- Performance Tuning: optimizes systems and applications for maximum performance and scalability.
- Solution Validation Testing: ensures solutions meet business and technical requirements through structured testing.
- Values Differences: recognizes and leverages diverse perspectives and backgrounds.

Qualifications
College, university, or equivalent degree in Computer Science, Information Technology, Business, or a related field is required. Relevant industry certifications are a plus. May require licensing for compliance with export controls or sanctions regulations.

Job: Systems/Information Technology
Organization: Cummins Inc.
Role Category: Hybrid
Job Type: Exempt - Experienced
ReqID: 2414669
Relocation Package: No

Posted 1 week ago

Apply

3.0 - 8.0 years

0 Lacs

Gurgaon, Haryana, India

On-site


Position: Data Analyst
Location: Gurgaon
Timings: 12:00 PM to 10:00 PM

Role Overview
- Do independent research, analyze, and present data as assigned.
- Work in close collaboration with the EXL team and clients on commercial insurance actuarial projects for US/UK markets.
- Understand risk and underwriting, and replicate rating methodology.
- Develop and use collaborative relationships to facilitate the accomplishment of working goals.
- Working experience in the P&C insurance domain for US insurance markets is a must.
- Excellent written and verbal communication skills.
- Facilitate data requirements while working with actuaries.
- Excellent SQL skills to extract data for scheduled processes and ad hoc requests.
- Automate manual processes and ETL pipelines using Python.
- Utilise/help migrate existing SAS processes from SAS to SAS Viya.

Key Responsibilities
- Collaborate with actuaries to understand their data and reporting needs related to premium, loss, and exposure analysis.
- Build and optimize complex SQL queries to extract, join, and aggregate large datasets from multiple relational sources.
- Develop and automate data pipelines in Python for ETL, data wrangling, and exploratory analytics.
- Use SAS for legacy processes, statistical outputs, and ad hoc data manipulation as required by actuarial models/processes.
- Validate data outputs for accuracy and consistency, troubleshoot discrepancies, and ensure data quality before delivery.
- Create documentation of data logic, process flows, and metadata in Confluence and SharePoint to ensure transparency and knowledge sharing.
- Contribute to continuous improvement by recommending process automation or optimization opportunities in existing workflows.
- Support dashboarding or visualization needs (optional) using tools like Power BI.
- Work in an agile or iterative environment with clear communication of progress, blockers, and timelines.

Required Skillset
- SQL (expert level): complex joins, subqueries, window functions, CTEs, query optimization and performance tuning, working with large tables in cloud/on-premise environments (Teradata, SQL Server, or equivalent).
- Python (intermediate to expert): data wrangling using pandas and NumPy, script automation and API consumption, familiarity with Visual Studio, Jupyter, and modular Python scripting.
- SAS (intermediate): reading/writing from/to datasets, connecting with external sources, macros, PROC SQL.
- Knowledge of AWS is preferred.
- Experience with commercial insurance.
- Understanding of actuarial concepts such as loss triangles, reserving, and pricing.
- Exposure to Git, JIRA, Confluence.
- Proficiency in Excel, VBA macros (preferred).

Candidate Profile
- Bachelor's/Master's degree in engineering, economics, mathematics, actuarial sciences, or a similar technical degree. A Master's in business or financial management is also suitable.
- Affiliation to IAI or IFoA, with at least 3 actuarial exams.
- 3-8 years' experience in data analytics in the insurance or financial services industry with a good understanding of actuarial concepts: pricing, reserving, and/or valuation.
- Demonstrated ability to work with actuarial or statistical teams in delivering high-quality data and insights.
- Strong problem-solving attitude and comfort with ambiguity in requirements.
- Strong ability to learn technical and business knowledge.
- Outstanding written and verbal communication skills.
- Excellent time and work management skills.
- Able to work in a fast-paced, continuously evolving environment and ready to take up uphill challenges.
- Able to understand cross-cultural differences and work with clients across the globe.
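The loss triangles mentioned in this listing can be sketched in a few lines of pandas. This is a hedged illustration with invented column names and amounts, not data or code from the posting: it pivots claim payments into an accident-year by development-year grid and cumulates across development years.

```python
import pandas as pd

# Hypothetical paid-loss records: accident year, development year, amount paid
claims = pd.DataFrame({
    "accident_year": [2020, 2020, 2020, 2021, 2021, 2022],
    "dev_year":      [1,    2,    3,    1,    2,    1],
    "paid":          [100,  50,   25,   120,  60,   130],
})

# Cumulative loss triangle: rows = accident year, columns = development year.
# Later accident years have fewer developed cells, so the lower-right stays NaN.
triangle = (
    claims.pivot_table(index="accident_year", columns="dev_year",
                       values="paid", aggfunc="sum")
          .cumsum(axis=1)  # cumulate payments across development years
)
print(triangle)
```

In reserving work, development factors would then be estimated from the ratios between adjacent columns; here the point is only the pivot-and-cumulate shape of the data.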

Posted 1 week ago

Apply

3.0 - 8.0 years

6 - 15 Lacs

Chennai

Work from Office


Greetings from Newt Global. Good day!

Job Title: Senior Python Developer
Experience: 2+ years
Location: Chennai (5 days WFO)
Domain: Product development
Notice Period: Immediate to 15 days

Key Responsibilities:
- Strong understanding of Python frameworks (Flask) and libraries (e.g., Pandas, NumPy, scikit-learn).
- Proficiency in SQL and experience with databases like PostgreSQL/Oracle (or other SQL databases).
- Experience with cloud services (AWS, Azure, or GCP) and containerization (Docker, Kubernetes).

Added Advantage:
- Demonstrated problem solving / algorithms / design patterns / hackathons, etc.
- Good to have: machine learning, AI, data science exposure (scikit-learn, TensorFlow, PyTorch).
- Product development experience is preferred.

Qualifications:
- 2+ years of professional experience in Python backend development.
- Experience with RESTful APIs and microservices architecture.
- Bachelor's degree in Computer Science, Computer Engineering, or a related field.

Interested candidates, share your updated CV at elakkiyam@newtglobalcorp.com

Regards,
Elakkiya M
Sr Technical Recruiter | Newt Global
Email: elakkiyam@newtglobalcorp.com
Join my professional network: https://www.linkedin.com/in/lucky157

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Role: AI Engineer
Experience: 5-8 years
Location: Bangalore, India
Type of Employment: Full-time, Hybrid

About Cognizant
Join a rapidly growing consulting and IT services Fortune 500 company with around 350,000 employees worldwide, a very flexible international business, customers that are leaders in their respective sectors, and a high level of commitment.

About CAIL / Evolutionary AI Team (LEAF)
The Cognizant AI Labs were created to pioneer scientific innovation and bridge the gap to commercial applications. The AI Labs collaborate with institutions, academia, and technology partners to develop groundbreaking AI solutions responsibly. The Lab's mission is to maximize human potential with decision AI: a combination of multi-agent architectures, generative AI, deep learning, evolutionary AI, and trustworthy techniques to create sophisticated decision-making systems. Through scientific publications, open-source software, AI for Good projects, and the Cognizant Neuro® AI decisioning platform and Multi-Agent Accelerator, the AI Labs support our goal of improving everyday life.

Your Role
You are an experienced data scientist and Python developer with deep experience in multiple technologies; you are driven, curious, and passionate about your work; you are innovative, creative, and focused on excellence; and you want to be part of an ego-free work environment where we value honest, healthy interactions and collaboration.

Key Responsibilities
- Be a customer advocate implementing our Neuro AI offering.
- Design, develop, and implement advanced AI models, with a focus on generative AI, agents, and foundation models for the enterprise.
- Build and maintain AI pipelines, experimentation frameworks, and reusable components for rapid prototyping and scaling.
- Experiment with multi-agent system frameworks to create context-aware, goal-driven agents integrated into workflows.
- Consistently deliver high-quality software; seek and develop innovative methods and processes; keep up to date on the latest trends, tools, and advancements in AI in the area of evolutionary AI.
- Collaborate with cross-disciplinary teams to conceptualize and develop AI-based solutions and demos.
- Contribute to publication of research and actively engage in open-source communities.
- Assist in preparing internal technical documents, presentations, etc.
- Work in a highly collaborative, fast-paced environment with your peers on the Cognizant AI Labs team.

Your Profile
- Bachelor's/Master's in Computer Science, Data Science, or a related Engineering field, with emphasis on AI/EA, software development, and algorithm design.
- 3+ years of experience in an AI/ML engineering role, preferably in an enterprise, research, or innovation lab setting.
- Demonstrated experience with AI frameworks like LangChain, Hugging Face, OpenAI, Anthropic, etc.
- Strong programming and engineering skills in Python, NumPy, Pandas, Keras, TensorFlow, PyTorch, scikit-learn, Jupyter Notebooks, GitHub, etc.
- Solid understanding of AI agent frameworks and strong interest in agentic AI research.
- Active contributor to GitHub or similar platforms, demonstrating AI projects, open-source contributions, or published research.
- Ability to rapidly work with minimal supervision on projects involving both small and large teams.
- Ability to work autonomously and prioritize multiple tasks.
- Experience with cloud environments for processing big data (AWS, Azure, Google Cloud).
- Deep knowledge of relational, structured, semi-structured, and unstructured data systems, and the experience to know when to use each type of system.
- Familiarity with modern commercial software development: unit tests, code reviews, secure and clean code.
- Excellent verbal and written communication skills in English.

Preferred/Highly Desirable Qualifications
- Experience with RESTful services, JSON and XML metadata, Docker containers.
- Experience with machine learning and evolutionary algorithms a plus.
- Knowledge of and appreciation for open source projects, design patterns, and enterprise architecture patterns.
- Research mindset with experience in publishing research work.
- Experience working in IT service, consulting, or client-facing roles.
- Familiarity with cloud platforms and enterprise AI solution deployment.
- Continuous learner with a demonstrated passion for R&D and long-term thinking.

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site


Job Role: Data Scientist
Experience: 5-8 years
Location: Bangalore, India
Employment Type: Full Time, Hybrid

About Cognizant
Join a rapidly growing consulting and IT services Fortune 500 company with around 350,000 employees worldwide, a very flexible international business, customers that are leaders in their respective sectors, and a high level of commitment.

Cognizant Advanced AI Lab / Neuro AI Team
The Cognizant AI Labs were created to pioneer scientific innovation and bridge the gap to commercial applications. The AI Labs collaborate with institutions, academia, and technology partners to develop groundbreaking AI solutions responsibly. The Lab's mission is to maximize human potential with decision AI: a combination of multi-agent architectures, generative AI, deep learning, evolutionary AI, and trustworthy techniques to create sophisticated decision-making systems. Through scientific publications, open-source software, AI for Good projects, and the Cognizant Neuro® AI decisioning platform and Multi-Agent Accelerator, the AI Labs support our goal of improving everyday life.

Your Role
As a data scientist and software engineer, you will develop the Neuro AI platform and use it on a variety of projects related to optimizing data-driven decision making. You are a data scientist, Python developer, and AI researcher with knowledge of multiple technologies; you are driven, curious, and passionate about your work; you are innovative, creative, and focused on excellence; and you want to be part of an ego-free work environment where we value honest, healthy interactions and collaboration.

Key Responsibilities
- Design, implement, and deploy software applications that analyze datasets, train predictive and prescriptive models, assess uncertainties, and interactively present results to end users.
- Monitor and analyze the performance of software applications and infrastructure.
- Collaborate with cross-functional teams to identify and prioritize business requirements.
- Research, design, and implement novel AI systems to support decision-making processes.
- Work with the research team to publish papers and patents.
- Communicate research findings to both technical and non-technical audiences.
- Provide guidance on our Neuro AI offering and AI best practices.
- Work in a highly collaborative, fast-paced environment with your peers on the Neuro AI platform and research teams.

Your Profile
- PhD or Master's in Data Science, Computer Science, Statistics, Mathematics, Engineering, or a related field.
- 5-8 years of experience in artificial intelligence, machine learning, data science, and software engineering.
- Strong programming skills in Python with Pandas, NumPy, TensorFlow, PyTorch, Jupyter Notebook, GitHub.
- Experience with handling large datasets, data engineering, statistical analysis, and building predictive models.
- Experience developing AI software platforms and tools.
- Knowledge of data visualization tools (e.g., Matplotlib, Tableau).
- Knowledge of generative AI and LLMs is a plus.
- Ability to utilize cloud platforms for data processing and analytics, and to optimize cloud-based solutions for performance, cost, and scalability.
- Strong problem-solving and analytical skills.
- Strong attention to detail and ability to work independently.
- Ability to leverage design thinking, business process optimization, and stakeholder management skills.

Posted 1 week ago

Apply

3.0 - 7.0 years

8 - 17 Lacs

Chennai

Work from Office


Greetings from SwaaS!

Role: ETL Developer
Experience: 5+ years
Location: Guindy, Chennai (on-site)
Immediate joiners preferred

Mandatory Skills:
- 8+ years in ETL development, with 4+ years of hands-on AWS PySpark scripting.
- Strong experience in AWS services: S3, Lambda, SNS, Step Functions.
- Expertise in PySpark and Python (NumPy, Pandas).
- Ability to work independently as an individual contributor.
- Solid understanding of AWS-based data pipelines and solutions.

Good to Have:
- Experience processing large volumes of semi-structured and structured data.
- Familiarity with building data lakes and Delta Lake configurations.
- Knowledge of metadata management, data lineage, and data governance principles.
- Proficiency in cost-efficient computing and building scalable data integration frameworks.
- Experience with MWAA (Managed Workflows for Apache Airflow).

Soft Skills:
- Strong communication skills for engaging with IT and business stakeholders.
- Ability to understand challenges and drive effective data delivery.
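The extract-transform-load pattern this role centers on can be sketched minimally. This is a hypothetical illustration with invented data, shown in pandas rather than PySpark so it runs anywhere; in the AWS setup the listing describes, the extract would read from S3 (e.g. via spark.read) and the load would write Parquet or Delta tables back.

```python
import io
import pandas as pd

# Extract: an inline CSV stands in for a file pulled from S3.
raw_csv = io.StringIO(
    "order_id,region,amount\n"
    "1,south,100\n"
    "2,south,250\n"
    "3,north,\n"    # missing amount, dropped during cleaning below
    "4,north,300\n"
)
orders = pd.read_csv(raw_csv)

# Transform: drop incomplete rows, then aggregate revenue per region.
clean = orders.dropna(subset=["amount"])
summary = clean.groupby("region", as_index=False)["amount"].sum()

# Load: a real pipeline would persist this to S3/Delta; here we just print it.
print(summary.to_dict(orient="records"))
```

The same extract/clean/aggregate/persist stages map one-to-one onto PySpark DataFrame calls (read, dropna, groupBy/agg, write), which is the scripting this listing asks for.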

Posted 1 week ago

Apply

5.0 - 7.0 years

0 Lacs

Chennai, Bengaluru

Hybrid


Role & Responsibilities

Direct Responsibilities
- Understand business requirements from business analysts and users; have the analytical mindset to understand the existing process and propose better solutions.
- Work on TSD designs, development, testing, deployment, and support.
- Suggest and implement innovative approaches; be adaptable to new technology or methodology.

Contributing Responsibilities
- Contribute towards knowledge-sharing initiatives with other team members.
- Contribute to documentation of solutions and configurations of the models.

Technical & Behavioral Competencies (Mandatory)
- 3+ years of experience in Corporate and Institutional Banking IT, with a full understanding of the Corporate Banking and/or Securities Services activity.
- Good understanding of AML monitoring tools and the data needed for AML detection models.
- Good understanding of data analysis and data mapping processes.
- Extensive experience working with functional and technical teams, defining requirements (mainly technical specifications), establishing technical strategies, and leading full life cycle delivery of projects.
- Experience in data warehouse architectural design, providing efficient solutions in Compliance AML data domains.
- Experience in Python development.
- Excellent communication skills with the ability to explain complex technical issues in a simple, concise manner.
- Strong coordination and organizational skills; multi-tasking capabilities.

Posted 1 week ago

Apply

5.0 - 7.0 years

0 Lacs

Mumbai, Chennai, Bengaluru

Hybrid


Role & Responsibilities
- Expertise in Oracle database and SQL programming.
- Proficiency in Python programming.
- Good knowledge of development using Python DataFrames.
- Working experience with PySpark.
- Strong problem-solving skills.
- Excellent communication and collaboration skills.
- Ability to work independently and as part of a team.
- Experience implementing DevOps practices.
- Experience with data modeling and database design.
- Implementation knowledge of EDA, machine learning (models such as linear regression, logistic regression/classification, KNN, SVM, etc.), deep learning (NLP, computer vision, GANs, neural networks), reinforcement learning, statistical learning, and MLOps is an added advantage.
- Knowledge of data visualization.
- Stay up to date with emerging trends and technologies in data engineering and recommend new tools and techniques to improve our data infrastructure.
- Good knowledge of ELK, Scala programming, and Kubernetes/Docker.
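Of the classical models this listing names, KNN is the most compact to show. As a generic sketch (not code from the posting), a k-nearest-neighbours classifier is a few lines of NumPy: compute distances, take the k closest training points, and vote.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points (Euclidean)."""
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every training point
    nearest = np.argsort(dists)[:k]               # indices of the k closest
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]              # majority label

# Toy training data: two well-separated clusters
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                    [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])
y_train = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(X_train, y_train, np.array([0.15, 0.1])))  # -> 0
print(knn_predict(X_train, y_train, np.array([5.05, 5.0])))  # -> 1
```

In practice one would reach for scikit-learn's KNeighborsClassifier, but the brute-force version above is what interviewers for roles like this often expect a candidate to be able to write.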

Posted 1 week ago

Apply

10.0 - 15.0 years

12 - 17 Lacs

Bengaluru

Work from Office


Engineering Manager - Intelligent Automation

We are looking for an enthusiastic RPA & Intelligent Automation professional to join the ranks of our newly founded CoE. Help us meet increasing demand from the business, support our rapidly growing portfolio of automation, and make an impact across every business area at Booking.com. We look at our team as a service provider for the entire company, operating with a large degree of autonomy and entrepreneurship.

B.Responsible
- Naturally oriented towards improving efficiencies.
- Seeking accountability from themselves and others.
- Compassionate collaborator with a deep sense of camaraderie.
- Willingness to be cross-functional, pick up new skills, and cover new ground with/for the team.
- Striving for continuous improvement and high quality in their work.
- Strong work ethic and high spirit.
- Keen to understand and solve real-world problems through technology.

B.Skilled
Technology:
- 10+ years of experience developing in Blue Prism.
- CS, Engineering, or similar university background is a must-have.
- Blue Prism certification is a must-have.
- Knowledge of Blue Prism's architectural/infrastructure components.
- Proficiency in core Python libraries like pandas, NumPy, etc.
- Exposure to AI/ML frameworks like TensorFlow, PyTorch, scikit-learn, etc.
- Understanding of NLP techniques like text summarization, sentiment analysis, and named entity recognition is good to have.
- In-depth understanding of AWS components (RDS, EC2, S3, IAM, CloudWatch, Lambda, SageMaker, VPC) is good to have.
- Experience with VAULT, PASSPORT, GitLab for UAM/config management.
- Exposure to Terraform code for deploying AWS services is good to have.
- Professional experience with SQL, .NET, C#, HTTP APIs, and web services.
- Experience designing, developing, deploying, and maintaining software.
- Experience working in a scrum/agile environment.
- Excellent communication skills in English.

People Leadership:
- Inspire and motivate multiple cross-technology teams.
- Excellent communicator with strong stakeholder management experience, good commercial awareness, and technical vision.
- A humble and thoughtful technology leader who leads by example and gains teammates' respect through actions, not the title.
- Directly manage engineers.
- Lead by example by taking ownership, being proactive, and collaborating.
- Nurture, grow, and develop engineering talent in the team.
- Foster a great culture that innovates, works together as a team, partners with other Booking.com teams and roles, and celebrates unified success.
- Respect the Booking.com values and use them as a guide to the way we work.

Architecture & Product Strategy:
- Contribute to product strategy for automation systems architecture.
- Define, shape, and deliver the roadmap.
- Build new products, processes, and operational plans.
- Negotiate on the strategic importance of own product roadmap features.
- Drive innovation in the team.
- Collaborate with technical leads, architects, product managers, and key individuals.

B.Offered
- Contributing to a high-scale, complex, world-renowned product and seeing the real-time impact of your work.
- Working in a fast-paced and performance-driven culture.
- Career advancement via online and on-the-job training, hackathons, conferences, and active community participation.
- Competitive compensation and benefits package, and some great added perks of working in the home city of Booking.com.

Posted 1 week ago

Apply

4.0 - 6.0 years

7 - 10 Lacs

Ahmedabad, Thaltej

Work from Office

Naukri logo

AI/ML Development: 1. Model Development and Training: - Design, develop, and train machine learning models and algorithms to solve specific business problems. - Implement various machine learning techniques including supervised, unsupervised, and reinforcement learning. 2. Data Preprocessing and Analysis: - Perform data cleaning, transformation, and preprocessing to prepare data for model training. - Analyze and interpret complex data sets to identify patterns and trends. - NLP, nltk and NER experience is preferred. 3. Model Evaluation and Optimization: - Evaluate model performance using appropriate metrics and fine-tune models to improve accuracy and efficiency. - Implement model optimization techniques such as hyperparameter tuning and feature engineering. 4. Deployment and Maintenance: - Deploy machine learning models into production environments, ensuring scalability and reliability. - Monitor and maintain models to ensure they continue to perform well over time. Python Development: Software Development: - Write clean, maintainable, and efficient Python code. - Develop and maintain Python-based applications, APIs (Flask and FastAPI preferred), and microservices. Code Review and Mentorship: - Conduct code reviews to ensure adherence to best practices and coding standards. - Mentor junior developers and provide guidance on best practices and development techniques. Integration and Testing: - Integrate machine learning models with existing systems and applications. - Develop and execute unit tests, integration tests, and end-to-end tests to ensure software quality. Automation and Scripting: - Automate repetitive tasks and workflows using Python scripts. - Develop and maintain automation tools and frameworks. Collaboration and Communication: - Collaborate with data scientists, data engineers, product managers, and other stakeholders to understand requirements and deliver solutions. - Participate in design and architecture discussions to shape the direction of projects. 
Documentation and Reporting: - Document code, processes, and methodologies to ensure knowledge sharing and maintainability. - Communicate findings, progress, and results to stakeholders through reports and presentations. Continuous Learning and Improvement: 1. Stay Updated with Industry Trends: - Keep up to date with the latest developments in AI/ML and Python technologies. - Experiment with new tools, libraries, and frameworks to improve existing solutions and processes. Problem Solving and Innovation: 1. Innovative Solutions: - Identify opportunities to apply AI/ML techniques to solve new and existing problems. - Propose and implement innovative solutions to improve business processes and outcomes. 2. Technical Challenges: - Tackle complex technical challenges and provide effective solutions. - Troubleshoot and resolve issues related to machine learning models and Python applications. Mandatory Skills: Strong proficiency in Python and its libraries (e.g., NumPy, Pandas, scikit-learn, TensorFlow, PyTorch). Hands-on knowledge of NLP, nltk, and NER is required. Experience with data preprocessing, feature engineering, and model evaluation. Knowledge of software development principles, design patterns, and best practices. Excellent problem-solving and analytical skills. Strong communication and collaboration abilities. Ability to work independently and as part of a team. Continuous learning mindset and adaptability to new technologies.
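"Evaluate model performance using appropriate metrics", as the responsibilities above put it, needs no framework at all for binary classification; a minimal pure-Python precision/recall/F1 sketch (the label lists are invented for illustration):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Compute precision, recall and F1 for one positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# 2 true positives, 1 false positive, 1 false negative on this toy data.
p, r, f = precision_recall_f1([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
```

In practice `sklearn.metrics` provides the same computations, but knowing the definitions makes the fine-tuning loop above easier to reason about.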

Posted 1 week ago

Apply

1.0 - 2.0 years

4 - 9 Lacs

Pune

Work from Office


Position overview: We are seeking an ML Engineer with a strong background in Machine Learning, Natural Language Processing (NLP), Generative AI, and Retrieval-Augmented Generation (RAG). The ideal candidate will possess 1+ years of hands-on experience in developing and deploying advanced data-driven solutions. You will play a key role in our AI-CoE team, contributing to cutting-edge projects that drive innovation and business value. A special focus area for this role would be to build AI-enabled products that would result in the creation of monetizable product differentiators for Tata Communications products and services. Detailed job description & Key Responsibilities: Design, develop, and deploy machine learning systems, including Generative AI models and LLMs. Research and implement state-of-the-art ML algorithms and tools. Conduct data preprocessing, feature engineering, and statistical analysis. Train, fine-tune, and optimize machine learning models for performance and accuracy. Collaborate with cross-functional teams, including data scientists, software engineers, and domain experts. Extend existing ML frameworks and libraries to meet project requirements. Stay updated with the latest advancements in machine learning and AI. Skills: Working knowledge of machine learning and deep learning. Strong programming knowledge of Python and SQL, and of commonly used frameworks & tools such as PyTorch, scikit-learn, NumPy, and Gen AI tools like LangChain/LlamaIndex. Working knowledge of MLOps principles and implementing projects with Big Data in batch and streaming mode - must have. Experience in handling databases (SQL and NoSQL). Exposure to MLflow, Kubeflow, and Git CI/CD. Experience with containerization tools like Docker and orchestration tools like Kubernetes. Excellent problem-solving skills and a proactive attitude. Strong communication and teamwork abilities. Ability to manage multiple projects and meet deadlines. Interview will involve a coding test.
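The retrieval half of the RAG pipelines named above reduces to scoring documents against a query and handing the top hits to the generator as context; a toy sketch using bag-of-words cosine similarity (real systems would use dense embeddings and a vector store, and all documents here are made up):

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    qv = Counter(query.lower().split())
    scored = sorted(docs, key=lambda d: cosine(qv, Counter(d.lower().split())), reverse=True)
    return scored[:k]

docs = [
    "reset your password from the account settings page",
    "our office is closed on public holidays",
    "invoices are emailed at the start of each month",
]
# In a full RAG pipeline, `context` would be prepended to the LLM prompt.
context = retrieve("how do i reset my password", docs, k=1)
```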

Posted 1 week ago

Apply


1.0 - 2.0 years

4 - 9 Lacs

Pune

Work from Office


Position overview: We are seeking a Data Scientist with a strong background in Machine Learning, Natural Language Processing (NLP), Generative AI, and Retrieval-Augmented Generation (RAG). The ideal candidate will possess 1+ years of hands-on experience in developing and deploying advanced data-driven solutions. You will play a key role in our AI-CoE team, contributing to cutting-edge projects that drive innovation and business value. A special focus area for this role would be to build AI-enabled products that would result in the creation of monetizable product differentiators. Detailed job description & Key Responsibilities: Develop, test, and deploy machine learning models for various business and Telco use cases. Perform data preprocessing, feature engineering and ML/DL model evaluation. Optimize and fine-tune models for performance and scalability. Good understanding of NLP concepts and projects involving entity recognition, text classification, and language modelling with models like GPT/Llama/Claude/Grok. Build and refine RAG models to improve information retrieval and answer generation systems. Integrate RAG methods into existing applications to enhance data accessibility and user experience. Work closely with cross-functional teams including software engineers, product managers, and domain experts. Communicate technical concepts to non-technical stakeholders effectively. Document processes, methodologies, and model development for internal and external stakeholders. Skills: Strong knowledge of probability and statistics. Working knowledge of machine learning and deep learning. Strong programming knowledge of Python and SQL, and of commonly used frameworks & tools such as PyTorch, scikit-learn, NumPy, and Gen AI tools like LangChain/LlamaIndex. Working knowledge of MLOps principles and implementing projects with Big Data in batch and streaming mode. Excellent problem-solving skills and a proactive attitude. Strong communication and teamwork abilities.
Ability to manage multiple projects and meet deadlines. Interview will involve coding tests.
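Text classification, one of the NLP project types listed above, can be illustrated end to end with a tiny multinomial Naive Bayes over word counts (the training sentences and labels are invented; production work would use scikit-learn or a transformer):

```python
import math
from collections import Counter, defaultdict

def train_nb(samples):
    """samples: list of (text, label). Returns class priors and per-class word counts."""
    priors, counts = Counter(), defaultdict(Counter)
    for text, label in samples:
        priors[label] += 1
        counts[label].update(text.lower().split())
    return priors, counts

def classify(text, priors, counts, alpha=1.0):
    """Pick the label with the highest Laplace-smoothed log-likelihood."""
    vocab = {w for c in counts.values() for w in c}
    total = sum(priors.values())
    best, best_lp = None, -math.inf
    for label in priors:
        lp = math.log(priors[label] / total)
        denom = sum(counts[label].values()) + alpha * len(vocab)
        for w in text.lower().split():
            lp += math.log((counts[label][w] + alpha) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

samples = [
    ("network outage in the data center", "incident"),
    ("server down and packet loss reported", "incident"),
    ("please renew my software license", "request"),
    ("new laptop request for a joiner", "request"),
]
priors, counts = train_nb(samples)
label = classify("packet loss in the network", priors, counts)
```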

Posted 1 week ago

Apply


5.0 - 7.0 years

20 - 30 Lacs

Chennai

Work from Office


Key responsibilities: Be an expert in Python, with solid working knowledge of Python web frameworks such as Django and Flask, and libraries such as Pandas and NumPy. Good familiarity with ORM (Object Relational Mapper) libraries. Ability to integrate multiple data sources and databases into one system. Understanding of the threading limitations of Python and multi-process architecture. Good understanding of server-side templating languages such as Jinja2, Mako, etc. Basic understanding of front-end technologies, such as JavaScript, HTML5, and CSS3. Understanding of accessibility and security compliance. Good knowledge of user authentication and authorization between multiple systems, servers and environments. Understanding of fundamental design principles behind a scalable application. Familiarity with event-driven programming in Python. Understanding the differences between multiple delivery platforms, such as mobile vs. desktop, and optimizing output to match the specific platform. Able to create database schemas that represent and support business processes. Strong unit testing and debugging skills. Proficient understanding of code versioning tools such as Git, Mercurial or SVN. Requirements: Strong proficiency in Python and experience with web frameworks such as Django or Flask. Solid understanding of ORM libraries, server-side templating (e.g., Jinja2), and REST API development. Experience integrating multiple data sources, working with Pandas, NumPy, and managing relational databases. Good knowledge of authentication, authorization, and general security best practices in web applications. Familiarity with Docker, cloud platforms (AWS or Azure), and message queues like Kafka or SQS. Proficient in unit testing, debugging, and using version control tools such as Git. Excellent communication skills and the ability to collaborate effectively in remote, Agile teams.
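Server-side templating with Jinja2, called out above, separates page layout from Python data; a minimal sketch (assumes the `jinja2` package is installed, and the order fields are hypothetical):

```python
from jinja2 import Template

# Render a hypothetical order-summary fragment from plain Python data.
template = Template(
    "Order {{ order_id }}: "
    "{% for item in items %}{{ item }}{% if not loop.last %}, {% endif %}{% endfor %}"
)
rendered = template.render(order_id=42, items=["pen", "notebook"])
```

In Django or Flask the same idea is wired up through the framework's template loader rather than instantiating `Template` directly.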

Posted 1 week ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Bengaluru

Work from Office


Strong programming skills in Python and advanced SQL. Strong experience with NumPy, Pandas, and DataFrames. Strong analytical and problem-solving skills. Excellent communication and collaboration abilities.
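The Pandas/DataFrame and advanced-SQL skills above overlap directly: a SQL join maps onto `DataFrame.merge`; a small illustrative sketch (both tables are invented):

```python
import pandas as pd

orders = pd.DataFrame({"order_id": [1, 2, 3], "cust_id": [10, 20, 10]})
customers = pd.DataFrame({"cust_id": [10, 20], "name": ["Asha", "Ravi"]})

# Equivalent of: SELECT o.order_id, c.name
#                FROM orders o JOIN customers c USING (cust_id)
joined = orders.merge(customers, on="cust_id", how="inner")

# Equivalent of a GROUP BY with COUNT(*): orders per customer name.
per_customer = joined.groupby("name")["order_id"].count()
```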

Posted 1 week ago

Apply

5.0 - 10.0 years

8 - 13 Lacs

Bengaluru

Work from Office


At Boeing, we innovate and collaborate to make the world a better place. We’re committed to fostering an environment for every teammate that’s welcoming, respectful and inclusive, with great opportunity for professional growth. Find your future with us. Overview As a leading global aerospace company, Boeing develops, manufactures and services commercial airplanes, defense products and space systems for customers in more than 150 countries. As a top U.S. exporter, the company leverages the talents of a global supplier base to advance economic opportunity, sustainability and community impact. Boeing’s team is committed to innovating for the future, leading with sustainability, and cultivating a culture based on the company’s core values of safety, quality and integrity. Technology for today and tomorrow The Boeing India Engineering & Technology Center (BIETC) is a 5500+ engineering workforce that contributes to global aerospace growth. Our engineers deliver cutting-edge R&D, innovation, and high-quality engineering work in global markets, and leverage new-age technologies such as AI/ML, IIoT, Cloud, Model-Based Engineering, and Additive Manufacturing, shaping the future of aerospace. People-driven culture At Boeing, we believe creativity and innovation thrives when every employee is trusted, empowered, and has the flexibility to choose, grow, learn, and explore. We offer variable arrangements depending upon business and customer needs, and professional pursuits that offer greater flexibility in the way our people work. We also believe that collaboration, frequent team engagements, and face-to-face meetings bring together different perspectives and thoughts – enabling every voice to be heard and every perspective to be respected. No matter where or how our teammates work, we are committed to positively shaping people’s careers and being thoughtful about employee wellbeing. 
Position Overview: Boeing Test and Evaluation team is currently looking for an Associate BI Analyst to join their team in Bengaluru, KA. BI Analysts at Boeing make sure that products at the world's largest aerospace company continue to meet the highest standards. From quality and reliability to safety and performance, their expertise is vital to the concept, design and certification of a wide variety of commercial and military systems. Position Responsibilities Develop and maintain high-quality Python code for data analysis and data visualization. Utilize Tableau to create interactive and insightful dashboards and reports for data-driven decision-making. Collaborate closely with data scientists and business analysts to understand requirements and translate them into effective software and visualization solutions. Participate in code reviews, ensuring adherence to best practices and standards. Optimize existing algorithms and systems for improved performance and scalability. Contribute to the integration of machine learning models into production systems. Troubleshoot and resolve issues related to data quality, performance, and visualization. Stay abreast of new technologies and methodologies in software development, data science, and business intelligence. Document software developments and maintain software documentation. Prepare data for analysis, including cleansing, conditioning, transforming, handling missing fields, identifying new feature variables, and handling multivariate data. Monitor production and deployment. Prepare decision support visualizations and reports, algorithms, models, dashboards, and/or tools. Support the development of software applications integrated with insights obtained from data science and business analysis activities. Employer will not sponsor applicants for employment visa status. 
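The data-preparation responsibilities above (cleansing, conditioning, handling missing fields) typically reduce to a few idiomatic pandas calls; a hedged sketch on an invented sensor table:

```python
import pandas as pd

readings = pd.DataFrame({
    "sensor": ["a", "a", "b", "b"],
    "value": [1.0, None, 3.0, None],
})

# Impute each sensor's missing readings with that sensor's own mean,
# so gaps in one channel don't borrow values from another.
readings["value"] = readings.groupby("sensor")["value"].transform(
    lambda s: s.fillna(s.mean())
)
```

The cleaned frame can then feed feature engineering or a Tableau extract directly.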
Basic Qualifications (Required Skills/Experience) A Bachelor's degree or higher in Computer Science, Engineering, Business Analytics, or a related field is required. Minimum of 5 years of professional experience in Python development and Tableau. Position requires hands-on experience working with SQL, Python, R, data modeling, and Tableau. Solid understanding of software development principles and lifecycle. Familiarity with data structures, algorithms, system design, and business intelligence concepts. Experience with Python libraries such as NumPy, Pandas, Matplotlib, and Scikit-learn. Knowledge of version control systems, preferably Git. Strong problem-solving skills and ability to work in a team environment. Excellent verbal and written communication skills. Preferred Qualifications (Desired Skills/Experience) Candidate must be a self-starter with a positive attitude, high ethics, and a track record of working independently in developing analytics solutions. Must be able to work collaboratively with very strong teaming skills. Must be willing to work flexible hours (early or late as needed) to interface with Boeing personnel around the world. Develop and maintain relationships / partnerships with customers, stakeholders, peers, and partners to develop collaborative plans and execute on projects. Proactively seek information and direction to successfully complete the statement of work. Demonstrate strong written, verbal and interpersonal communication skills. Be fluent in written and spoken English, and have a high degree of proficiency with MS Office tools to prepare comprehensive reports, presentations, proposals, and Statements of Work. Preferred experience in handling engineering data sets such as component failure data, engineering production process data, engineering test data, time series data, etc. Preferred experience in deploying data science solutions on cloud platforms like PCF, GCP, etc. Experience in C# Language or ReactJS is a huge plus. 
Typical Education & Experience Bachelor's or Master's degree in Computer Science/Engineering (Software / Instrumentation / Electronics / Electrical / Mechanical or equivalent discipline) Relocation This position offers relocation based on candidate eligibility within India. Applications for this position will be accepted until Jun. 06, 2025 Export Control This is not an Export Control position. Education Bachelor's Degree or Equivalent Required Relocation This position offers relocation based on candidate eligibility. Visa Sponsorship Employer will not sponsor applicants for employment visa status. Shift Not a Shift Worker (India) Equal Opportunity Employer: We have teams in more than 65 countries, and each person plays a role in helping us become one of the world's most innovative, diverse and inclusive companies. We are proud members of the Valuable 500 and welcome applications from candidates with disabilities. Applicants are encouraged to share with our recruitment team any accommodations required during the recruitment process. Accommodations may include but are not limited to conducting interviews in accessible locations that accommodate mobility needs, encouraging candidates to bring and use any existing assistive technology such as screen readers, and offering flexible interview formats such as virtual or phone interviews.

Posted 1 week ago

Apply

9.0 - 14.0 years

17 - 32 Lacs

Bengaluru

Remote


Job Title: Senior Backend Engineer Python & Microservices (IC Role) Experience Required: 10+ years Industry: SaaS / Energy / Mobility / Cloud Infrastructure Location: Remote Shift Time: 5 hours overlap with PST timings About the Role: We are seeking an experienced and hands-on Senior Backend Engineer with strong expertise in Python, microservices, and API architecture to help build scalable cloud-native systems. In this individual contributor (IC) role, you will take ownership of backend service design and implementation for large-scale enterprise platforms. You'll work closely with product managers, DevOps engineers, and cross-functional developers to ship performant, production-grade systems. Key Responsibilities Architect, design, and implement distributed, microservice-based applications using Python and cloud-native tools. Build and scale RESTful APIs, async jobs, background schedulers, and data pipelines for high-volume systems. Lead complex PoC initiatives, system architecture discussions, and design reviews. Create and optimize NoSQL and SQL data models (MongoDB, DynamoDB, PostgreSQL, ClickHouse). Design highly available services and implement robust logging, monitoring, and alerting using tools like CloudWatch, Grafana, and Datadog. Collaborate on CI/CD pipelines and cloud infrastructure automation using Terraform, GitHub Actions, or Jenkins. Ensure security, scalability, and fault-tolerance in backend implementations. Contribute to internal documentation, architecture diagrams, and technical knowledge sharing. Take full lifecycle ownership of the services you build, from design to deployment to debugging in production. 
Requirements 10+ years of professional software development experience, with a focus on backend systems Deep hands-on experience with Python and related frameworks (e.g., Flask, FastAPI, Django) Proven expertise in microservices architecture, containerization (Docker, Kubernetes), and cloud-native app development (AWS preferred) Strong understanding of API design, rate limiting, secure auth (OAuth2), and best practices Experience with message queues and event-driven systems (Kafka, SQS, RabbitMQ) Strong working knowledge of both SQL and NoSQL databases (PostgreSQL, MongoDB, DynamoDB) Familiar with DevOps tools and pipelines: GitHub Actions, Jenkins, Terraform, CloudFormation Strong communication skills and ability to work in fast-paced, distributed teams Bonus: Experience with AI/ML integrations, ticketing systems (Zendesk), or chat platforms (Openfire) Preferred Qualifications Bachelor's or Master's degree in Computer Science, Engineering, or a related field Certifications in System Design or Cloud Architecture Experience contributing to large-scale digital transformations or enterprise platform rewrites Interested candidates can also share their CV at akanksha.s@esolglobal.com
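Rate limiting, listed among the API-design expectations above, is often implemented as a token bucket; a self-contained sketch with an injectable clock so the behavior is deterministic (capacity and rate values are illustrative, not tied to any framework):

```python
import time

class TokenBucket:
    """Allow up to `capacity` requests, refilled at `rate` tokens per second."""

    def __init__(self, capacity: float, rate: float, now=time.monotonic):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity  # start full
        self.now = now          # clock is injectable for testing
        self.last = now()

    def allow(self) -> bool:
        t = self.now()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# With a frozen fake clock there is no refill: 2 tokens allow exactly 2 requests.
clock = iter([0.0, 0.0, 0.0, 0.0]).__next__
bucket = TokenBucket(capacity=2, rate=1.0, now=clock)
results = [bucket.allow(), bucket.allow(), bucket.allow()]
```

In a real service this check would sit in API-gateway middleware keyed per client, often backed by Redis rather than process memory.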

Posted 1 week ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Gurugram

Work from Office


About the Role: Grade Level (for internal use): 11 The Team As a member of the Data Transformation team you will work on building ML-powered products and capabilities to power natural language understanding, data extraction, information retrieval and data sourcing solutions for S&P Global Market Intelligence and our clients. You will spearhead development of production-ready AI products and pipelines while leading by example in a highly engaging work environment. You will work in a (truly) global team and be encouraged to take thoughtful risks and show self-initiative. The Impact The Data Transformation team has already delivered breakthrough products and significant business value over the last 3 years. In this role you will be developing our next generation of new products while enhancing existing ones, aimed at solving high-impact business problems. What's in it for you Be a part of a global company and build solutions at enterprise scale Collaborate with a highly skilled and technically strong team Contribute to solving high-complexity, high-impact problems Key Responsibilities Design, develop, and deploy ML-powered products and pipelines Play a central role in all stages of the data science project life cycle, including: Identification of suitable data science project opportunities Partnering with business leaders, domain experts, and end-users to gain business understanding, data understanding, and collect requirements Evaluation/interpretation of results and presentation to business leaders Performing exploratory data analysis, proof-of-concept modelling, model benchmarking and setting up model validation experiments Training large models both for experimentation and production Develop production-ready pipelines for enterprise-scale projects Perform code reviews & optimization for your projects and team Spearhead deployment and model scaling strategies Stakeholder management and representing the team in front of our leadership Leading and mentoring by example including 
project scrums What We're Looking For 7+ years of professional experience in the Data Science domain Expertise in Python (NumPy, Pandas, spaCy, scikit-learn, PyTorch/TF2, HuggingFace, etc.) Experience with SOTA models related to NLP and expertise in text matching techniques, including sentence transformers, word embeddings, and similarity measures Expertise in probabilistic machine learning models for classification, regression & clustering Strong experience in feature engineering, data preprocessing, and building machine learning models for large datasets. Exposure to Information Retrieval, Web scraping and Data Extraction at scale OOP design patterns, Test-Driven Development and Enterprise System design SQL (any variant, bonus if this is a big data variant) Linux OS (e.g. bash toolset and other utilities) Version control system experience with Git, GitHub, or Azure DevOps. Problem-solving and debugging skills Software craftsmanship, adherence to Agile principles and taking pride in writing good code Techniques to communicate change to non-technical people Nice to have Prior work to show on GitHub, Kaggle, StackOverflow, etc. Cloud expertise (AWS and GCP preferably) Expertise in deploying machine learning models in cloud environments Familiarity with LLMs What's In It For You Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology – the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. 
Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. 
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference. For more information on benefits by country visit https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)
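Text matching with "sentence transformers, word embeddings, and similarity measures", as the requirements above put it, ultimately comes down to comparing vectors; a minimal cosine-similarity sketch with NumPy (the 4-dimensional vectors are made-up stand-ins for real sentence embeddings):

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two dense vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings; real ones would come from a sentence-transformer model.
query = np.array([1.0, 0.0, 1.0, 0.0])
candidates = {
    "company earnings rose": np.array([0.9, 0.1, 0.8, 0.0]),
    "weather was sunny": np.array([0.0, 1.0, 0.0, 1.0]),
}
best = max(candidates, key=lambda k: cosine_sim(query, candidates[k]))
```

At scale the same argmax is delegated to an approximate-nearest-neighbor index rather than a Python loop.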

Posted 1 week ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Gurugram

Work from Office


About the Role: Grade Level (for internal use): 10

The Team: As a member of the Data Transformation team you will work on building ML-powered products and capabilities to power natural language understanding, data extraction, information retrieval, and data sourcing solutions for S&P Global Market Intelligence and our clients. You will spearhead development of production-ready AI products and pipelines while leading by example in a highly engaging work environment. You will work in a (truly) global team and be encouraged to take thoughtful risks and show self-initiative.

The Impact: The Data Transformation team has delivered breakthrough products and significant business value over the last three years. In this role you will develop our next generation of products while enhancing existing ones, aimed at solving high-impact business problems.

What's in it for you: Be a part of a global company and build solutions at enterprise scale. Collaborate with a highly skilled and technically strong team. Contribute to solving high-complexity, high-impact problems.

Key Responsibilities: Design, develop, and deploy ML-powered products and pipelines. Play a central role in all stages of the data science project life cycle, including identifying suitable data science project opportunities; partnering with business leaders, domain experts, and end-users to gain business and data understanding and collect requirements; and evaluating, interpreting, and presenting results to business leaders. Perform exploratory data analysis, proof-of-concept modelling, model benchmarking, and model validation experiments. Train large models for both experimentation and production. Develop production-ready pipelines for enterprise-scale projects. Perform code reviews and optimization for your projects and team. Spearhead deployment and model scaling strategies. Manage stakeholders and represent the team in front of our leadership. Lead and mentor by example, including project scrums.

What We're Looking For: 3+ years of professional experience in the data science domain. Expertise in Python (NumPy, Pandas, spaCy, scikit-learn, PyTorch/TF2, Hugging Face, etc.). Experience with state-of-the-art NLP models and expertise in text matching techniques, including sentence transformers, word embeddings, and similarity measures. Expertise in probabilistic machine learning models for classification, regression, and clustering. Strong experience in feature engineering, data preprocessing, and building machine learning models for large datasets. Exposure to information retrieval, web scraping, and data extraction at scale. OOP design patterns, test-driven development, and enterprise system design. SQL (any variant; bonus if it is a big data variant). Linux OS (e.g. the bash toolset and other utilities). Version control experience with Git, GitHub, or Azure DevOps. Problem-solving and debugging skills. Software craftsmanship, adherence to Agile principles, and taking pride in writing good code. Techniques to communicate change to non-technical people.

Nice to have: Prior work to show on GitHub, Kaggle, StackOverflow, etc. Cloud expertise (AWS and GCP preferably). Expertise in deploying machine learning models in cloud environments. Familiarity with LLMs.

What's In It For You: Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective.
20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote


Data Science Intern (Remote)

Company: Coreline Solutions. Location: Remote / Pune, India. Duration: 3 to 6 months. Stipend: unpaid (full-time offer based on performance). Work Mode: Remote.

About Coreline Solutions: We're a tech and consulting company focused on digital transformation, custom software development, and data-driven solutions.

Role Overview: We're looking for a Data Science Intern to work on real-world data projects involving analytics, modeling, and business insights. This is a great opportunity for students or freshers to gain practical experience in the data science domain.

Key Responsibilities: Collect, clean, and analyze large datasets using Python, SQL, and Excel. Develop predictive and statistical models using libraries like scikit-learn or statsmodels. Visualize data and present insights using tools like Matplotlib, Seaborn, or Power BI. Support business teams with data-driven recommendations. Collaborate with data analysts, ML engineers, and developers.

Requirements: Pursuing or completed degree in Data Science, Statistics, CS, or a related field. Proficient in Python with a basic understanding of machine learning. Familiarity with data handling tools (Pandas, NumPy) and SQL. Good analytical and problem-solving skills.

Perks: Internship certificate. Letter of recommendation (top performers). Mentorship and real-time project experience. Potential full-time role.

To Apply: Email your resume to hr@corelinesolutions.site with the subject "Application for Data Science Intern – [Your Full Name]".

Posted 1 week ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Description: Machine Learning / Data Science Engineer / Data Scientist

Location: Pune. Experience Required: 3–6 years. Type: Full-Time. Education: BTech / MTech / MSc / PhD in Computer Science, Data Science, Applied Mathematics, Statistics, or a related field.

About Anervea.ai: Anervea.ai is building a next-generation intelligence stack for the pharmaceutical industry. Our products help commercial, clinical, and medical affairs teams make smarter decisions, faster. From predicting the success of clinical trials and decoding competitor movement to surfacing real-time KOL signals and automating HCP engagement, our platform powers strategic decision-making at scale. We're not a services firm; we're a product-first, AI-native company solving real problems using applied machine learning, generative AI, and life sciences data. Our clients include major US and EU pharma companies, and our team is a mix of engineers, researchers, and life science domain experts. We're looking for ML engineers and data scientists who are passionate about learning, driven to build usable solutions, and ready to push boundaries.

Role Overview: As an ML / Data Science Engineer at Anervea, you'll design, train, deploy, and maintain machine learning models across multiple products. You'll build models that predict clinical trial outcomes, extract insights from structured and unstructured healthcare data, and support real-time scoring for sales or market access use cases. You'll collaborate closely with AI engineers, backend developers, and product owners to translate data into product features that are explainable, reliable, and impactful.
Key Responsibilities: Develop and optimize predictive models using algorithms such as XGBoost, Random Forest, Logistic Regression, and ensemble methods. Engineer features from real-world healthcare data (clinical trials, treatment adoption, medical events, digital behavior). Analyze datasets from sources like ClinicalTrials.gov, PubMed, Komodo, Apollo.io, and internal survey pipelines. Build end-to-end ML pipelines for inference and batch scoring. Collaborate with AI engineers to integrate LLM-generated features with traditional models. Ensure explainability and robustness of models using SHAP, LIME, or custom logic. Validate models against real-world outcomes and client feedback. Prepare clean, structured datasets using SQL and Pandas. Communicate insights clearly to product, business, and domain teams. Document all processes, assumptions, and model outputs thoroughly.

Technical Skills Required: Strong programming skills in Python (NumPy, Pandas, scikit-learn, XGBoost, LightGBM). Experience with statistical modeling and classification algorithms. Solid understanding of feature engineering, model evaluation, and validation techniques. Exposure to real-world healthcare, trial, or patient data (strong bonus). Comfort working with unstructured data and data cleaning techniques. Knowledge of SQL and NoSQL databases. Familiarity with ML lifecycle tools (MLflow, Airflow, or similar). Bonus: experience working alongside LLMs or incorporating generative features into ML. Bonus: knowledge of NLP preprocessing, embeddings, or vector similarity methods.

Personal Attributes: Strong analytical and problem-solving mindset. Ability to convert abstract questions into measurable models. Attention to detail and high standards for model quality. Willingness to learn life sciences concepts relevant to each use case. Clear communicator who can simplify complexity for product and business teams. Independent learner who actively follows new trends in ML and data science. Reliable, accountable, and driven by outcomes, not just code.

Bonus Qualities: Experience building models for healthcare, pharma, or biotech. Published work or open-source contributions in data science. Strong business intuition on how to turn models into product decisions.

Posted 1 week ago

Apply

6.0 - 11.0 years

8 - 13 Lacs

Mumbai

Hybrid


Job type: Contract to Hire

Job Description: Python data-side developer (Pandas, NumPy, SQL, data pipelines, etc.) with 5-7 years of experience as a Python Developer and the skills below: Snowflake exposure; building APIs using Python; microservices, API Gateway, and authentication with OAuth2 & mTLS; web service development; unit testing and test-driven development; multi-tier web or desktop application development experience; application containers (Docker); Linux experience and Python virtual environments. Tools: Eclipse IDE/IntelliJ, Git, Jira.

Posted 1 week ago

Apply

Exploring numpy Jobs in India

NumPy is a widely used Python library for numerical computing and data analysis. In India, demand for professionals with NumPy expertise is growing, and job seekers in this field can find opportunities across various industries. Let's explore the NumPy job market in India in more detail.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Gurgaon
  5. Chennai

Average Salary Range

The average salary range for NumPy professionals in India varies by experience level:

  • Entry-level: INR 4-6 lakhs per annum
  • Mid-level: INR 8-12 lakhs per annum
  • Experienced: INR 15-20 lakhs per annum

Career Path

A career in NumPy typically progresses as follows:

  1. Junior Developer
  2. Data Analyst
  3. Data Scientist
  4. Senior Data Scientist
  5. Tech Lead

Related Skills

In addition to NumPy, professionals in this field are often expected to have knowledge of:

  • Pandas
  • Scikit-learn
  • Matplotlib
  • Data visualization
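These libraries are typically used together. A minimal sketch of that interplay (the column name and values here are invented for illustration), showing a NumPy array feeding a Pandas DataFrame:

```python
import numpy as np
import pandas as pd

# Hypothetical sensor readings: a NumPy array slots directly into a pandas column
readings = np.random.default_rng(seed=0).normal(loc=25.0, scale=2.0, size=100)
df = pd.DataFrame({"temp_c": readings})

# pandas delegates the numeric work back to NumPy under the hood
print(df["temp_c"].mean())
print(df["temp_c"].to_numpy().max())
```

Matplotlib accepts the same arrays directly (e.g. `plt.plot(readings)`), which is why these skills are usually listed side by side.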

Interview Questions

  • What is numpy and why is it used? (basic)
  • Explain the difference between a Python list and a numpy array. (basic)
  • How can you create a numpy array with all zeros? (basic)
  • What is broadcasting in numpy? (medium)
  • How can you perform element-wise multiplication of two numpy arrays? (medium)
  • Explain the use of the np.where() function in numpy. (medium)
  • What is vectorization in numpy? (advanced)
  • How does memory management work in numpy arrays? (advanced)
  • Describe the difference between np.array and np.matrix in numpy. (advanced)
  • How can you speed up numpy operations? (advanced)
  • ...

Closing Remark

As you explore job opportunities in the field of numpy in India, remember to keep honing your skills and stay updated with the latest developments in the industry. By preparing thoroughly and applying confidently, you can land the numpy job of your dreams!
