Home
Jobs

2323 NumPy Jobs - Page 41

Filter
Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a Job Alert
JobPe aggregates job listings for easy access; you apply directly on the original job portal.

5.0 years

3 - 8 Lacs

Hyderābād

On-site

Hyderabad, Telangana, India
Category: Data Science | Hire Type: Employee | Job ID: 8753 | Date posted: 02/24/2025

We Are:
At Synopsys, we drive the innovations that shape the way we live and connect. Our technology is central to the Era of Pervasive Intelligence, from self-driving cars to learning machines. We lead in chip design, verification, and IP integration, empowering the creation of high-performance silicon chips and software content. Join us to transform the future through continuous technological innovation.

You Are:
As a Data Science Staff member located in Hyderabad, you are a visionary with a passion for data engineering and analytics. You thrive in dynamic environments and are motivated by the challenge of building robust data infrastructure. Your expertise in data modeling, algorithm development, and data pipeline construction is complemented by your ability to derive actionable insights from complex datasets. You possess a deep understanding of modern data stack tools and have hands-on experience with cloud data warehouses, transformation tools, and data ingestion technologies. Your technical acumen is matched by your ability to collaborate effectively with cross-functional teams, providing support and guidance to business users. You stay ahead of the curve by continuously exploring advancements in AI, Generative AI, and machine learning, seeking opportunities to integrate these innovations into your work. Your commitment to best practices in data management and your proficiency in various scripting languages and visualization tools make you an invaluable asset to our team.

What You’ll Be Doing:
- Building the data engineering and analytics infrastructure for our new Enterprise Data Platform using Snowflake and Fivetran.
- Leading the development of data models, algorithms, data pipelines, and insights to enable data-driven decision-making.
- Collaborating with team members to shape the design and direction of the data platform.
- Working end-to-end on data products, from problem understanding to developing data pipelines, dimensional data models, and visualizations.
- Providing support and advice to business users, including data preparation for predictive and prescriptive modeling.
- Ensuring consistency of processes and championing best practices in data management.
- Evaluating and recommending new data tools or processes.
- Designing, developing, and deploying scalable AI/Generative AI and machine learning models as needed.
- Providing day-to-day production support to internal business unit customers, implementing enhancements and resolving defects.
- Maintaining awareness of emerging trends in AI, Generative AI, and machine learning to enhance existing systems and develop innovative solutions.

The Impact You Will Have:
- Driving the development of a cutting-edge data platform that supports enterprise-wide data initiatives.
- Enabling data-driven decision-making across the organization through robust data models and insights.
- Enhancing the efficiency and effectiveness of data management processes.
- Supporting business users in leveraging data for predictive and prescriptive analytics.
- Innovating and integrating advanced AI and machine learning solutions to solve complex business challenges.
- Contributing to the overall success of Synopsys by ensuring high-quality data infrastructure and analytics capabilities.

What You’ll Need:
- BS with 5+ years of relevant experience, or MS with 3+ years of relevant experience, in Computer Science, Mathematics, Engineering, or MIS.
- 5 years of experience in DW/BI development, reporting, and analytics roles, working with business and key stakeholders.
- Advanced knowledge of data warehousing, SQL, ETL/ELT, dimensional modeling, and databases (e.g., MySQL, Postgres, HANA).
- Hands-on experience with modern data stack tools, including cloud data warehouses (Snowflake), transformation tools (dbt), and cloud providers (Azure, AWS).
- Experience with data ingestion tools (e.g., Fivetran, HVR, Airbyte), CI/CD (GitLab, Kubernetes, Airflow), and data catalog tools (e.g., DataHub, Atlan) is a plus.
- Proficiency in scripting languages such as Python, Unix shell, SQL, Scala, and Java for data extraction and exploration.
- Experience with visualization tools like Tableau and Power BI is a plus.
- Knowledge of machine learning frameworks and libraries (e.g., Pandas, NumPy, TensorFlow, PyTorch) and LLMs is a plus.
- Understanding of data governance, data integrity, and data quality best practices.
- Experience with agile development methodologies and change control processes.

Who You Are:
You are a collaborative and innovative problem-solver with a strong technical background. Your ability to communicate effectively with diverse teams and stakeholders is complemented by your analytical mindset and attention to detail. You are proactive, continuously seeking opportunities to leverage new technologies and methodologies to drive improvements. You thrive in a fast-paced environment and are committed to delivering high-quality solutions that meet business needs.

The Team You’ll Be A Part Of:
You will join the Business Applications team, a dynamic group focused on building and maintaining the data infrastructure that powers our enterprise-wide analytics and decision-making capabilities. The team is dedicated to innovation, collaboration, and excellence, working together to drive the success of Synopsys through cutting-edge data solutions.

Rewards and Benefits:
We offer a comprehensive range of health, wellness, and financial benefits to cater to your needs. Our total rewards include both monetary and non-monetary offerings. Your recruiter will provide more details about the salary range and benefits during the hiring process. At Synopsys, we want talented people of every background to feel valued and supported to do their best work.
Synopsys considers all applicants for employment without regard to race, color, religion, national origin, gender, sexual orientation, age, military veteran status, or disability.

Posted 2 weeks ago

Apply

0 years

2 - 4 Lacs

Hyderābād

On-site

Location: Hyderabad, IN
Employment type: Employee
Place of work: Office
Offshore/Onshore: Onshore

TechnipFMC is committed to driving real change in the energy industry. Our ambition is to build a sustainable future through relentless innovation and global collaboration – and we want you to be part of it. You’ll be joining a culture that values curiosity, expertise, and ideas as well as diversity, inclusion, and authenticity. Bring your unique energy to our team of more than 20,000 people worldwide, and discover a rewarding, fulfilling, and varied career that you can take anywhere you want to go.

Job Purpose
We are seeking a skilled Python Developer to join our team and help us develop applications and tooling to streamline in-house engineering design processes, with a continuous concern for quality, targets, and customer satisfaction.

Job Description
1. Write clean and maintainable Python code following PEP style guidelines
2. Build and maintain software packages for scientific computing
3. Build and maintain command-line interfaces (CLIs)
4. Build and maintain web applications and dashboards
5. Design and implement data analysis pipelines
6. Create and maintain database schemas and queries
7. Optimise code performance and scalability
8. Develop and maintain automated tests to validate software
9. Contribute and adhere to team software development practices, e.g., Agile product management, source code version control, continuous integration/deployment (CI/CD)
10. Build and maintain machine learning models (appreciated, but not a prerequisite)

Technical Stack
1. Languages: Python, SQL
2. Core libraries: SciPy, Pandas, NumPy
3. Web frameworks: Streamlit, Dash, Flask
4. Visualisation: Matplotlib, Seaborn, Plotly
5. Automated testing: pytest
6. CLI development: Click, argparse
7. Source code version control: Git
8. Agile product management: Azure DevOps, GitHub
9. CI/CD: Azure Pipelines, GitHub Actions, Docker
10. Database systems: PostgreSQL, Snowflake, SQLite, HDF5
11. Performance: Numba, Dask
12. Machine learning: scikit-learn, TensorFlow, PyTorch (desired)

You are meant for this job if:
• Bachelor's degree in computer science or software engineering
• Master's degree is a plus
• Strong technical basis in engineering
• Presentation skills
• Good organizational and problem-solving skills
• Service/customer oriented
• Ability to work in a team-oriented environment
• Good command of English

Skills
Spring Boot, Data Modelling, CI/CD, Internet of Things (IoT), Jira/Confluence, React/Angular, SAFe, Scrum, Kanban, Collaboration, SQL, Bash/Shell/PowerShell, AWS S3, AWS Lambda, Cypress/Playwright, Material Design, Empirical Thinking, Agility, GitHub, HTML/CSS, JavaScript/TypeScript, GraphQL, Continuous Learning, Cybersecurity, Computer Programming, Java/Kotlin, Test-Driven Development

Being a global leader in the energy industry requires an inclusive and diverse environment. TechnipFMC promotes diversity, equity, and inclusion by ensuring equal opportunities to all ages, races, ethnicities, religions, sexual orientations, gender expressions, disabilities, and all other pluralities. We celebrate who you are and what you bring. Every voice matters, and we encourage you to add to our culture. TechnipFMC respects the rights and dignity of those it works with and promotes adherence to internationally recognized human rights principles for those in its value chain.

Date posted: Jun 2, 2025
Requisition number: 13580
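The stack above lists Click and argparse for CLI development. Below is a minimal sketch of what an in-house engineering CLI could look like using the stdlib argparse; the tool name (`pipecalc`), its arguments, and the flow-rate formula are illustrative assumptions, not taken from the posting.

```python
import argparse
import math

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical CLI for a toy pipe-sizing calculation.
    parser = argparse.ArgumentParser(
        prog="pipecalc", description="Toy pipeline flow-rate tool")
    parser.add_argument("diameter", type=float, help="pipe inner diameter in metres")
    parser.add_argument("velocity", type=float, help="flow velocity in m/s")
    parser.add_argument("--round", dest="ndigits", type=int, default=4,
                        help="decimal places for the result")
    return parser

def flow_rate(diameter: float, velocity: float) -> float:
    """Volumetric flow rate Q = A * v for a circular cross-section."""
    area = math.pi * (diameter / 2.0) ** 2
    return area * velocity

def main(argv=None) -> float:
    args = build_parser().parse_args(argv)
    q = round(flow_rate(args.diameter, args.velocity), args.ndigits)
    print(f"Q = {q} m^3/s")
    return q
```

Run as `pipecalc 1.0 2.0` this prints `Q = 1.5708 m^3/s`; argparse generates `--help` output and argument validation for free, which is why it appears alongside Click in stacks like this one.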

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Description
- Experience with SonarQube, CI/CD, Tekton, Terraform, GCS, GCP Looker, Google Cloud Build, Cloud Run, Vertex AI, Airflow, TensorFlow, etc.
- Experience training, building, and deploying ML and DL models
- Experience with Hugging Face, Chainlit, React
- Ability to understand the technical, functional, non-functional, and security aspects of business requirements and deliver them end-to-end
- Ability to adapt quickly to open-source products and tools to integrate with ML platforms
- Building and deploying models (scikit-learn, DataRobot, TensorFlow, PyTorch, etc.)
- Developing and deploying in on-prem and cloud environments: Kubernetes, Tekton, OpenShift, Terraform, Vertex AI
- Experience with LLMs such as PaLM, GPT-4, and Mistral (open-source models)
- Working through the complete lifecycle of GenAI model development, from training and testing to deployment and performance monitoring
- Developing and maintaining AI pipelines with multiple modalities (text, image, audio, etc.)
- Real-world experience implementing chatbots or conversational agents at scale, handling different data sources
- Experience developing image generation/translation tools using latent diffusion models such as Stable Diffusion or InstructPix2Pix
- Expertise in handling large-scale structured and unstructured data; efficient handling of large-scale generative AI datasets and outputs
- Familiarity with Docker tools and pipenv/conda/poetry environments
- Comfort following Python project management best practices (use of setup.py, logging, pytest, relative module imports, Sphinx docs, etc.)
- Familiarity with GitHub (clone, fetch, pull/push, raising issues and PRs, etc.)
- High familiarity with DL theory and practice in NLP applications
- Comfort coding with Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, scikit-learn, NumPy, and Pandas
- Comfort using two or more open-source NLP modules such as spaCy, TorchText, fastai.text, farm-haystack, and others
- Knowledge of fundamental text data processing (e.g., use of regex, token/word analysis, spelling correction and noise reduction in text, segmenting noisy or unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.)
- Real-world experience fine-tuning BERT or other transformer models (sequence classification, NER, or QA), from data preparation and model creation through inference and deployment
- Use of GCP services such as BigQuery, Cloud Functions, Cloud Run, Cloud Build, and Vertex AI
- Good working knowledge of other open-source packages for benchmarking and deriving summaries
- Experience using GPU/CPU on cloud and on-prem infrastructure
- Skillset to leverage cloud platforms for data engineering, big data, and ML needs
- Use of Docker (experience with experimental Docker features, docker-compose, etc.)
- Familiarity with orchestration tools such as Airflow and Kubeflow
- Experience with CI/CD and infrastructure-as-code tools like Terraform
- Kubernetes or another containerization tool, with experience in Helm, Argo Workflows, etc.
- Ability to develop APIs with compliant, ethical, secure, and safe AI tooling
- Good UI skills to visualize and build better applications using Gradio, Dash, Streamlit, React, Django, etc.
- Deeper understanding of JavaScript, CSS, Angular, HTML, etc., is a plus

Responsibilities
- Design NLP/LLM/GenAI applications and products by following robust coding practices
- Explore state-of-the-art models and techniques so they can be applied to automotive industry use cases
- Conduct ML experiments to train and infer models; if need be, build models that abide by memory and latency restrictions
- Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes
- Showcase NLP/LLM/GenAI applications to users in the best way possible through web frameworks (Dash, Plotly, Streamlit, etc.)
- Converge multiple bots into super apps using LLMs with multiple modalities
- Develop agentic workflows using AutoGen, Agent Builder, and LangGraph
- Build modular AI/ML products that can be consumed at scale

Qualifications
Education: Bachelor’s or Master’s degree in Computer Science, Engineering, Maths, or Science. Completion of modern NLP/LLM courses or participation in open competitions is also welcome.
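The requirements above mention fundamental text data processing with regex and token/word analysis. A minimal stdlib sketch of that kind of preprocessing follows; the specific cleaning rules are illustrative assumptions, not a production normaliser.

```python
import re
from collections import Counter

def clean_text(text: str) -> str:
    # Lowercase, strip URLs, drop punctuation/symbols, collapse whitespace.
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)   # remove URLs
    text = re.sub(r"[^a-z0-9\s]", " ", text)    # remove non-alphanumeric noise
    return re.sub(r"\s+", " ", text).strip()    # normalise whitespace

def token_counts(text: str) -> Counter:
    # Whitespace tokenisation plus frequency analysis on the cleaned text.
    return Counter(clean_text(text).split())
```

In practice this sits in front of spaCy or a transformer tokenizer; the point is that noisy segments are normalised before any model sees them.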

Posted 2 weeks ago

Apply

1.0 years

3 - 6 Lacs

Gurgaon

On-site

Location: Gurugram
Shift Timings: Rotational shifts

Job Description
Testing data to ensure all programming instructions and directives have been implemented. Downloading, checking, and formatting interim and final data for review and delivery in different formats. Programming data validation scripts using Python.

Skills Required
Knowledge of Python, NumPy, and Pandas (must). Should be available during day and night shifts (US hours) and over weekends / extended hours, if required. Ability to handle multiple projects and to prioritize, identify, and solve problems individually.

Qualifications and Experience
Min. 1 year of current, hands-on experience programming in Python. Experience/familiarity with programming, data validation/cleansing, and basic statistical concepts.
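The role centres on programming data validation scripts with Python and Pandas. Below is a hedged sketch of what such a script could look like; the `(min, max)` rule format and the column names are hypothetical, not from the posting.

```python
import pandas as pd

def validate(df: pd.DataFrame, rules: dict) -> pd.DataFrame:
    """Return the rows that violate simple range rules.

    `rules` maps column name -> (lo, hi); missing values are also
    treated as violations. This is a minimal illustration, not a
    full validation framework.
    """
    bad = pd.Series(False, index=df.index)
    for col, (lo, hi) in rules.items():
        bad |= df[col].isna() | (df[col] < lo) | (df[col] > hi)
    return df[bad]
```

A real script would typically also report which rule fired per row and write the violations out for review before delivery.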

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Mohali

On-site

Chicmic Studios

Job Role: Data Scientist
Experience Required: 3+ Years
Skills Required: Data Science, Python, Pandas, Matplotlib

Job Description:
We are seeking a Data Scientist with strong expertise in data analysis, machine learning, and visualization. The ideal candidate should be proficient in Python, Pandas, and Matplotlib, with experience in building and optimizing data-driven models. Some experience with Natural Language Processing (NLP) and Named Entity Recognition (NER) models would be a plus.

Roles & Duties:
- Analyze and process large datasets using Python and Pandas.
- Develop and optimize machine learning models for predictive analytics.
- Create data visualizations using Matplotlib and Seaborn to support decision-making.
- Perform data cleaning, feature engineering, and statistical analysis.
- Work with structured and unstructured data to extract meaningful insights.
- Implement and fine-tune NER models for specific use cases (if required).
- Collaborate with cross-functional teams to drive data-driven solutions.

Required Skills & Qualifications:
- Strong proficiency in Python and data science libraries (Pandas, NumPy, Scikit-learn, etc.).
- Experience in data analysis, statistical modeling, and machine learning.
- Hands-on expertise in data visualization using Matplotlib and Seaborn.
- Understanding of SQL and database querying.
- Familiarity with NLP techniques and NER models is a plus.
- Strong problem-solving and analytical skills.

Contact: 9875952836
Office Address: F273, Phase 8B Industrial Area, Mohali, Punjab
Job Type: Full-time
Schedule: Day shift, Monday to Friday
Work Location: In person
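Among the duties above is feature engineering ahead of statistical analysis. One small, commonly used step is z-score standardisation, sketched here with NumPy; the function name and sample data are illustrative.

```python
import numpy as np

def zscore(x):
    # Standardise a numeric feature to zero mean and unit variance,
    # a routine pre-modelling step so features share a common scale.
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()
```

Scaled features like this feed directly into distance-based models (k-means, kNN) and gradient-based optimisers, which are sensitive to raw feature magnitudes.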

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site


Job Description: Vice President, Data Management & Quantitative Analysis I

At BNY, our culture empowers you to grow and succeed. As a leading global financial services company at the center of the world’s financial system, we touch nearly 20% of the world’s investible assets. Every day around the globe, our 50,000+ employees bring the power of their perspective to the table to create solutions with our clients that benefit businesses, communities, and people everywhere. We continue to be a leader in the industry, awarded as a top home for innovators and for creating an inclusive workplace. Through our unique ideas and talents, together we help make money work for the world. This is what it’s all about.

We’re seeking a future team member for the role of Vice President I to join our Data Management & Quantitative Analysis team. This role is located in Pune, MH or Chennai, TN (Hybrid).

In this role, you’ll make an impact in the following ways:
BNY Data Analytics Reporting and Transformation (“DART”) has grown rapidly, and today it represents a highly motivated and engaged team of skilled professionals with expertise in financial industry practices, reporting, analytics, and regulation. The team works closely with various groups across BNY to support the firm’s Capital Adequacy, Counterparty Credit, and Enterprise Risk modelling and data analytics, alongside support for the annual Comprehensive Capital Analysis and Review (CCAR) Stress Test. The Counterparty Credit Risk Data Analytics Team within DART designs and develops data-driven solutions aimed at strengthening the control framework around our risk metrics and reporting. For this team, we are looking for a Counterparty Risk Analytics Developer to support our Counterparty Credit Risk control framework.

- Develop analytical tools using SQL and Python to drive business insights
- Utilize outlier detection methodologies to identify data anomalies in the financial risk space, ensuring proactive risk management
- Analyze business requirements and translate them into practical solutions, developing data-driven controls to mitigate potential risks
- Plan and execute projects from concept to final implementation, demonstrating strong project management skills
- Present solutions to senior stakeholders, effectively communicating technical concepts and results
- Collaborate with internal and external auditors and regulators to ensure compliance with prescribed standards, maintaining the highest level of integrity and transparency

To be successful in this role, we’re seeking the following:
- A Bachelor's degree in Engineering, Computer Science, Data Science, or a related discipline (Master's degree preferred)
- At least 3 years of experience in a similar role or in Python development/data analytics
- Strong proficiency in Python (including data analytics and data visualization libraries) and SQL; basic knowledge of HTML and Flask
- Ability to partner with technology and other stakeholders to ensure effective functional requirements, design, construction, and testing
- Knowledge of financial risk concepts and financial markets is strongly preferred
- Familiarity with outlier detection techniques (including the autoencoder method, random forests, etc.), clustering (k-means, etc.), and time series analysis (ARIMA, EWMA, GARCH, etc.) is a plus
- Practical experience working with Python (Pandas, NumPy, Matplotlib, Plotly, Dash, scikit-learn, TensorFlow, PyTorch, Dask, CUDA)
- Intermediate SQL skills (including querying data, joins, table creation, and basic performance optimization techniques)
- Strong project management skills
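The role calls for outlier detection methodologies to flag anomalies in risk data. As a minimal baseline sketch, here is the classic Tukey IQR fence; this is an assumed simple baseline for illustration, not the autoencoder or clustering approaches the posting also names.

```python
import numpy as np

def iqr_outliers(values, k: float = 1.5):
    """Return points outside [Q1 - k*IQR, Q3 + k*IQR].

    k=1.5 is the conventional Tukey fence; larger k flags only
    more extreme points.
    """
    v = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(v, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return v[(v < lo) | (v > hi)]
```

A rule like this is easy to explain to auditors and stakeholders, which is often why such baselines sit alongside model-based detectors in a control framework.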

Posted 2 weeks ago

Apply

2.0 years

0 - 0 Lacs

India

On-site

Job Description:
We are seeking a knowledgeable and enthusiastic Python Trainer to join our training team. The ideal candidate should have a strong background in Python programming and experience in teaching or mentoring. The trainer will be responsible for delivering high-quality, hands-on training to individuals or corporate learners and helping them become proficient in Python for software development, data analysis, automation, or machine learning.

Roles and Responsibilities:
- Conduct in-person or online training sessions on Python for beginners through advanced learners.
- Teach core Python topics: variables, data types, control structures, functions, OOP, file handling, and exception handling.
- Introduce advanced Python topics such as modules and packages, decorators, generators, regular expressions, multithreading, and Pythonic best practices.
- Deliver domain-focused tracks:
  - Web development (using Flask or Django)
  - Data science (using Pandas, NumPy, Matplotlib)
  - Machine learning (using Scikit-learn, TensorFlow, etc.), if applicable
  - Automation/scripting, for system admins or DevOps learners
- Design and develop curriculum, lesson plans, coding assignments, and real-world projects.
- Evaluate learners through quizzes, coding exercises, and project submissions.
- Provide clear and constructive feedback and mentoring to help students improve.
- Stay updated with the latest developments in Python and related technologies.
- Support learners in job interview preparation, coding challenges, and project presentations.
- Collaborate with other trainers and curriculum designers for continuous course enhancement.

Requirements:
- Bachelor’s degree in Computer Science, IT, or a related field.
- 2+ years of hands-on experience in Python programming and/or technical training.
- Strong understanding of Python fundamentals and problem-solving skills.
- Familiarity with at least one domain where Python is widely used (e.g., web, data science, automation).
- Excellent verbal and written communication skills.
- Ability to teach complex concepts in a clear, simple, and engaging manner.

Preferred Qualifications:
- Experience with Jupyter Notebooks, version control (Git), and cloud platforms (AWS, GCP, or Azure).
- Python certifications (e.g., PCEP, PCAP) are a plus.
- Experience in mentoring, teaching, or content creation for Python courses.
- Familiarity with LMS platforms and online teaching tools.

Job Types: Part-time, Freelance
Contract length: 56 months
Pay: ₹8,086.00 - ₹39,610.27 per month
Expected hours: 18 per week
Language: English (Preferred)
Work Location: In person
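The advanced topics above include decorators and generators. A small self-contained example of the kind a trainer might walk through (the names and the timing use case are illustrative):

```python
import functools
import time

def timed(fn):
    # A minimal decorator: wraps fn and records its last runtime
    # as an attribute, while functools.wraps preserves fn's metadata.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        wrapper.last_runtime = time.perf_counter() - start
        return result
    return wrapper

def fibonacci():
    # A generator: yields Fibonacci numbers lazily, one per next() call,
    # without ever materialising the full sequence.
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

@timed
def first_n_fib(n):
    gen = fibonacci()
    return [next(gen) for _ in range(n)]
```

Together these show two core ideas: decorators add behaviour around a function without editing it, and generators produce values on demand from an otherwise infinite sequence.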

Posted 2 weeks ago

Apply

1.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Scope
Core responsibilities include the development and demonstration of applications with the BY suite of products; in-depth knowledge of demand forecast generation, analysis, and review of results; and identification and showcasing of various business use cases and results in simple terms.

Our Current Technical Environment
Software: Python, PL/SQL, MS Excel, MS PowerPoint, and visualization tools like Sigma / Power BI

What You’ll Do
You will be solving data science problems in the retail domain. Some of the interesting problems we are solving include forecasting demand by fine-tuning various ML-based and statistical models, finding price sensitivity and other causals to use effectively in modelling, and order generation at stores and distribution centers. We also collaborate with other teams who work in the supply chain space. Upon getting data from customers, we understand the data through analysis, clean and transform it as needed, create features, perform KPI analysis, and deliver projects with business use cases and stories. Python is what we mostly use for all the aforementioned activities; visualization through Python or tools like Sigma / Power BI helps.

What We Are Looking For
- BE / BTech or above with 1 to 4 years of experience in data analysis and the basics of modelling and time series forecasting.
- Must be proficient in Python: able to use pandas, NumPy, scikit-learn, and other scientific libraries effectively and efficiently.
- Expert in data analysis: exploring data and presenting it in a meaningful way.
- Must have basic knowledge of modelling to generate time series forecasts and of how to tune various parameters and hyper-parameters efficiently.
- Strong knowledge of one database (Oracle or SQL Server) and SQL.
- Some knowledge of or experience with the core principles and methodologies of supply chain management is an added advantage.
- Knowledge of visualization tools such as Power BI or Python packages is an added advantage.
- Good understanding of the basics of statistics and probability.
- Strong analytical and problem-solving skills.
- Passion for learning new tools, languages, and frameworks.
- Excellent verbal and written communication skills.

Our Values
If you want to know the heart of a company, take a look at their values. Ours unite us. They are what drive our success – and the success of our customers. Does your heart beat like ours? Find out here: Core Values

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
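The posting asks for basic time series forecasting and an understanding of tuning parameters and hyper-parameters. As a minimal baseline sketch, simple exponential smoothing in plain Python, where `alpha` is the smoothing hyper-parameter; this is an illustrative baseline, not the team's actual demand models.

```python
def ses_forecast(series, alpha=0.5):
    """Simple exponential smoothing, one-step-ahead forecast.

    Each new level is a weighted blend of the latest observation and
    the previous level: level = alpha*y + (1 - alpha)*level.
    Higher alpha reacts faster to recent demand; lower alpha smooths more.
    """
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level  # forecast for the next period
```

In practice, alpha would be chosen by minimising forecast error on a holdout window, which is exactly the kind of hyper-parameter tuning the role describes.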

Posted 2 weeks ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


- Good understanding of Python.
- Experience with at least one Python web framework (e.g., Django, Flask, FastAPI).
- Good exposure to Python scientific libraries (NumPy, Pandas, TensorFlow).
- Strong knowledge of data structures and designing for performance, scalability, and availability.
- Knowledge of MongoDB and web services.
- Experience with microservices and big data technologies will be a plus.
- Good grasp of algorithms, memory management, and multithreaded programming.
- Good to have: MySQL, Redis, Elasticsearch.
- Able to fit in well within an informal startup environment and to provide hands-on management.
- High energy level and untiring commitment to drive oneself and the team towards goals.
- Basic understanding of HTML, CSS, JavaScript, jQuery, and JS libraries.
- Implementing SOAP-based and RESTful services.
- UNIX/Linux experience is an added advantage.
- Should have experience with databases and SQL.
- Good understanding of server-side templating languages such as Jinja2, Mako, etc.
- Strong unit testing and debugging skills.
- Proficient understanding of code versioning tools (such as Git, Mercurial, or SVN).

Responsibilities:
- Write reusable, testable, and efficient Python code.
- Design and implement low-latency, high-availability, and performant applications.
- Integrate user-facing elements developed by front-end developers with server-side logic.
- Implement security and data protection.
- Integrate data storage solutions.
- Performance tuning, improvement, balancing, usability, and automation.
- Work collaboratively with the design team to understand end-user requirements, provide technical solutions, and implement new software features.
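The requirements above mention memory management and multithreaded programming. A minimal sketch of lock-based thread safety with the stdlib `threading` module (the class and function names are illustrative); without the lock, concurrent `+=` on a shared attribute can lose updates.

```python
import threading

class SafeCounter:
    # Minimal illustration of lock-based thread safety: `+=` on a plain
    # attribute is a read-modify-write sequence, not an atomic operation.
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self, n=1):
        with self._lock:  # only one thread mutates value at a time
            self.value += n

def hammer(counter, times):
    for _ in range(times):
        counter.increment()
```

With eight threads each calling `hammer(counter, 1000)`, the final count is reliably 8000; the lock serialises the critical section at the cost of some contention.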

Posted 2 weeks ago

Apply

1.0 - 4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Scope
Core responsibilities include the development and demonstration of applications with the BY suite of products; in-depth knowledge of demand forecast generation, analysis, and review of results; and identification and showcasing of various business use cases and results in simple terms.

Our Current Technical Environment
Software: Python, PL/SQL, MS Excel, MS PowerPoint, and visualization tools like Sigma / Power BI

What You’ll Do
You will be solving data science problems in the retail domain. Some of the interesting problems we are solving include forecasting demand by fine-tuning various ML-based and statistical models, finding price sensitivity and other causals to use effectively in modelling, and order generation at stores and distribution centers. We also collaborate with other teams who work in the supply chain space. Upon getting data from customers, we understand the data through analysis, clean and transform it as needed, create features, perform KPI analysis, and deliver projects with business use cases and stories. Python is what we mostly use for all the aforementioned activities; visualization through Python or tools like Sigma / Power BI helps.

What We Are Looking For
- BE / BTech or above with 1 to 4 years of experience in data analysis and the basics of modelling and time series forecasting.
- Must be proficient in Python: able to use pandas, NumPy, scikit-learn, and other scientific libraries effectively and efficiently.
- Expert in data analysis: exploring data and presenting it in a meaningful way.
- Must have basic knowledge of modelling to generate time series forecasts and of how to tune various parameters and hyper-parameters efficiently.
- Strong knowledge of one database (Oracle or SQL Server) and SQL.
- Some knowledge of or experience with the core principles and methodologies of supply chain management is an added advantage.
- Knowledge of visualization tools such as Power BI or Python packages is an added advantage.
- Good understanding of the basics of statistics and probability.
- Strong analytical and problem-solving skills.
- Passion for learning new tools, languages, and frameworks.
- Excellent verbal and written communication skills.

Our Values
If you want to know the heart of a company, take a look at their values. Ours unite us. They are what drive our success – and the success of our customers. Does your heart beat like ours? Find out here: Core Values

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Company Description
icogz is an AI-powered Business Intelligence platform that transforms enterprise data into proactive, contextual insights. At its core is Aryabot, a neuro-symbolic, agentic intelligence engine that combines fine-tuned LLMs, Retrieval-Augmented Generation (RAG), and proprietary reasoning to deliver explainable, actionable recommendations across the organization.

Role: Data Scientist
Location: Mumbai (on-site)
Type: Full-time

Requirements:
- 7-12 years in data science, analytics, or BI consulting, with substantial experience in modeling
- Must-have hands-on experience in:
  - Machine learning: classical ML algorithms and their use in solving business problems
  - NLP: text processing, sentiment analysis, topic modeling
  - Deep learning: setting up and tuning deep neural networks
  - LLMs: working with LLMs, structured response outputs, and agentic systems
- Python/R (pandas, NumPy, scikit-learn) and SQL fluency
- Data visualization and storytelling skills
- Proven ability to frame complex business problems as data science/ML solutions
- Strong data cleansing, QC, and algorithm validation skills
- Excellent written and verbal English; self-starter attitude in a fast-paced startup

What You’ll Do
- Analyze client datasets and design ML/NLP pipelines to solve business problems
- Build, validate, and deploy analytics products within our ecosystem
- Own data QC and algorithm checks, and lead User Acceptance Testing
- Collaborate with Product, Engineering, and Client Solutions to convert insights into dashboards and playbooks
- Mentor junior analysts via workshops on ML, data storytelling, and emerging AI trends

Benefits:
- Competitive compensation
- Learning & development
- ESOPs: employees will be eligible for equity through our ESOP, allowing them to share in the company’s long-term growth and success

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site


At BNY, our culture empowers you to grow and succeed. As a leading global financial services company at the center of the world’s financial system, we touch nearly 20% of the world’s investible assets. Every day around the globe, our 50,000+ employees bring the power of their perspective to the table to create solutions with our clients that benefit businesses, communities and people everywhere. We continue to be a leader in the industry, awarded as a top home for innovators and for creating an inclusive workplace. Through our unique ideas and talents, together we help make money work for the world. This is what we’re all about. We’re seeking a future team member for the role of Vice President I to join our Data Management & Quantitative Analysis team. This role is located in Pune, MH or Chennai, TN (Hybrid). In this role, you’ll make an impact in the following ways: BNY Data Analytics Reporting and Transformation (“DART”) has grown rapidly and today it represents a highly motivated and engaged team of skilled professionals with expertise in financial industry practices, reporting, analytics, and regulation. The team works closely with various groups across BNY to support the firm’s Capital Adequacy, Counterparty Credit and Enterprise Risk modelling and data analytics, alongside support for the annual Comprehensive Capital Analysis and Review (CCAR) Stress Test. The Counterparty Credit Risk Data Analytics Team within DART designs and develops data-driven solutions aimed at strengthening the control framework around our risk metrics and reporting. For this team, we are looking for a Counterparty Risk Analytics Developer to support our Counterparty Credit Risk control framework.
Develop analytical tools using SQL & Python to drive business insights Utilize outlier detection methodologies to identify data anomalies in the financial risk space, ensuring proactive risk management Analyze business requirements and translate them into practical solutions, developing data-driven controls to mitigate potential risks Plan and execute projects from concept to final implementation, demonstrating strong project management skills Present solutions to senior stakeholders, effectively communicating technical concepts and results Collaborate with internal and external auditors and regulators to ensure compliance with prescribed standards, maintaining the highest level of integrity and transparency. To be successful in this role, we’re seeking the following: A Bachelor's degree in Engineering, Computer Science, Data Science, or a related discipline (Master's degree preferred) At least 3 years of experience in a similar role or in Python development/data analytics Strong proficiency in Python (including data analytics and data visualization libraries) and SQL; basic knowledge of HTML and Flask Ability to partner with technology and other stakeholders to ensure effective functional requirements, design, construction, and testing Knowledge of financial risk concepts and financial markets is strongly preferred Familiarity with outlier detection techniques (including the autoencoder method, random forest, etc.), clustering (k-means, etc.), and time series analysis (ARIMA, EWMA, GARCH, etc.) is a plus Practical experience working with Python (Pandas, NumPy, Matplotlib, Plotly, Dash, Scikit-learn, TensorFlow, PyTorch, Dask, CUDA) Intermediate SQL skills (including querying data, joins, table creation, and basic performance optimization techniques) Knowledge of financial risk concepts and financial markets Knowledge of outlier detection techniques, clustering, and time series analysis Strong project management skills
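The outlier-detection work this role describes can be illustrated with a minimal scikit-learn sketch. The data, column names, and contamination setting below are invented for the example, not BNY's actual controls:

```python
# Hedged sketch: flag anomalous exposure values with IsolationForest
# (synthetic data; the 1% contamination threshold is illustrative).
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
exposures = pd.DataFrame({
    "counterparty": [f"CP{i:03d}" for i in range(200)],
    "exposure_musd": rng.normal(50, 5, 200),
})
# Inject two obvious anomalies of the kind a control framework should catch
exposures.loc[10, "exposure_musd"] = 500.0
exposures.loc[99, "exposure_musd"] = -300.0

model = IsolationForest(contamination=0.01, random_state=0)
exposures["flag"] = model.fit_predict(exposures[["exposure_musd"]])  # -1 = outlier

outliers = exposures[exposures["flag"] == -1]
print(outliers[["counterparty", "exposure_musd"]])
```

In practice the same pattern extends to multivariate features; the single-column frame here just keeps the sketch readable.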

Posted 2 weeks ago

Apply

1.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


About BNP Paribas Group BNP Paribas is a top-ranking bank in Europe with an international profile. It operates in 71 countries and has almost 199,000 employees. The Group ranks highly in its three core areas of activity: Domestic Markets and International Financial Services (whose retail banking networks and financial services are grouped together under Retail Banking & Services) and Corporate & Institutional Banking, centred on corporate and institutional clients. The Group helps all of its clients (retail, associations, businesses, SMEs, large corporates and institutional) to implement their projects by providing them with services in financing, investment, savings and protection. In its Corporate & Institutional Banking and International Financial Services activities, BNP Paribas enjoys leading positions in Europe, a strong presence in the Americas and a solid and fast-growing network in the Asia/Pacific region. About BNP Paribas India Solutions Established in 2005, BNP Paribas India Solutions is a wholly owned subsidiary of BNP Paribas SA, a leading bank in Europe with an international reach. With delivery centers located in Bengaluru, Chennai and Mumbai, we are a 24x7 global delivery center. India Solutions services three business lines: Corporate and Institutional Banking, Investment Solutions and Retail Banking for BNP Paribas across the Group. Driving innovation and growth, we are harnessing the potential of over 6,000 employees to provide support and develop best-in-class solutions. About Business Line/Function GM Data & AI Lab leverages the power of Machine Learning and Deep Learning to drive innovation in various business lines. Our primary goal is to harness the potential of vast amounts of structured and unstructured data to improve our services and provide value. Today we are a team of 40+ Data Scientists based in Paris, London, Frankfurt, Lisbon, New York, Singapore and Mumbai.
Job Title: Data Scientist (Tabular & Text) Department: Front Office Support Location: Mumbai Business Line / Function: Global Markets – Data & AI Lab Number of Direct Reports: NA Directorship / Registration: NA Position Purpose Your work will span multiple areas, including predictive modelling, automation and process optimization. We use AI to discover patterns, classify information, and predict likelihoods. Our team works on building, refining, testing, and deploying these models to support various business use cases, ultimately driving business value and innovation. As a Data Scientist on our team, you can expect to work on challenging projects, collaborate with stakeholders to identify business problems, and have the opportunity to learn and grow with our team. A typical day may involve working on model development, meeting with stakeholders to discuss project requirements/updates, and brainstorming/debugging with colleagues on various technical aspects. At the Lab, we're passionate about staying at the forefront of AI research, bridging the gap between research & industry to drive innovation and to make a real impact on our businesses. Responsibilities Develop and maintain AI models from inception to deployment, including data collection, analysis, feature engineering, model development, evaluation, and monitoring. Identify areas for model improvement through independent research and analysis, and develop recommendations for updates and enhancements. Work with expert colleagues and business representatives to examine the results and keep models grounded in reality. Document each step of the development and inform decision makers by presenting them options and results. Ensure the integrity and security of data. Provide support for production models delivered by the Mumbai team, and potentially for other models, in any of the Asian/EU/US time zones.
Technical & Behavioral Competencies Qualifications: Bachelor's / Master's / PhD degree in Computer Science / Data Science / Mathematics / Statistics / relevant STEM field. Knowledge of key concepts in Statistics and Mathematics such as statistical methods for machine learning, probability theory and linear algebra. Experience with Machine Learning & Deep Learning concepts including data representations, neural network architectures, custom loss functions. Proven track record of building AI models from scratch or fine-tuning large models for tabular and/or textual data. Programming skills in Python and knowledge of common numerical and machine-learning packages (like NumPy, scikit-learn, pandas, PyTorch, transformers, LangChain). Ability to write clear and concise code in Python. Intellectually curious and willing to learn challenging concepts daily. Knowledge of current Machine Learning/Artificial Intelligence literature. Skills Referential Behavioural Skills: Ability to collaborate / Teamwork Critical thinking Communication skills - oral & written Attention to detail / rigor Transversal Skills Analytical Ability Education Level Bachelor Degree or equivalent Experience Level At least 1 year
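As a hedged illustration of the "custom loss functions" this posting lists, here is a sketch kept in plain NumPy so it stays self-contained: fitting a line by gradient descent under a Huber loss, which is less sensitive to outliers than squared error. All data values are synthetic:

```python
# Hedged sketch: gradient descent with a custom (Huber) loss in NumPy.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 3.0 * x + 1.0 + rng.normal(0, 0.5, 100)
y[::17] += 30.0                       # a few gross outliers

def huber_grad(r, delta=1.0):
    """Gradient of the Huber loss w.r.t. the residual r (clipped at +/- delta)."""
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    r = (w * x + b) - y               # residuals
    g = huber_grad(r)
    w -= lr * np.mean(g * x)          # chain rule through w*x + b
    b -= lr * np.mean(g)

print(round(w, 2), round(b, 2))       # roughly the true slope 3 and intercept 1
```

The clipped gradient is what makes the fit robust: the outliers contribute a bounded pull instead of a squared one.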

Posted 2 weeks ago

Apply

2.0 - 4.0 years

4 - 7 Lacs

Chennai

Work from Office


Job Title: Python Full Stack Developer Location: Velachery, Chennai (Global Healthcare Billing Partners Pvt. Ltd.) About the Role: We are currently looking for a talented and motivated Python Developer with 2+ years of experience to join our dynamic team. The ideal candidate will have strong skills in Python development and a passion for working on innovative projects in the healthcare billing industry. Role and Responsibilities: Develop and maintain Python-based applications and solutions for various projects, with a focus on automation, data manipulation, and web development. Design and implement scalable and efficient software solutions. Work on data integration, automation scripts, and API development. Contribute to machine learning models and analytics to enhance decision-making processes. Required Skills and Qualifications: Bachelor's or Master's degree in Computer Science, IT, or a related field. 2+ years of hands-on experience in Python development. Strong understanding and experience in web development using Python frameworks (Django, Flask, etc.). Experience in automation tools and scripting for data extraction, transformation, and loading (ETL). Proficiency in data manipulation and working with databases (SQL, NoSQL). Knowledge of machine learning algorithms and libraries (e.g., scikit-learn, TensorFlow, etc.). Good understanding of software engineering principles and design patterns. Collaborate with cross-functional teams to understand project requirements and provide technical solutions. Interested candidates can share their Resume/CV via WhatsApp: 8925808592 Regards, Harini S HR Department
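The ETL scripting this role asks about can be sketched with only the standard library: extract rows, transform them, load into SQLite. The table and field names are made up for illustration:

```python
# Hedged sketch of an extract-transform-load step using stdlib sqlite3.
import sqlite3

raw_claims = [
    {"id": 1, "amount": "1200.50", "status": " paid "},
    {"id": 2, "amount": "300.00",  "status": "PENDING"},
    {"id": 3, "amount": "815.25",  "status": "Paid"},
]

def transform(row):
    # Normalise types and casing before loading
    return (row["id"], float(row["amount"]), row["status"].strip().lower())

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (id INTEGER PRIMARY KEY, amount REAL, status TEXT)")
conn.executemany("INSERT INTO claims VALUES (?, ?, ?)",
                 [transform(r) for r in raw_claims])

total_paid = conn.execute(
    "SELECT SUM(amount) FROM claims WHERE status = 'paid'"
).fetchone()[0]
print(total_paid)   # 1200.50 + 815.25 = 2015.75
```

A real pipeline would point at PostgreSQL or a NoSQL store, but the extract/transform/load separation is the same.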

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

India

On-site


Job Description Key Responsibilities: Empower the MS Fabric Self-Service Analytics community by establishing standards, best practices, governance, and educational initiatives. Stay up to date with the evolving MS Fabric platform by proactively exploring new features and capabilities, and regularly share updates through enablement sessions and internal communications. Design, develop, and implement comprehensive data solutions utilizing the full suite of MS Fabric artifacts, including Lakehouses, Warehouses, Pipelines, Dataflows Gen2, Notebooks, and ML Models. Provide coaching and mentorship in both MS Fabric and Power BI. Conduct formal training sessions to support user skill development and platform adoption. Act as a subject matter expert in MS Fabric, offering architectural guidance, ingestion strategies, and performance optimization across all artifacts. Lead and support the optimization of capacity usage within MS Fabric and Power BI environments to ensure efficient and scalable analytics workloads. Present and demonstrate solutions at internal user group meetings and broader data community events, serving as an advocate for modern analytics practices. Collaborate with business and technical stakeholders to define data requirements and design end-to-end analytics workflows within the MS Fabric platform. Support advanced analytics and data science projects by leveraging the integrated ML and AI capabilities of MS Fabric. Maintain and continuously enhance automated, curated learning paths for both Power BI and MS Fabric to facilitate onboarding, upskilling, and role-based training within the Self-Service community. Minimum Qualifications: Excellent verbal and written communication skills in English. Bachelor's degree in a related field is required; an advanced degree is a plus. A minimum of 3 years of hands-on experience with Power BI, including report development, data modeling, and DAX. 
At least 2 years of experience working with MS Fabric artifacts such as Lakehouses, Warehouses, Pipelines, and Notebooks. Current Microsoft certifications relevant to MS Fabric (DP-700) and Power BI (PL-300). Proficiency in data transformation and ingestion using Pipelines, Dataflows Gen2, and integration with external data sources. Strong experience in developing and executing notebooks in MS Fabric using Python and SQL. Extensive hands-on experience with Python (including pandas, NumPy, scikit-learn) and SQL, managing complete data workflows from ingestion to modeling and machine learning. Knowledge and practical application of ML and AI capabilities within MS Fabric for delivering predictive analytics and intelligent data products. Proven expertise in optimizing capacity and performance across analytics platforms, particularly within MS Fabric and Power BI. Comfortable delivering presentations to large, diverse audiences in both virtual and in-person formats. Experience in building, guiding, and governing a scalable self-service analytics program with a strong focus on enablement. Demonstrated ability to engage effectively with stakeholders across business and technical teams.
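The pandas side of the notebook workflows this posting mentions can be sketched as ingest, clean, aggregate; the data and column names below are illustrative, not a Fabric-specific API:

```python
# Hedged sketch of a notebook-style pandas workflow: drop bad rows,
# fill gaps, then aggregate for reporting.
import pandas as pd

sales = pd.DataFrame({
    "region": ["N", "N", "S", "S", "S", None],
    "amount": [100.0, 150.0, 80.0, None, 120.0, 90.0],
})

clean = sales.dropna(subset=["region"])                 # drop unattributable rows
clean = clean.assign(amount=clean["amount"].fillna(0))  # treat missing amounts as 0

summary = (clean.groupby("region", as_index=False)["amount"]
                .sum()
                .sort_values("amount", ascending=False))
print(summary)
```

In a Fabric Lakehouse the source would be a table read rather than an in-memory frame, but the clean-then-aggregate shape carries over.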

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Designation: ML / MLOps Engineer Location: Noida (Sector 132) Key Responsibilities: • Model Development & Algorithm Optimization: Design, implement, and optimize ML models and algorithms using libraries and frameworks such as TensorFlow, PyTorch, and scikit-learn to solve complex business problems. • Training & Evaluation: Train and evaluate models using historical data, ensuring accuracy, scalability, and efficiency while fine-tuning hyperparameters. • Data Preprocessing & Cleaning: Clean, preprocess, and transform raw data into a suitable format for model training and evaluation, applying industry best practices to ensure data quality. • Feature Engineering: Conduct feature engineering to extract meaningful features from data that enhance model performance and improve predictive capabilities. • Model Deployment & Pipelines: Build end-to-end pipelines and workflows for deploying machine learning models into production environments, leveraging Azure Machine Learning and containerization technologies like Docker and Kubernetes. • Production Deployment: Develop and deploy machine learning models to production environments, ensuring scalability and reliability using tools such as Azure Kubernetes Service (AKS). • End-to-End ML Lifecycle Automation: Automate the end-to-end machine learning lifecycle, including data ingestion, model training, deployment, and monitoring, ensuring seamless operations and faster model iteration. • Performance Optimization: Monitor and improve inference speed and latency to meet real-time processing requirements, ensuring efficient and scalable solutions. • NLP, CV, GenAI Programming: Work on machine learning projects involving Natural Language Processing (NLP), Computer Vision (CV), and Generative AI (GenAI), applying state-of-the-art techniques and frameworks to improve model performance.
• Collaboration & CI/CD Integration: Collaborate with data scientists and engineers to integrate ML models into production workflows, building and maintaining continuous integration/continuous deployment (CI/CD) pipelines using tools like Azure DevOps, Git, and Jenkins. • Monitoring & Optimization: Continuously monitor the performance of deployed models, adjusting parameters and optimizing algorithms to improve accuracy and efficiency. • Security & Compliance: Ensure all machine learning models and processes adhere to industry security standards and compliance protocols, such as GDPR and HIPAA. • Documentation & Reporting: Document machine learning processes, models, and results to ensure reproducibility and effective communication with stakeholders. Required Qualifications: • Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field. • 3+ years of experience in machine learning operations (MLOps), cloud engineering, or similar roles. • Proficiency in Python, with hands-on experience using libraries such as TensorFlow, PyTorch, scikit-learn, Pandas, and NumPy. • Strong experience with Azure Machine Learning services, including Azure ML Studio, Azure Databricks, and Azure Kubernetes Service (AKS). • Knowledge and experience in building end-to-end ML pipelines, deploying models, and automating the machine learning lifecycle. • Expertise in Docker, Kubernetes, and container orchestration for deploying machine learning models at scale. • Experience in data engineering practices and familiarity with cloud storage solutions like Azure Blob Storage and Azure Data Lake. • Strong understanding of NLP, CV, or GenAI programming, along with the ability to apply these techniques to real-world business problems. • Experience with Git, Azure DevOps, or similar tools to manage version control and CI/CD pipelines.
• Solid experience in machine learning algorithms, model training, evaluation, and hyperparameter tuning.
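The hyperparameter-tuning step listed above can be sketched with scikit-learn's GridSearchCV on a toy dataset; the grid values are illustrative only:

```python
# Hedged sketch: exhaustive grid search with cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [25, 50], "max_depth": [3, None]},
    cv=3,                    # 3-fold cross-validation per candidate
    scoring="accuracy",
)
grid.fit(X, y)

print(grid.best_params_, round(grid.best_score_, 3))
```

In an MLOps pipeline this search would run as a tracked training job (e.g. in Azure ML) rather than inline, but the API is the same.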

Posted 2 weeks ago

Apply

7.0 - 12.0 years

10 - 20 Lacs

Bengaluru

Work from Office


Job Description The Automation Tester will be responsible for designing and developing test frameworks and automating tests using Cypress. Develop automated tests to test Web Services and REST APIs. Work closely with Scrum team members to clarify requirements, ensure testability and the ability to automate, and provide feedback on design, both functional and technical. Innovate on the latest tools and processes to improve QA functional manual and automation testing, document best practices and mentor junior team members. Identify test cases from acceptance criteria for the user stories, estimate work and participate in design reviews. Work on frameworks to ensure continuous deployment and continuous integration. Develop new proofs of concept for QA Automation, ensuring continual improvements. Qualifications and Skills: 7+ years of overall experience in QA, with a focus on test automation for at least 4 years. Strong expertise in Cypress for automating complex web applications. Experience in using Cypress with Cucumber for Behavior-Driven Development (BDD). Proficiency in Python, SQL, NumPy, and Pandas for comprehensive data validations. Experience in BDD with Cucumber. Proficient in testing REST APIs using Cypress. Experience with JavaScript, especially in the context of using Cypress for automation. Proficient in Git for version control. Working knowledge of CI, CD & CT implementation using GitHub Actions. Skills in test integrations, defect management, and test management using Azure DevOps/VSTS. Excellent verbal and written communication skills. Self-driven, results-oriented, motivated, and a collaborative team player.
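The Python/Pandas/NumPy data-validation side of this role (as opposed to the Cypress UI side) can be sketched as comparing an actual payload against an expected reference frame; the names and tolerance below are made up:

```python
# Hedged sketch: reconcile expected vs. actual values in a QA data check.
import numpy as np
import pandas as pd

expected = pd.DataFrame({"order_id": [1, 2, 3], "total": [9.99, 25.00, 14.50]})
actual   = pd.DataFrame({"order_id": [1, 2, 3], "total": [9.99, 25.00, 14.55]})

merged = expected.merge(actual, on="order_id", suffixes=("_exp", "_act"))
merged["match"] = np.isclose(merged["total_exp"], merged["total_act"], atol=0.01)

mismatches = merged[~merged["match"]]       # rows the automated check should report
print(mismatches[["order_id", "total_exp", "total_act"]])
```

In a CI run (e.g. GitHub Actions), a non-empty `mismatches` frame would fail the job.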

Posted 2 weeks ago

Apply

7.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Company Description At EVERSANA, we are proud to be certified as a Great Place to Work across the globe. We’re fueled by our vision to create a healthier world. How? Our global team of more than 7,000 employees is committed to creating and delivering next-generation commercialization services to the life sciences industry. We are grounded in our cultural beliefs and serve more than 650 clients ranging from innovative biotech start-ups to established pharmaceutical companies. Our products, services and solutions help bring innovative therapies to market and support the patients who depend on them. Our jobs, skills and talents are unique, but together we make an impact every day. Join us! Across our growing organization, we embrace diversity in backgrounds and experiences. Improving patient lives around the world is a priority, and we need people from all backgrounds and swaths of life to help build the future of the healthcare and the life sciences industry. We believe our people make all the difference in cultivating an inclusive culture that embraces our cultural beliefs. We are deliberate and self-reflective about the kind of team and culture we are building. We look for team members that are not only strong in their own aptitudes but also who care deeply about EVERSANA, our people, clients and most importantly, the patients we serve. We are EVERSANA. Job Description Introduction: We are looking for an experienced professional to lead patient analytics projects, working closely with pharmaceutical clients to deliver robust analytical solutions. The ideal candidate should have strong technical expertise, hands-on experience with patient-level data, and the ability to engage directly with clients while guiding a team. Key Responsibilities Act as a subject matter expert to determine scientific and methodological aspects of RWE projects. Independently draft and edit high quality research proposals and statistical analysis plans. 
Guide team members with components of RWE analysis, such as cohort creation, target/control creation, comorbidity analysis, adverse reactions, line of therapy, cost analysis, provider and payer analysis, KOL segmentation and targeting, etc., based on clinical knowledge of the disease area. Develop and apply data mining, statistical, and machine learning techniques and models to extract analytic insights from healthcare and non-healthcare data sets. Develop software solutions and applications using SQL, Python, and/or R. Responsible for review, generation, and delivery of analytic products in support of project work, RFP responses and other business needs. Support the development of visualizations and presentations for client deliverables. Interface with EVERSANA analysts, data scientists, clinicians, program managers and with EVERSANA customers. Responsible for the development and delivery of projects within defined timelines. Respond to additional ad-hoc analysis requests. All other duties as assigned. Qualifications 7-10 years of experience in data analytics and modeling, preferably around healthcare claims data and commercial analytics in the life science industry. Practical knowledge of statistics, statistical inference, and machine learning algorithms (deep learning, LSTM, SVM, CNN, GAN, and reinforcement learning). Hands-on experience and proficiency in SQL, Python (pandas, scikit-learn, NumPy, SciPy), R, visualization tools, and software development within a Linux environment. Deep commercial awareness of the healthcare, health technology and pharmaceutical industry, gained through experience. High level of knowledge and analytic experience with real world data sources (e.g., EHR/EMR data, RWD, healthcare prescription / claims databases) as well as comprehensive understanding of the RWD supply chain and reimbursement policies. Expertise in healthcare industry terminology and coding (e.g.
NDC11, ICD9/ICD10, HCPCS, CPT, LOINC). Works independently and is a self-starter. MS PowerPoint presentation development skills. Client interface and presentation skills. Ability to lead by example. Strong organizational skills, including ability to manage multiple deliverables across multiple projects. Significant consideration will be given to prior publications, independent software projects completed, and non-proprietary code authored by the candidate. Additional Information All your information will be kept confidential according to EEO guidelines. Our team is aware of recent fraudulent job offers in the market misrepresenting EVERSANA. Recruitment fraud is a sophisticated scam commonly perpetrated through online services using fake websites, unsolicited e-mails, or even text messages claiming to be a legitimate company. Some of these scams request personal information and even payment for training or job application fees. Please know EVERSANA would never require personal information nor payment of any kind during the employment process. We respect the personal rights of all candidates looking to explore careers at EVERSANA. From EVERSANA’s inception, Diversity, Equity & Inclusion have always been key to our success. We are an Equal Opportunity Employer, and our employees are people with different strengths, experiences, and backgrounds who share a passion for improving the lives of patients and leading innovation within the healthcare industry. Diversity not only includes race and gender identity, but also age, disability status, veteran status, sexual orientation, religion, and many other parts of one’s identity. All of our employees’ points of view are key to our success, and inclusion is everyone's responsibility. Follow us on LinkedIn | Twitter
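The cohort-creation step this posting describes can be sketched in pandas: select patients with a qualifying diagnosis and derive an index date. The claims, patient IDs, and the E11.* rule below are synthetic and purely illustrative:

```python
# Hedged sketch: build a cohort from toy claims data on a diagnosis-code rule.
import pandas as pd

claims = pd.DataFrame({
    "patient_id": ["P1", "P1", "P2", "P3", "P3"],
    "icd10":      ["E11.9", "I10", "I10", "E11.9", "E11.65"],
    "svc_date":   pd.to_datetime(
        ["2023-01-05", "2023-02-10", "2023-01-20", "2023-03-01", "2023-04-15"]),
})

# Cohort rule: any claim carrying an E11.* (type 2 diabetes) diagnosis code
qualifies = claims["icd10"].str.startswith("E11")
cohort_ids = claims.loc[qualifies, "patient_id"].unique()

# Index date: first qualifying claim per patient
index_dates = claims[qualifies].groupby("patient_id")["svc_date"].min()
print(sorted(cohort_ids), index_dates.to_dict())
```

Real cohort logic layers on washout periods, continuous-enrollment checks, and target/control matching, but this is the core selection pattern.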

Posted 2 weeks ago

Apply

2.0 - 3.0 years

0 Lacs

Shivajinagar, Pune, Maharashtra

On-site


We're Hiring | Junior Backend Developer (Python) Location: Pune (On-site) Experience: Minimum 2 years (Mandatory) Availability: Immediate Joiners Only Salary: Up to ₹12 LPA We are looking for a skilled Junior Backend Developer who can contribute from Day 1 and thrive in a fast-paced environment. Mandatory Requirements: Minimum 2 years of professional experience in Python development Currently residing in Pune Immediate joining availability Hands-on experience with: Caching strategies (Redis, Memcached) Databases (PostgreSQL, AWS) Data analysis tools (Pandas, NumPy) Basic understanding of ETL pipelines and AI/ML concepts Strong communication skills to collaborate with both technical and non-technical teams Note: Applications will be considered only from candidates who meet all mandatory criteria . Key Responsibilities: Write clean, production-ready Python code Build and maintain high-performance software systems Collaborate with team members for testing, reviews, and performance tuning Contribute to continuous team growth through feedback and goal setting To Apply: Send your CV to: anjali.chauhan@candyseedstech.com Contact: 7498 306 742 Let’s build impactful technology together. Apply now if you match the above criteria. Job Type: Full-time Pay: ₹800,000.00 - ₹1,000,000.00 per year Benefits: Paid sick time Paid time off Provident Fund Schedule: Day shift Supplemental Pay: Joining bonus Overtime pay Performance bonus Yearly bonus Ability to commute/relocate: Shivajinagar, Pune, Maharashtra: Reliably commute or planning to relocate before starting work (Preferred) Education: Bachelor's (Preferred) Experience: Python: 3 years (Preferred) Location: Shivajinagar, Pune, Maharashtra (Preferred) Work Location: In person Application Deadline: 07/06/2025 Expected Start Date: 08/06/2025
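Redis and Memcached (which this posting lists under caching strategies) need a running server, so here is a hedged stdlib sketch of the same memoization idea they implement at scale; the function and its return value are made up:

```python
# Hedged sketch: cache an expensive lookup so repeat calls skip the backend.
from functools import lru_cache

CALLS = {"n": 0}

@lru_cache(maxsize=128)
def fetch_report(report_id: int) -> str:
    CALLS["n"] += 1                 # stands in for a slow DB/API round-trip
    return f"report-{report_id}"

fetch_report(7)
fetch_report(7)                     # served from cache; backend hit only once
print(fetch_report(7), CALLS["n"])
```

With Redis the cache would live out-of-process and survive restarts, but the hit/miss logic is the same.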

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job description Job Title: AI Engineer Salary: 4 - 5.4 LPA Experience: Minimum 2 years Location: Hinjewadi, Pune Work Mode: Work from Office Availability: Immediate Joiner About Us: Rasta.AI, a product of AI Unika Technologies (P) Ltd, is a pioneering technology company based in Pune. We specialize in road infrastructure monitoring and maintenance using cutting-edge AI, computer vision, and 360-degree imaging. Our platform delivers real-time insights into road conditions to improve safety, efficiency, and sustainability. We collaborate with government agencies, private enterprises, and citizens to enhance road management through innovative tools and solutions. The Role This is a full-time, on-site role. As an AI Engineer, you will be responsible for developing innovative AI models and software solutions to address real-world challenges. You will collaborate with cross-functional teams to identify business opportunities and provide customized solutions. You will also work alongside talented engineers, designers, and data scientists to implement and maintain these models and solutions. 
Technical Skills Programming Languages: Python (and other AI-supported languages) Databases: SQL, Cassandra, MongoDB Python Libraries: NumPy, Pandas, Scikit-learn Deep Neural Networks: CNN, RNN, and LLM Data Analysis Libraries: TensorFlow, Pandas, NumPy, Scikit-learn, Matplotlib, TensorBoard Frameworks: Django, Flask, Pyramid, and CherryPy Operating Systems: Ubuntu, Windows Tools: Jupyter Notebook, PyCharm IDE, Excel, Roboflow Big Data (Bonus): Hadoop (Hive, Sqoop, Flume), Kafka, Spark Code Repository Tools: Git, GitHub DevOps-AWS: Docker, Kubernetes, instance hosting and management Analytical Skills Exploratory Data Analysis Predictive Modeling Text Mining Natural Language Processing Machine Learning Image Processing Object Detection Instance Segmentation Deep Learning DevOps AWS Knowledge Expertise Proficiency in the TensorFlow library with RNN and CNN Familiarity with pre-trained models like VGG-16, ResNet-50, and MobileNet Knowledge of Spark Core, Spark SQL, Spark Streaming, Cassandra, and Kafka Designing and architecting Hadoop applications Experience with chatbot platforms (a bonus) Responsibilities The entire lifecycle of model development: Data Collection and Preprocessing Model Development Model Training Model Testing Model Validation Deployment and Maintenance Collaboration and Communication Qualifications Bachelor's or Master's degree in a relevant field (AI, Data Science, Computer Science, etc.) Minimum 2 years of experience developing and deploying AI-based software products Strong programming skills in Python (and potentially C++ or Java) Experience with machine learning libraries (TensorFlow, PyTorch, Keras, scikit-learn) Experience with computer vision, natural language processing, or recommendation systems Experience with cloud computing platforms (Google Cloud, AWS) Problem-solving skills Excellent communication and presentation skills Experience with data infrastructure and tools (SQL, NoSQL, and big data platforms) Teamwork skills Join Us!
If you are passionate about AI and want to contribute to groundbreaking projects in a dynamic startup environment, we encourage you to apply! Be part of our mission to drive technological advancement in India. Drop your CV at hr@aiunika.com
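The model lifecycle this posting lists (train, validate, persist for deployment) can be sketched with scikit-learn and stdlib pickle on a toy dataset; this is an illustration of the pattern, not Rasta.AI's stack:

```python
# Hedged sketch: train, validate, and serialize a model artifact.
import pickle
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # training
score = model.score(X_te, y_te)                             # validation

blob = pickle.dumps(model)                                  # deployment artifact
restored = pickle.loads(blob)                               # what a server loads
print(round(score, 3), restored.score(X_te, y_te) == score)
```

Production deployments would wrap the restored model in an API (Flask, Django, etc.) and monitor it, which covers the "Deployment and Maintenance" stage.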

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Summary: We are looking for a Senior Data Engineer with deep expertise in Python, Apache Spark, and Apache Airflow to design, build, and optimize scalable data pipelines and processing frameworks. You will play a key role in managing large-scale data workflows, ensuring data quality, performance, and timely delivery for analytics and machine learning platforms.

Key Responsibilities:
- Design, develop, and maintain data pipelines using Apache Spark (PySpark) and Airflow for batch and near real-time processing.
- Write efficient, modular, and reusable Python code for ETL jobs, data validation, and transformation tasks.
- Implement robust data orchestration workflows using Apache Airflow (DAGs, sensors, hooks, etc.).
- Work with big data technologies on distributed platforms (e.g., Hadoop, AWS EMR, Databricks).
- Ensure data integrity, security, and governance across various stages of the pipeline.
- Monitor and optimize pipeline performance; resolve bottlenecks and failures proactively.
- Collaborate with data scientists, analysts, and other engineers to support data needs.
- Document architecture, processes, and code to support maintainability and scalability.
- Participate in code reviews, architecture discussions, and production deployments.
- Mentor junior engineers and provide guidance on best practices.

Required Skills:
- 8+ years of experience in data engineering or backend development roles.
- Strong proficiency in Python, including data manipulation (Pandas, NumPy) and writing scalable code.
- Hands-on experience with Apache Spark (preferably PySpark) for large-scale data processing.
- Extensive experience with Apache Airflow for workflow orchestration and scheduling.
- Deep understanding of ETL/ELT patterns, data quality, lineage, and data modeling.
- Familiarity with cloud platforms (AWS, GCP, or Azure) and related services (S3, BigQuery, Redshift, etc.).
- Solid experience with SQL, NoSQL, and file formats like Parquet, ORC, and Avro.
- Proficient with CI/CD pipelines, Git, Docker, and Linux-based development environments.

Preferred Qualifications:
- Experience with data lakehouse architectures (e.g., Delta Lake, Iceberg).
- Exposure to real-time streaming technologies (e.g., Kafka, Flink, Spark Streaming).
- Background in machine learning pipelines and MLOps tools (optional).
- Knowledge of data governance frameworks and compliance standards.

Soft Skills:
- Strong problem-solving and communication skills.
- Ability to work independently and lead complex projects.
- Experience working in agile and cross-functional teams.
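The "data validation and transformation" responsibility above can be sketched as a small Pandas/NumPy ETL step. The column names, schema, and thresholds here are illustrative assumptions, not from the listing:

```python
import numpy as np
import pandas as pd

def validate_and_transform(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative ETL step: enforce a schema, drop bad rows, derive a column."""
    required = {"order_id", "amount", "ts"}  # hypothetical expected schema
    missing = required - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {missing}")
    df = df.dropna(subset=["order_id", "amount"])  # basic quality gate
    df = df[df["amount"] >= 0].copy()              # reject negative amounts
    df["amount_log"] = np.log1p(df["amount"])      # derived feature
    return df

raw = pd.DataFrame({
    "order_id": [1, 2, None, 4],
    "amount": [10.0, -5.0, 7.0, 0.0],
    "ts": pd.to_datetime(["2024-01-01"] * 4),
})
clean = validate_and_transform(raw)
print(len(clean))  # 2 — the negative-amount row and the null-id row are dropped
```

In a real pipeline this logic would run inside a PySpark job or an Airflow task; the Pandas version keeps the example self-contained.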

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Title: Head - Python Engineering

Job Summary: We are looking for a skilled Python, AI/ML Developer with 8 to 12 years of experience to design, develop, and maintain high-quality back-end systems and applications. The ideal candidate will have expertise in Python and related frameworks, with a focus on building scalable, secure, and efficient software solutions. This role requires a strong problem-solving mindset, collaboration with cross-functional teams, and a commitment to delivering innovative solutions that meet business objectives.

Responsibilities

Application and Back-End Development:
- Design, implement, and maintain back-end systems and APIs using Python frameworks such as Django, Flask, or FastAPI, focusing on scalability, security, and efficiency.
- Build and integrate scalable RESTful APIs, ensuring seamless interaction between front-end systems and back-end services.
- Write modular, reusable, and testable code following Python's PEP 8 coding standards and industry best practices.
- Develop and optimize robust database schemas for relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB), ensuring efficient data storage and retrieval.
- Leverage cloud platforms like AWS, Azure, or Google Cloud for deploying scalable back-end solutions.
- Implement caching mechanisms using tools like Redis or Memcached to optimize performance and reduce latency.

AI/ML Development:
- Build, train, and deploy machine learning (ML) models for real-world applications, such as predictive analytics, anomaly detection, natural language processing (NLP), recommendation systems, and computer vision.
- Work with popular machine learning and AI libraries/frameworks, including TensorFlow, PyTorch, Keras, and scikit-learn, to design custom models tailored to business needs.
- Process, clean, and analyze large datasets using Python tools such as Pandas, NumPy, and PySpark to enable efficient data preparation and feature engineering.
- Develop and maintain pipelines for data preprocessing, model training, validation, and deployment using tools like MLflow, Apache Airflow, or Kubeflow.
- Deploy AI/ML models into production environments and expose them as RESTful or GraphQL APIs for integration with other services.
- Optimize machine learning models to reduce computational costs and ensure smooth operation in production systems.
- Collaborate with data scientists and analysts to validate models, assess their performance, and ensure their alignment with business objectives.
- Implement model monitoring and lifecycle management to maintain accuracy over time, addressing data drift and retraining models as necessary.
- Experiment with cutting-edge AI techniques such as deep learning, reinforcement learning, and generative models to identify innovative solutions for complex challenges.
- Ensure ethical AI practices, including transparency, bias mitigation, and fairness in deployed models.

Performance Optimization and Debugging:
- Identify and resolve performance bottlenecks in applications and APIs to enhance efficiency.
- Use profiling tools to debug and optimize code for memory and speed improvements.
- Implement caching mechanisms to reduce latency and improve application responsiveness.

Testing, Deployment, and Maintenance:
- Write and maintain unit tests, integration tests, and end-to-end tests using Pytest, Unittest, or Nose.
- Collaborate on setting up CI/CD pipelines to automate testing, building, and deployment processes.
- Deploy and manage applications in production environments with a focus on security, monitoring, and reliability.
- Monitor and troubleshoot live systems, ensuring uptime and responsiveness.

Collaboration and Teamwork:
- Work closely with front-end developers, designers, and product managers to implement new features and resolve issues.
- Participate in Agile ceremonies, including sprint planning, stand-ups, and retrospectives, to ensure smooth project delivery.
- Provide mentorship and technical guidance to junior developers, promoting best practices and continuous improvement.

Required Skills and Qualifications

Technical Expertise:
- Strong proficiency in Python and its core libraries, with hands-on experience in frameworks such as Django, Flask, or FastAPI.
- Solid understanding of RESTful API development, integration, and optimization.
- Experience working with relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB).
- Familiarity with containerization tools like Docker and orchestration platforms like Kubernetes.
- Expertise in using Git for version control and collaborating in distributed teams.
- Knowledge of CI/CD pipelines and tools like Jenkins, GitHub Actions, or CircleCI.
- Strong understanding of software development principles, including OOP, design patterns, and MVC architecture.

Preferred Skills:
- Experience with asynchronous programming using libraries like asyncio, Celery, or RabbitMQ.
- Knowledge of data visualization tools (e.g., Matplotlib, Seaborn, Plotly) for generating insights.
- Exposure to machine learning frameworks (e.g., TensorFlow, PyTorch, scikit-learn) is a plus.
- Familiarity with big data frameworks like Apache Spark or Hadoop.
- Experience with serverless architecture using AWS Lambda, Azure Functions, or Google Cloud Run.

Soft Skills:
- Strong problem-solving abilities with a keen eye for detail and quality.
- Excellent communication skills to effectively collaborate with cross-functional teams.
- Adaptability to changing project requirements and emerging technologies.
- Self-motivated with a passion for continuous learning and innovation.

Education: Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
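As a small illustration of the asynchronous programming listed under preferred skills, here is a minimal asyncio sketch; the task names and delays are invented for the example:

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Stand-in for an I/O-bound call (e.g., an external API request).
    await asyncio.sleep(delay)
    return f"{name}:done"

async def main() -> list:
    # Run both "requests" concurrently: total wall time is roughly the
    # slowest delay, not the sum of the delays.
    return await asyncio.gather(fetch("users", 0.1), fetch("orders", 0.1))

results = asyncio.run(main())
print(results)  # ['users:done', 'orders:done']
```

`asyncio.gather` preserves argument order in its result list, which is why the output is deterministic even though the coroutines run concurrently.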

Posted 2 weeks ago

Apply

10.0 - 12.0 years

0 Lacs

Greater Bengaluru Area

On-site


Role: Senior Manager - Data Scientist

Position overview: Work in an innovative and fast-paced AI application development team to conceptualize and execute projects that leverage the power of AI/ML and analytics. The work relates to producing business outcomes for the Cloud and Cybersecurity products and services of Novamesh, a wholly owned subsidiary of Tata Communications Ltd. Success in this role requires a mix of data science skills, appreciation of the business, and the ability to work across teams. A special focus of this role is identifying and executing ideas for creating monetizable product differentiators by working with domain experts from individual product teams, acquiring domain skills in the process.

Detailed job description:
- 10 to 12 years of industry experience with demonstrable outcomes in the field of data science
- Develop, test, and deploy ML/AI models for various products
- Perform data preprocessing, feature engineering, and ML/DL model evaluation
- Optimize and fine-tune models for performance and scalability
- Good understanding of statistical, ML, and AI models
- Good understanding of NLP concepts and projects involving entity recognition, text classification, and language modelling (e.g., GPT)
- Build and refine RAG (retrieval-augmented generation) models to improve information retrieval and answer generation
- Integrate RAG methods into existing applications to enhance data accessibility and user experience
- Work closely with cross-functional teams including software engineers, product managers, and domain experts
- Document processes, methodologies, and model development for internal and external stakeholders
- Go-getter attitude and the will to "make it happen"

Qualification and Skills:
- Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, or a related field from a reputed institution
- Strong knowledge of probability and statistics
- Expertise in machine learning and deep learning
- Hands-on experience with GenAI, LLMs, and SLMs
- Strong programming skills: Python, PyTorch, scikit-learn, NumPy; GenAI tools like LangChain/LlamaIndex, OpenAI
- SQL, flat-file databases, data lakes, data stores, data frames (Pandas, cuDF, etc.)
- Working knowledge of MLOps principles and experience implementing projects with Big Data in batch and streaming modes
- Good working knowledge of Data Engineering
- Excellent problem-solving skills and a proactive attitude
- Excellent communication skills and teamwork abilities
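The RAG work described above pairs retrieval with generation; the retrieval half can be sketched with NumPy cosine similarity over toy embeddings. The documents and vectors here are invented for illustration (real embeddings would come from an embedding model):

```python
import numpy as np

# Toy document "embeddings"; in practice these come from an embedding model.
docs = ["reset your password", "configure the firewall", "billing and invoices"]
doc_vecs = np.array([[0.9, 0.1, 0.0],
                     [0.1, 0.9, 0.2],
                     [0.0, 0.2, 0.9]])

def retrieve(query_vec, k=1):
    # Cosine similarity between the query and every document vector.
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec))
    top = np.argsort(sims)[::-1][:k]  # indices of the k best matches
    return [docs[i] for i in top]

best = retrieve(np.array([0.85, 0.2, 0.05]))
print(best)  # ['reset your password']
```

In a full RAG pipeline the retrieved passages would then be inserted into an LLM prompt to ground the generated answer.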

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

India

On-site


Job Description for Data Science Facilitator

Position Overview: We are searching for a highly skilled and proficient Data Science Facilitator for online upskilling courses in the field of data science. The facilitator will play a pivotal role in delivering comprehensive modules on cutting-edge data science topics, including machine learning, artificial intelligence, deep learning, web development, and more. The ideal candidate should possess a strong grasp of data science principles, technologies, and practical applications, coupled with exceptional communication and instructional abilities. Staying updated with the latest advancements in data science and effectively communicating these concepts to course participants is a fundamental aspect of this role.

Responsibilities:

Curriculum Development and Delivery: Develop and present engaging training modules that cover advanced data science topics, catering to learners' varying levels of proficiency. Deliver insightful content on subjects such as machine learning algorithms, artificial intelligence frameworks, deep learning architectures, and web development techniques.

Technical Skills:
- Programming language: Python
- AI and data analysis packages/frameworks: NumPy, Pandas, Seaborn, scikit-learn, TensorFlow, and Keras
- Knowledge of Natural Language Processing or Computer Vision is an advantage

Customization of Learning Material: Tailor existing course materials or create new content to align with the specific learning requirements and skill levels of diverse participants, spanning from novice learners to experienced data science professionals.

Stay Abreast of Industry Trends: Continuously research and monitor the dynamic landscape of data science, including emerging technologies, methodologies, and industry best practices, to ensure that training content remains relevant and up to date.

Individualized Mentorship: Provide personalized guidance and support to participants, addressing their inquiries and assisting them in overcoming challenges encountered while mastering data science methodologies.

Qualification:
1. M.Sc. (Computer Science), or
2. MCA (Master of Computer Applications), or
3. B.Tech or M.Tech in Computer Engineering or IT

Technical Skills:
1. Programming language: Python
2. Database: any one of MySQL, Oracle, SQL Server, or PostgreSQL
3. Data science: NumPy, Pandas, Matplotlib, Seaborn, EDA
4. Machine learning: scikit-learn; ML models for regression, classification, and clustering problems
5. Additional knowledge of Tableau/Power BI will be an advantage

Work Experience:
1. Minimum 5 years of teaching experience in a relevant domain; this may be lowered to 3 years for an exceptional candidate.

Additional requirements:
- Proven expertise and hands-on experience in data science, particularly in advanced areas like machine learning, artificial intelligence, deep learning, and web development.
- Exceptional presentation and facilitation skills, with the ability to engage and inspire learners in an online environment.
- Outstanding verbal and written communication skills, enabling the clear and concise explanation of intricate concepts to diverse audiences.
- In-depth comprehension of data science principles, tools, techniques, and industry standards.
- Capability to adapt instructional content based on participants' varying levels of familiarity with data science.
- Experience in online teaching, curriculum design, and the application of interactive learning tools will be advantageous.
- Relevant certifications (e.g., Certified Data Scientist, Google TensorFlow Developer, AWS Machine Learning Specialty) would be a plus.
- Effective problem-solving skills and the ability to address participants' inquiries and obstacles in a constructive manner.
- High level of professionalism, commitment to maintaining confidentiality, and adherence to ethical standards.
- Autonomous and collaborative work ethic, proficiently managing time and tasks.
- Keenness to continually enhance personal knowledge and skills within the evolving realm of data science.

You can also email sadafa@regenesys.net
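The NumPy/Pandas/EDA skills this listing asks for boil down to a few staple steps; here is a minimal sketch on a toy dataset (the columns and values are invented for the example):

```python
import numpy as np
import pandas as pd

# Toy dataset standing in for a course exercise.
df = pd.DataFrame({
    "age": [22, 35, np.nan, 41, 29],
    "salary": [30_000, 52_000, 48_000, np.nan, 39_000],
})

# Two staple EDA steps: missing-value counts and summary statistics.
missing = df.isna().sum()
summary = df.describe()

print(int(missing["age"]), int(missing["salary"]))  # 1 1
print(summary.loc["mean", "age"])  # mean of [22, 35, 41, 29] = 31.75
```

`describe()` skips NaN values by default, which is why the mean is computed over four ages rather than five.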

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Greetings,

We have an immediate opportunity for a Python Developer (5-7+ years) at Synechron, Mumbai.

Job Role: Python Developer
Job Location: Mumbai

About Synechron

We began life in 2001 as a small, self-funded team of technology specialists. Since then, we've grown our organization to 14,500+ people, across 58 offices, in 21 countries, in key global markets.

Innovative tech solutions for business: We're now a leading global digital consulting firm, providing innovative technology solutions for business. As a trusted partner, we're always at the forefront of change as we lead digital optimization and modernization journeys for our clients.

Customized end-to-end solutions: Our expertise in AI, Consulting, Data, Digital, Cloud & DevOps and Software Engineering delivers customized, end-to-end solutions that drive business value and growth.

Job Title: Python Developer
Location: Mumbai
Job Type: [Full-Time/Part-Time/Contract]
Experience Range: 5-7+ years

Company Overview: [Insert Company Name] is a leading [insert industry] company dedicated to [briefly describe company mission or goals]. We are looking for a skilled Python Developer to join our dynamic team and contribute to our innovative projects.

Job Summary: As a Python Developer, you will be responsible for designing, developing, and deploying scalable software applications. You will work closely with cross-functional teams to gather requirements and implement solutions that enhance our product offerings. A strong understanding of data manipulation and analysis using libraries such as NumPy and Pandas is essential. Familiarity with RAG (Red, Amber, Green) reporting is a plus.

Key Responsibilities:
- Design, develop, and maintain high-quality software applications using Python.
- Utilize NumPy and Pandas for data manipulation, analysis, and visualization.
- Collaborate with data scientists and analysts to implement robust data pipelines.
- Write efficient, reusable, and reliable code while adhering to best practices.
- Conduct code reviews and provide constructive feedback to team members.
- Troubleshoot and debug applications to optimize performance.
- Contribute to the design and architecture of new features and systems.
- Participate in agile development processes and sprint planning.
- Stay up to date with the latest industry trends and technologies.

Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5-7+ years of experience in software development, specifically with Python.
- Strong proficiency in data manipulation and analysis using NumPy and Pandas.
- Experience in developing RESTful APIs and integrating with third-party services.
- Knowledge of version control systems (e.g., Git).
- Familiarity with databases (SQL and NoSQL).
- Good understanding of software development methodologies (Agile/Scrum).
- Excellent problem-solving skills and the ability to work independently or in a team environment.

Preferred Skills:
- Familiarity with RAG (Red, Amber, Green) reporting and its implementation in software applications.
- Experience with cloud services (AWS, Azure, Google Cloud) is a plus.
- Knowledge of front-end technologies (HTML, CSS, JavaScript) is a plus.
- Experience with machine learning libraries (e.g., scikit-learn, TensorFlow) would be an advantage.

What We Offer:
- Competitive salary and benefits package.
- Opportunities for professional development and career growth.
- A collaborative and inclusive work environment.
- [Insert any other unique benefits or perks your company offers.]

Application Process: Interested candidates are invited to submit their resume and a cover letter outlining their qualifications and experience to [Insert Application Email/Link].

For more information on the company, please visit our website or LinkedIn community.
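The RAG (Red, Amber, Green) reporting this listing mentions is a simple status-banding scheme; a minimal Pandas sketch follows, with the metric name and thresholds chosen as assumptions for illustration:

```python
import pandas as pd

def rag_status(value, amber=70.0, green=90.0):
    # Map a completion percentage to a Red/Amber/Green band.
    if value >= green:
        return "Green"
    if value >= amber:
        return "Amber"
    return "Red"

report = pd.DataFrame({"project": ["A", "B", "C"],
                       "completion_pct": [95.0, 72.5, 40.0]})
report["status"] = report["completion_pct"].apply(rag_status)
print(report["status"].tolist())  # ['Green', 'Amber', 'Red']
```

In a real application the thresholds would come from configuration and the statuses would typically drive dashboard colouring or alerting.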
If you find this opportunity interesting, kindly share your updated profile with Mandar.Jadhav@synechron.com along with the below details (mandatory):
- Total experience
- Experience as a Data Engineer
- Experience in Spark
- Experience in Hive
- Experience in SQL
- Current CTC
- Expected CTC
- Notice period
- Current location
- Ready to relocate to Mumbai
- Have you interviewed at Synechron before? If yes, when?

Regards,
Mandar Jadhav
Mandar.Jadhav@synechron.com

Posted 2 weeks ago

Apply

Exploring numpy Jobs in India

NumPy is a widely used Python library for numerical computing and data analysis. In India, demand for professionals with NumPy expertise is growing, and job seekers in this field can find exciting opportunities across various industries. Let's explore the numpy job market in India in more detail.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Gurgaon
  5. Chennai

Average Salary Range

The average salary range for numpy professionals in India varies by experience level:

- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-20 lakhs per annum

Career Path

Typically, a career in numpy progresses as follows:

- Junior Developer
- Data Analyst
- Data Scientist
- Senior Data Scientist
- Tech Lead

Related Skills

In addition to numpy, professionals in this field are often expected to have knowledge of:

- Pandas
- Scikit-learn
- Matplotlib
- Data visualization

Interview Questions

  • What is numpy and why is it used? (basic)
  • Explain the difference between a Python list and a numpy array. (basic)
  • How can you create a numpy array with all zeros? (basic)
  • What is broadcasting in numpy? (medium)
  • How can you perform element-wise multiplication of two numpy arrays? (medium)
  • Explain the use of the np.where() function in numpy. (medium)
  • What is vectorization in numpy? (advanced)
  • How does memory management work in numpy arrays? (advanced)
  • Describe the difference between np.array and np.matrix in numpy. (advanced)
  • How can you speed up numpy operations? (advanced)
  • ...
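Several of the questions above can be answered in a few lines of code; this sketch covers array creation, element-wise multiplication, broadcasting, and np.where:

```python
import numpy as np

zeros = np.zeros((2, 3))                  # an array of all zeros (basic)

a = np.array([1, 2, 3])
b = np.array([10, 20, 30])
prod = a * b                              # element-wise multiplication (medium)

col = np.array([[1], [2]])                # shape (2, 1)
row = np.array([10, 20, 30])              # shape (3,)
grid = col + row                          # broadcasting stretches both to (2, 3) (medium)

labels = np.where(a > 1, "big", "small")  # vectorized conditional, no Python loop (medium)

print(prod.tolist())    # [10, 40, 90]
print(grid.shape)       # (2, 3)
print(labels.tolist())  # ['small', 'big', 'big']
```

Vectorized expressions like these are also the answer to the speed question: replacing Python-level loops with whole-array operations pushes the work into NumPy's compiled C loops.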

Closing Remark

As you explore job opportunities in the field of numpy in India, remember to keep honing your skills and stay updated with the latest developments in the industry. By preparing thoroughly and applying confidently, you can land the numpy job of your dreams!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies