Home
Jobs

919 Pandas Jobs - Page 26

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

5 - 9 years

0 - 3 Lacs

Bengaluru

Work from Office


Hello Everyone, Optum is hiring a Data Engineer in Bangalore with experience in Python, NumPy, Pandas, and MS SQL. Experience: 5+ yrs. Notice Period: Immediate to 30 days. Location: Bangalore. NOTE: Diversity candidates are preferred. Primary Skills: Python, Pandas, NumPy, MS SQL. Required Qualifications: Cloud Solutions: Experience with Platform-as-a-Service (PaaS) and cloud solutions, particularly those focused on data stores and associated ecosystems. Data Provisioning: Experience in data provisioning and ensuring data quality. Big Data Development: Experience with big data technologies and frameworks. SQL Server: Proficiency in SQL Server is essential for managing and querying databases effectively. Data Security and Administration: Knowledge of data security practices and database administration. Data Warehousing: Knowledge of data warehousing concepts and best practices. ETL Development using Python: Solid knowledge is crucial for designing, implementing, and maintaining efficient ETL processes. Database Architecture and Design: Understanding of database architecture, design, and optimization. Scripting Languages: Proficiency in other scripting languages like SQL or other data markup scripting languages. Data Modeling: Proven ability to design and implement data models that support business requirements. Interested candidates can share their updated CV at mohammad_ayub@optum.com or reach me on 8008600298. Regards, Arshad Ayub Mohammad, mohammad_ayub@optum.com, 8008600298

Posted 2 months ago

Apply

1 - 3 years

2 - 4 Lacs

Coimbatore

Work from Office


Dear Candidate, We are looking for an immediate joiner for the position of Junior Python Developer. Overview: We're looking for an immediate joiner with experience in Python, SQL, Power BI, Django, and data analysis tools like NumPy and Pandas. You'll develop web apps with Django, analyze data with SQL, and create Power BI reports and dashboards. Responsibilities: Develop web applications using Django and Python. Build APIs with Django Rest Framework (DRF). Write SQL queries for data extraction and analysis. Use NumPy and Pandas for data manipulation. Create and maintain Power BI dashboards. Automate tasks with Python and assist in ETL processes. Troubleshoot and collaborate on technical solutions. Preferred Skills: Experience with Python, Django, SQL, Power BI, NumPy, and Pandas. Familiarity with DRF and relational databases (SQL Server, MySQL). Strong problem-solving skills and attention to detail. Bachelor's degree in Computer Science or a related field. 1-2 years of relevant experience. Desired Skills: Knowledge of front-end technologies (HTML, CSS, JavaScript). Experience with cloud platforms (Azure) or other BI tools (Tableau, Qlik). Immediate Joiners Preferred! Interested? Reach out to 9363591205 or kavyad@parkcorporates.com

Posted 2 months ago

Apply

2 - 4 years

7 - 12 Lacs

Ahmedabad

Work from Office


Job Overview: Lead the development of AI and ML solutions to enhance automation capabilities and create intelligent processing systems for business operations. Roles and Responsibilities: AI/ML Model Development: Discover and train relevant AI/ML models. Python-Based Solutions: Create Python-based platforms, applications, and services integrating the AI/ML models and integrating with existing systems/RPA bots through APIs. NLP Solutions: Develop NLP solutions for document processing, reconciliations, analysis, etc. ML Model Maintenance: Implement and maintain ML models in production. Advanced Technologies: Implement RAG, vector search, and the use of LangChain/AutoGen. Must Haves: 2+ years of Python development experience. Strong background in machine learning and AI technologies. Knowledge of deep learning frameworks (TensorFlow, PyTorch). Proficiency in NLP and computer vision. Knowledge of RESTful APIs and microservices architecture. Knowledge of frameworks such as LangChain, AutoGen. Experience with cloud platforms (AWS/Azure/GCP). Qualifications: Bachelor's or Master's degree in Computer Science, AI, or related field.

Posted 2 months ago

Apply

2 - 4 years

3 - 7 Lacs

Jaipur

Work from Office


Position Overview: We are seeking a motivated and experienced Web Scraping Developer with 2+ years of hands-on experience to join our team. The ideal candidate will possess strong technical skills in web scraping, data processing, and performance optimization, with expertise in Python, popular scraping frameworks, and large dataset management. You will play a critical role in extracting and processing data from a variety of websites while ensuring compliance with legal and ethical guidelines. Key Responsibilities: Develop and maintain scalable and optimized Python-based web scraping scripts. Use web scraping frameworks (Scrapy, Beautiful Soup, Selenium, etc.) to extract data from static and dynamic websites. Implement solutions for handling dynamic content using headless browsers like Playwright or Puppeteer. Extract and process data efficiently from complex HTML, CSS, JavaScript, and XPath structures. Work with large datasets, using tools such as Pandas for data manipulation, cleaning, and processing. Ensure proper handling of data export in formats like CSV, JSON, and direct database integration. Consume and integrate REST APIs, manage API rate limits, and handle HTTP protocols, cookies, and sessions. Manage data storage using relational databases like MySQL, optimising queries and indexing for large datasets. Troubleshoot and bypass common anti-scraping techniques such as CAPTCHAs, IP blocking, and user-agent tracking. Use tools and techniques like rotating proxies, headless browsers, and CAPTCHA-solving services to mitigate blocking. Collaborate with teams using version control systems like Git, perform code reviews, and contribute to collaborative workflows. Optimize scraping scripts for performance, including parallel processing or distributed scraping with tools like Celery, Redis, or AWS Lambda. Deploy scraping solutions using tools like Docker, AWS, or Google Cloud, and automate scraping tasks with schedulers (e.g., Cron). Implement robust error-handling mechanisms and monitor scraping jobs with logging frameworks. Stay updated with web scraping trends and ensure that projects comply with web scraping ethics, copyright laws, and website terms of service. Required Qualifications: 2+ years of hands-on experience in web scraping using Python. Strong proficiency in Scrapy, Beautiful Soup, Selenium, or similar scraping frameworks. Experience with headless browsers like Playwright or Puppeteer for handling complex websites. In-depth understanding of HTML, CSS, XPath, and JavaScript for dynamic content interaction. Proficiency in data handling with Pandas and experience in exporting data in multiple formats (CSV, JSON, databases). Strong knowledge of REST APIs and working with web protocols like HTTP, cookies, and session management. Experience in managing MySQL databases, query optimization, and indexing for large-scale data. Familiarity with anti-scraping techniques and proficiency in bypassing measures like CAPTCHAs and IP blocks. Experience with version control tools like Git and familiarity with collaborative workflows and code review processes. Hands-on experience in performance optimization, parallel processing, and distributed scraping. Knowledge of deploying scraping solutions using Docker, AWS, or Google Cloud. Strong problem-solving skills, including error-handling and debugging using logging frameworks. Awareness of web scraping ethics, copyright laws, and legal compliance with website terms of service. 
Preferred Qualifications: Familiarity with cloud-based solutions like AWS Lambda, Google Cloud, or Azure for distributed scraping. Experience with workflow automation tools and Cron jobs. Basic knowledge of frontend development (HTML, CSS, JavaScript) is a plus.
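For context on the export step described in this listing (Pandas with CSV and JSON output), below is a minimal, hedged sketch of a scrape-and-export flow. The URL, CSS selectors, and output file names are hypothetical placeholders, not details from the posting.

```python
import pandas as pd
import requests
from bs4 import BeautifulSoup

# Hypothetical target page; the real URL, selectors, and rate limits depend
# on the site being scraped and its terms of service.
URL = "https://example.com/listings"

response = requests.get(URL, headers={"User-Agent": "demo-scraper/0.1"}, timeout=30)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
rows = []
for item in soup.select("div.listing"):  # assumed container class
    title = item.select_one("h2")
    price = item.select_one("span.price")
    if title and price:
        rows.append({
            "title": title.get_text(strip=True),
            "price": price.get_text(strip=True),
        })

df = pd.DataFrame(rows)
df.to_csv("listings.csv", index=False)         # CSV export
df.to_json("listings.json", orient="records")  # JSON export
```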

Posted 2 months ago

Apply

8 - 12 years

10 - 14 Lacs

Chennai

Work from Office


We are looking for an experienced Python ETL Developer to design, develop, and optimize data pipelines. The ideal candidate should have expertise in Python, PySpark, Airflow, and data processing frameworks, along with the ability to work independently and communicate effectively in English. Roles & Responsibilities Develop and maintain ETL pipelines using Python, NumPy, Pandas, PySpark, and Apache Airflow. Work with large-scale data processing and transformation workflows. Optimize and enhance ETL performance and scalability. Collaborate with data engineers and business teams to ensure efficient data flow. Troubleshoot and debug ETL-related issues to ensure data integrity and reliability. Qualifications & Skills 8+ years of Python experience, with 5+ years dedicated to Python ETL development. Proficiency in PySpark, Apache Airflow, NumPy, and Pandas. Experience in working with SQLAlchemy and FastAPI (added advantage). Strong problem-solving skills and the ability to work independently. Good English communication skills to collaborate with global teams. Preferred Qualifications: Experience in cloud-based ETL solutions (AWS, GCP, Azure). Knowledge of big data technologies like Hadoop, Spark, or Kafka.
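As a rough illustration of the stack this posting names (a Pandas transform scheduled by Apache Airflow), here is a minimal sketch. The DAG id, schedule, file paths, and column names are assumptions for illustration only, and the `schedule` argument follows the Airflow 2.4+ signature.

```python
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator


def transform_orders():
    # Assumed input/output locations; a real pipeline would read from a source
    # system and write to a warehouse or data lake.
    df = pd.read_csv("/data/raw/orders.csv", parse_dates=["order_date"])
    df = df.dropna(subset=["customer_id"])  # basic cleansing
    daily = (
        df.groupby(df["order_date"].dt.date)["amount"]
          .sum()
          .reset_index(name="daily_revenue")
    )
    daily.to_csv("/data/curated/daily_revenue.csv", index=False)


with DAG(
    dag_id="orders_daily_revenue",   # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="transform", python_callable=transform_orders)
```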

Posted 2 months ago

Apply

3 - 7 years

8 - 14 Lacs

Thane

Work from Office


Role Description: - The Python Developer will work on backend scripts and RESTful API development. - ML is not in scope. - The developer should have experience in containerization and orchestration. - Experience with Python libraries such as NumPy and Pandas, and the Django framework. - Should have hands-on experience in handling backend scripts.

Posted 2 months ago

Apply

3 - 7 years

5 - 9 Lacs

Jamshedpur

Work from Office


Role Description: - The Python Developer will work on backend scripts and RESTful API development. - ML is not in scope. - The developer should have experience in containerization and orchestration. - Experience with Python libraries such as NumPy and Pandas, and the Django framework. - Should have hands-on experience in handling backend scripts.

Posted 2 months ago

Apply

3 - 8 years

5 - 15 Lacs

Salem, Saudi Arabia

Work from Office


Certified in at least one recent web technology. Solid experience in web development using recent web technologies and frameworks (3 or more years of experience). Certified in Artificial Intelligence/Machine Learning. 3 or more successful Artificial Intelligence projects within enterprise settings. Solid experience in developing AI applications using Python machine learning libraries such as NumPy, Pandas, Scikit-Learn, PyTorch, NLTK, spaCy, etc. (2 or more years of experience). Advanced knowledge in machine learning, deep learning, natural language processing, generative AI, large language models, and retrieval-augmented generation.

Posted 2 months ago

Apply

3 - 7 years

8 - 14 Lacs

Kanpur

Work from Office


Role Description: - The Python Developer will work on backend scripts and RESTful API development. - ML is not in scope. - The developer should have experience in containerization and orchestration. - Experience with Python libraries such as NumPy and Pandas, and the Django framework. - Should have hands-on experience in handling backend scripts.

Posted 2 months ago

Apply

2 - 7 years

8 - 14 Lacs

Nagpur

Work from Office


- Must have 2+ years of work experience as a Data Scientist - Strong hands-on Python programming - Good to have experience with Python libraries like NumPy & Pandas - Working experience in Machine Learning and Deep Learning is a must - Should have deployment experience - Experience in Deep Learning using Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and LSTM - Should have completed projects in Deep Learning and NLP, with hands-on experience in model deployment - Thorough understanding of state-of-the-art DL concepts (sequence modelling, attention, convolution, etc.) along with a knack for imagining new schemas - Knowledge of front-end design is an added advantage - Knowledge of MLOps, Data Factory, Databricks, and time series forecasting is mandatory

Posted 2 months ago

Apply

2 - 5 years

14 - 17 Lacs

Pune

Work from Office


As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows for source-to-target and implementing solutions that tackle the client's needs. Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our client's business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Strong proficiency in Python: Expertise in Python programming with a focus on data processing and manipulation. Experience with Apache Spark (PySpark): In-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big Data Technologies: Familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data Engineering Skills: Strong understanding of ETL pipelines, data modeling, and data warehousing concepts. SQL Proficiency: Experience writing optimized SQL queries for large-scale data analysis and transformation. Data Processing Frameworks: Knowledge of data processing libraries such as Pandas, NumPy, and Dask. Cloud Platforms: Experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.

Posted 2 months ago

Apply

2 - 5 years

14 - 17 Lacs

Pune

Work from Office


As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows for source-to-target and implementing solutions that tackle the client's needs. Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our client's business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Strong proficiency in Python: Expertise in Python programming with a focus on data processing and manipulation. Experience with Apache Spark (PySpark): In-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big Data Technologies: Familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data Engineering Skills: Strong understanding of ETL pipelines, data modeling, and data warehousing concepts. SQL Proficiency: Experience writing optimized SQL queries for large-scale data analysis and transformation. Data Processing Frameworks: Knowledge of data processing libraries such as Pandas, NumPy, and Dask. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.

Posted 2 months ago

Apply

3 - 6 years

5 - 8 Lacs

Udaipur

Work from Office


Data Engineer. Role: Data Engineer. Industry: Software/IT. Location: Udaipur (Rajasthan). Job Type: Full Time. Experience: 3-6 years. Skills: ETL/ELT processes, data ingestion, cloud platforms, big data tools, CI/CD pipelines. Salary: Best in the industry. Education: BTech (CS/IT/EC), BCA, MCA. Description: 3+ years of experience in a data engineering or related role, managing complex data workflows and large-scale datasets. Expertise in SQL, with the ability to write, optimize, and troubleshoot complex queries for relational databases like MySQL, PostgreSQL, Oracle, or Snowflake. Proficiency in Python, including libraries like Pandas, PySpark, and others used for data manipulation and processing. Experience with big data tools such as Apache Spark, Hadoop, or Kafka. Familiarity with cloud platforms (AWS, Azure, GCP) and their data processing services (e.g., AWS Glue, Google BigQuery, Azure Data Factory). Strong understanding of data modeling, normalization, and schema design. Experience in using version control systems like Git for collaborative development. Knowledge of CI/CD pipelines for data workflows is a plus. Strong problem-solving skills with attention to detail and the ability to debug data issues efficiently. Design, build, and maintain scalable data pipelines and architectures for large-scale data processing. Develop and optimize ETL/ELT processes for data ingestion, transformation, and loading. Collaborate with data analysts and scientists to ensure data accessibility and usability. Monitor and improve data quality and system performance. Implement best practices for data governance, security, and compliance. Work with cross-functional teams to integrate data systems with existing and new technology stacks. Utilize distributed computing tools (e.g., Spark, Hadoop) and cloud platforms (AWS, Azure, GCP) for efficient data handling. Develop automated workflows for data validation and reporting.

Posted 2 months ago

Apply

0 - 2 years

2 - 4 Lacs

Udaipur

Work from Office


Python Developer. Role: Python Developer. Industry: IT/Software. Location: Udaipur (Rajasthan). Job Type: Full Time. Experience: Freshers - 2 years. Skills: Python, database libraries, data architecture and dimension modeling, ETL/ELT frameworks. Salary: Best in the industry. Education: BTech (CS/IT/EC). Description: Execution of data architecture and data management projects for both new and established data sources. Innovate and contribute to the development of the client's data platforms using Python. Familiarity with transitioning existing data sets and databases to a new technology stack is helpful. Manage the end-to-end process for data ingestion and publishing. Perform data loads and data quality analysis to identify potential errors within the data platform. Work closely with operation teams to understand data flow and architecture, and gather functional requirements. Experience in a data production environment, with a focus on adeptly managing vast volumes of intricate data. Hands-on experience in SQL programming, data architecture, and dimension modeling. Expertise in Python programming, showcasing deep knowledge of libraries such as Beautiful Soup, Selenium, Requests, Pandas, data structures, and algorithms. Proficiency in crafting efficient, reusable, and modular code. In-depth knowledge of RDBMS with the ability to design and optimize complex SQL queries. Relational database experience with MySQL, Postgres, Oracle, or Snowflake is preferred. Expertise in mapping, standardizing, and normalizing data. Knowledge of ETL/ELT frameworks and writing pipelines for loading millions of records is helpful. Use of version control systems like Git, effectively managing code repositories. Strong analytical skills for addressing complex technical challenges, including proficiency in debugging and performance optimization techniques. Showcase a thorough understanding of the software development lifecycle, from requirements analysis to testing and deployment.

Posted 2 months ago

Apply

5 - 7 years

7 - 11 Lacs

Pune

Work from Office


We are looking for an experienced Senior Data Scientist to join our analytics team and lead high-impact projects. This role requires a deep understanding of statistical analysis, machine learning, and data-driven strategy, with a focus on developing and delivering innovative analytics solutions to meet complex business needs. The ideal candidate will have extensive hands-on experience, strong technical skills, and proven leadership capabilities to guide a team of data scientists and machine learning engineers. Key Responsibilities: Statistical Analysis & Use Case Identification: Utilize advanced statistical techniques to analyze large datasets, uncovering trends and patterns that support data-driven decision-making. Identify and define use cases to support business strategies, using insights from data to generate value and guide strategic direction. Data Analytics Maturity Assessment: Assess the organization's data analytics maturity, utilizing descriptive data analysis to evaluate current capabilities. Develop and propose use cases for advanced analytics, including diagnostic, predictive, and prescriptive models to enhance data utilization. Technical Expertise: Apply hands-on expertise in Python and data science, employing a variety of machine learning algorithms for tasks like regression, classification, natural language processing (NLP), and operational research. Implement, optimize, and maintain complex models, ensuring they are scalable and effective. Advanced Machine Learning: Leverage experience with large language models (LLMs) and Retrieval-Augmented Generation (RAG) models to build sophisticated applications, including chatbots and other NLP-based solutions. Communication & Presentation: Demonstrate excellent communication skills to convey complex analytics insights to stakeholders. Lead customer meetings, presenting analytical findings and recommending actionable, data-driven solutions. Solution Development: Develop solutions aligned with business objectives by translating business requirements into impactful, data-driven solutions. Foster a solution-oriented mindset within the team, focusing on delivering analytics solutions that enhance decision-making and drive value. Team Management: Lead a team of data scientists and machine learning engineers, providing mentorship, technical guidance, and project oversight. Ensure alignment of team projects with business goals, maintaining a focus on quality and timely delivery. Experience: At least 5-7 years of hands-on experience in data science, machine learning, and statistical analysis. Proven track record in managing and mentoring a team of data scientists and engineers. Extensive experience with Python and essential libraries (e.g., Pandas, NumPy, Scikit-Learn, TensorFlow, PyTorch). Skills: Strong knowledge of machine learning techniques, including regression, classification, NLP, and operational research. Experience with deep learning, LLMs, and RAG models for advanced applications like chatbot development is highly desirable. Exceptional problem-solving skills, with an emphasis on delivering actionable business insights. Excellent communication, presentation, and interpersonal skills. Ability to work both independently and collaboratively in a fast-paced, dynamic environment.

Posted 2 months ago

Apply

1 - 2 years

2 - 3 Lacs

Mohali

Work from Office


Role & responsibilities: Develop backend applications using Python, Django, and FastAPI. Solve complex problems with Data Structures & Algorithms (DSA). Design and implement REST APIs. Manage and optimize SQL queries. Work on system design and contribute to building products from scratch. Handle large data sets efficiently and ensure performance. Preferred candidate profile: Proficient in Python, Django, and SQL. Strong understanding of Data Structures & Algorithms (DSA). Experience with REST APIs and FastAPI. Experience in System Design and High-Level Design (HLD). Ability to build and scale products from scratch. Strong problem-solving and communication skills. Experience required: 1-2 yrs. Perks and benefits: 5-day working week, creative work culture, flexible working hours, excellent benefits.

Posted 2 months ago

Apply

5 - 8 years

10 - 20 Lacs

Bengaluru

Hybrid


Python developer with a strong understanding of Python libraries and frameworks, proficiency in SQL, and experience with relational databases (e.g., MySQL, PostgreSQL, SQL Server). Strong Python coding skills. Responsibilities and Deliverables: Develop and implement a parameter-driven reporting platform using Python and SQL. Design and build data models and queries to support dynamic report generation. Collaborate with stakeholders to gather and understand reporting requirements and translate them into technical specifications. Create and maintain scripts to automate data extraction, transformation, and loading processes. Develop and integrate modules to export reports in various formats, including Excel, PowerPoint, and PDF. Ensure the platform is scalable, efficient, and user-friendly. Conduct testing and debugging to ensure the accuracy and reliability of reports. Provide documentation and training materials for end-users and team members. Work closely with the project manager to ensure timely delivery of project milestones. Troubleshoot and resolve any issues related to the reporting platform. Technical Requirements: Bachelor's degree in Computer Science, Information Technology, or a related field, with 5 years of proven experience as a Python developer and a strong understanding of Python libraries and frameworks. Good communication/presentation skills and excellent stakeholder management. Experience in a global organization working with stakeholders in different geographies. Proficiency in SQL and experience with relational databases (e.g., MySQL, PostgreSQL, SQL Server). Experience in developing reporting platforms or similar applications. Familiarity with libraries and tools for generating reports in different formats (e.g., Pandas, Matplotlib, ReportLab, XlsxWriter, python-pptx). Strong problem-solving skills and attention to detail. Ability to work independently and manage time effectively in a temporary project setting. Knowledge of data visualization tools and techniques is a plus. If interested, please share your updated CV to manikandan.nataraj@thakralone.in
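Since this posting names Pandas and XlsxWriter for report export, here is a minimal, hedged sketch of one parameter-driven Excel export; the DataFrame columns, the `region` parameter, and the file name are invented for illustration.

```python
import pandas as pd

def build_report(df: pd.DataFrame, region: str, output_path: str) -> None:
    """Filter by a report parameter and write a multi-sheet Excel file."""
    subset = df[df["region"] == region]                       # parameter-driven filter
    summary = subset.groupby("product", as_index=False)["sales"].sum()

    with pd.ExcelWriter(output_path, engine="xlsxwriter") as writer:
        subset.to_excel(writer, sheet_name="Detail", index=False)
        summary.to_excel(writer, sheet_name="Summary", index=False)

# Example usage with placeholder data:
data = pd.DataFrame({
    "region": ["North", "South", "North"],
    "product": ["A", "B", "A"],
    "sales": [100, 250, 175],
})
build_report(data, region="North", output_path="north_report.xlsx")
```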

Posted 2 months ago

Apply

4 - 7 years

15 - 25 Lacs

Pune

Hybrid


So, what's the role all about? We are looking for a Senior Risk & Fraud Data Analyst to play a pivotal role in safeguarding financial institutions from fraudulent activities. This role focuses on leveraging advanced analytics and machine learning know-how to build and optimize fraud detection classification models and strategies and support our clients and banking partners in fraud mitigation. As a key member of our team, you'll have the opportunity to work at the intersection of data-driven research, risk management, business strategy, and client relations, contributing to the safety and scalability of our platform. How will you make an impact? Analyze transaction data and customer behavior to identify patterns indicative of fraudulent activity. Develop and deploy data-driven fraud prevention rules, logic, and models to mitigate risk effectively. Continuously monitor and refine existing fraud detection models based on emerging trends and feedback. Design and implement automation solutions to enhance fraud monitoring processes and support scalability. Communicate with clients and partner banks to align on fraud prevention strategies and share actionable insights. Work with product, dev, and other teams in the unit to lead and deliver strategic and complex projects in the fraud prevention domain. Have you got what it takes? At least 5 years of experience in data analysis, preferably in the risk or fraud domain within the fintech industry. High proficiency in SQL – Must. Experienced in writing complex queries and analyzing large datasets. High proficiency in Python for data analysis – Must. Skilled in using Python libraries such as Pandas for data analysis and research, automation, and developing fraud detection logic. High proficiency in English and strong written and verbal communication skills to effectively interact with clients, partners, and internal teams. Advantage: Understanding of US financial systems – familiarity with banking, payments, or e-commerce systems and processes. Advantage: Experience with financial fraud prevention strategies. Advantage: Ability to work independently, take initiative, and drive projects from ideation to completion. A great team player with a desire to mentor junior analysts and data scientists. What's in it for you? Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr! Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 6464 Reporting into: Manager Role Type: Individual Contributor

Posted 2 months ago

Apply

6 - 10 years

16 - 18 Lacs

Bengaluru

Work from Office


Key Responsibilities: Collect, clean, and preprocess large datasets for analysis. Develop and implement data visualization and predictive modeling. Design and optimize data pipelines for efficient processing. Perform hypothesis testing and statistical analysis to derive insights. Collaborate with cross-functional teams to solve business challenges using data science techniques. Present findings and recommendations through reports and visualizations. Stay updated with advancements in data science, AI, and analytics tools.

Posted 2 months ago

Apply

3 - 6 years

3 - 8 Lacs

Gurgaon

Hybrid


Experience with Python (Pandas, NumPy), SQL, ETL processes, Spark and PySpark, Oracle & PostgreSQL, and familiarity with the Big Data ecosystem and its components. Good knowledge of database principles, processes, structures, and theories.

Posted 2 months ago

Apply

5 - 10 years

15 - 25 Lacs

Chennai

Work from Office


Hi Techies, Wishes from GSN!!! Pleasure connecting with you!!! We have been in Corporate Search Services, identifying and bringing in stellar, talented professionals for our reputed IT / Non-IT clients in India. We have been successfully serving the varied needs of our clients for the last 20 years. At present, GSN is hiring PYTHON DEVELOPERs for one of our leading MNC clients. PFB the details for your better understanding: 1. WORK LOCATION: Chennai 2. Job Role: Python Developer 3. EXPERIENCE: 5+ yrs 4. CTC Range: 15 LPA - 25 LPA 5. Work Type: WFO (All 5 Days) ****** Looking for SHORT JOINERs ****** Job Description: Experience as a Python Developer with a strong portfolio of projects. Knowledge of popular Python frameworks such as Django, Flask, etc. Experience in creating & maintaining APIs. Good working understanding of cloud platforms, specifically Azure. Working knowledge of various Azure components like Blob, Azure Functions, App Service, App Insights, Azure Data Factory, etc. Working knowledge of data analytics using Azure Databricks; schedule & orchestrate workloads using Databricks. Familiarity with Azure SQL Database & SQLAlchemy. Good problem-solving ability with solid communication and collaboration skills. Familiar with working in an agile team & Agile practices. Unit testing frameworks like pytest/unittest. ****** Looking for SHORT JOINERs ****** Click APPLY ONLINE for an IMMEDIATE response. Thanks & Regards, Shobana, GSN Consulting. Email: shobana@gsnhr.net Mob: 8939666294 Web: www.gsnhr.net Google Review: https://g.co/kgs/UAsF9W

Posted 2 months ago

Apply

5 - 10 years

10 - 20 Lacs

Hyderabad

Remote


Job Title: Data Engineer - Medical Data Collection and Aggregation. Job Summary: We are seeking a skilled Data Engineer to join our team, focusing on the collection and aggregation of medical data from diverse sources to fine-tune our models. The ideal candidate will have a strong background in data engineering, with an emphasis on accuracy and reliability in the medical domain. Key Responsibilities: Data Pipeline Development: Design, develop, and maintain scalable data pipelines to collect and process medical data from various sources. Data Integration: Aggregate and integrate data from multiple systems, ensuring consistency and quality. Collaboration: Work closely with AI/ML Engineers and domain experts to understand data requirements and ensure the availability of high-quality data for model fine-tuning. Data Quality Assurance: Implement data validation and cleansing procedures to maintain data accuracy and integrity. Documentation: Maintain comprehensive documentation of data sources, methodologies, and pipeline processes. Qualifications: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. Minimum of 5 years of proven experience as a Data Engineer, preferably in the medical or healthcare domain. Strong proficiency in Python and experience with data engineering libraries such as: Pandas: for data manipulation and analysis. NumPy: for numerical computing. SQLAlchemy: for database interactions. Apache Airflow: for workflow automation. Beautiful Soup: for web scraping. Experience with data pipeline and workflow management tools. Familiarity with database systems (SQL and NoSQL). Understanding of data privacy regulations and best practices in handling sensitive medical data. Preferred Skills: Experience with cloud platforms and services related to data processing. Knowledge of machine learning frameworks and model fine-tuning processes. Excellent problem-solving skills and attention to detail. Why Join Us? Lead and work on cutting-edge projects in the chemical industry. Collaborate with a team of top tech professionals. Opportunity for career growth in a dynamic and challenging environment. How to Apply: Send your updated resume to: Primary Email: ankitha.reddy@ekshvaku.com CC: nohita.tammareddy@ekshvaku.com, sreemith.kushal@ekshvaku.com
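A small sketch of the validation and cleansing step this posting describes, using Pandas; the column names and rules below are illustrative placeholders, not a real medical schema.

```python
import pandas as pd

def validate_records(df: pd.DataFrame) -> pd.DataFrame:
    """Apply basic quality checks before records enter the aggregation pipeline."""
    df = df.drop_duplicates(subset=["patient_id", "visit_date"])
    df["visit_date"] = pd.to_datetime(df["visit_date"], errors="coerce")

    # Flag rows failing simple rules rather than silently dropping them,
    # so quality issues can be reported back to the source system.
    df["is_valid"] = (
        df["patient_id"].notna()
        & df["visit_date"].notna()
        & df["age"].between(0, 120)
    )
    return df

# Example with invented data (the invalid date and missing id get flagged):
sample = pd.DataFrame({
    "patient_id": [101, 101, None],
    "visit_date": ["2024-01-05", "2024-01-05", "2024-02-30"],
    "age": [34, 34, 29],
})
print(validate_records(sample))
```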

Posted 2 months ago

Apply

0 - 1 years

7 - 11 Lacs

Kochi

Work from Office


Collect business requirements and translate them into machine learning solutions. Prepare data for ML applications. Build ML models using supervised, unsupervised, or deep learning techniques. Optimize AI solutions for latency, speed, and accuracy, ensuring they are performant under high demand. Develop and integrate models into existing systems, ensuring smooth operation in the production environment. Stay up-to-date with the latest AI/ML trends and apply new techniques to improve existing systems. Collaborate with other team members. Requirements and Skills Should be interested to work in a startup culture/ecosystem. Willingness to learn with a "Never Die Attitude." Ability to work independently as well as in a team. Skilled in LLaMA models and transformer-based architectures, with experience in fine-tuning and adapting LLMs for specific tasks. Strong understanding of LLMs, RAG, and vector databases. Skilled in Python and ML libraries like TensorFlow, PyTorch, scikit-learn, and experience with data manipulation libraries (e.g., Pandas, NumPy). Knowledge of deploying ML models in production environments and familiarity with MLOps tools like Docker, Kubernetes, and CI/CD platforms. Proficiency in generative AI methodologies such as text generation, style transfer, speech recognition, image synthesis, and familiarity with tools and diffusion models. Strong commitment to ethical AI practices, including transparency, fairness, bias mitigation, and compliance with data privacy laws. Expertise in scaling machine learning models for high-traffic environments.

Posted 2 months ago

Apply

3 - 6 years

15 - 20 Lacs

Hyderabad

Work from Office


Job Overview: The person will be responsible for expanding and optimizing our data and data pipeline architecture. The ideal candidate is an experienced data pipeline builder who enjoys optimizing data systems and building them from the ground up. You'll be responsible for: Create and maintain optimal data pipeline architecture. Assemble large, complex data sets that meet functional / non-functional business requirements. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and cloud technologies. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics. Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader. Work with data and analytics experts to strive for greater functionality in our data systems. What you'd have: We are looking for a candidate with 3+ years of experience in a Data Engineer role, who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools: Experience with data pipeline and workflow management tools: Apache Airflow, NiFi, Talend, etc. Experience with relational SQL and NoSQL databases, including ClickHouse, Postgres, and MySQL. Experience with stream-processing systems: Storm, Spark Streaming, Kafka, etc. Experience with object-oriented/object function scripting languages: Python, Scala, etc. Experience building and optimizing data pipelines, architectures, and data sets. Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases. Strong analytic skills related to working with unstructured datasets. Build processes supporting data transformation, data structures, metadata, dependency, and workload management. Working knowledge of message queuing, stream processing, and highly scalable data stores. Why join us? Impactful Work: Play a pivotal role in safeguarding Tanla's assets, data, and reputation in the industry. Tremendous Growth Opportunities: Be part of a rapidly growing company in the telecom and CPaaS space, with opportunities for professional development. Innovative Environment: Work alongside a world-class team in a challenging and fun environment, where innovation is celebrated. Tanla is an equal opportunity employer. We champion diversity and are committed to creating an inclusive environment for all employees. www.tanla.com

Posted 2 months ago

Apply

4 - 7 years

5 - 15 Lacs

Pune, Mumbai (All Areas)

Hybrid


Job Title: Sr. Data Scientist. Location: Mumbai / Pune. Job Type: Full-time. Experience: 4-7 years. About the Role: We are seeking a Sr. Data Scientist with experience in Computer Vision to join our team. You will be responsible for building and maintaining models including, but not limited to, Vision Transformer neural networks. This role involves collaborating with data engineers and backend developers to deliver quality AI solutions. Responsibilities: Design and implement end-to-end machine learning workflows for image and computer vision applications, from data collection to model deployment. Collaborate with cross-functional teams, including data engineers, product managers, and domain experts, to define and prioritize machine learning initiatives. Document technical designs and model specifications, ensuring clarity and accessibility for stakeholders and team members. Ensure adherence to best practices in model development, deployment, and monitoring, in alignment with the overall AI strategy. Monitor model performance and implement strategies for continuous improvement and retraining as needed. Develop scalable and efficient deep learning models using PyTorch, optimizing for performance and resource utilization. Qualifications: Bachelor's degree in Computer Science or a related field. 4-7 years of hands-on experience in developing and deploying machine learning models, particularly in computer vision tasks. Proficient in using PyTorch for developing deep learning models, with a strong understanding of CNNs, transfer learning, vision transformers, and data augmentation techniques. Solid understanding of computer vision concepts, including image classification, object detection, and image segmentation. Strong programming skills in Python, with experience in data manipulation libraries such as NumPy and Pandas. Experience with version control systems like Git. Excellent analytical and problem-solving skills, strong communication abilities, and a collaborative mindset. Preferred Qualifications: Experience with cloud platforms (e.g., AWS, GCP, Azure) and their ML services, particularly those related to model deployment and GPU training. Understanding of MLOps principles and practices, including model monitoring, versioning, and governance. Knowledge of GPU computing and tools for managing GPU resources (e.g., CUDA, cuDNN).

Posted 2 months ago

Apply

Exploring Pandas Jobs in India

The job market for pandas professionals in India is on the rise as more companies are recognizing the importance of data analysis and manipulation in making informed business decisions. Pandas, a popular Python library for data manipulation and analysis, is a valuable skill sought after by many organizations across various industries in India.
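For readers new to the library, here is a minimal taste of day-to-day pandas work; the dataset below is invented purely for illustration.

```python
import pandas as pd

# A small invented dataset: job postings by city, minimum experience, and salary.
jobs = pd.DataFrame({
    "city": ["Bangalore", "Pune", "Bangalore", "Hyderabad"],
    "min_exp": [5, 2, 1, 3],
    "salary_lpa": [18, 14, 4, 15],
})

# Filter, aggregate, and sort in a few lines.
senior = jobs[jobs["min_exp"] >= 3]
by_city = senior.groupby("city")["salary_lpa"].mean().sort_values(ascending=False)
print(by_city)
```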

Top Hiring Locations in India

Here are 5 major cities in India actively hiring for pandas roles:
1. Bangalore
2. Mumbai
3. Delhi
4. Hyderabad
5. Pune

Average Salary Range

The average salary range for pandas professionals in India varies based on experience levels. Entry-level positions can expect a salary ranging from ₹4-6 lakhs per annum, while experienced professionals can earn upwards of ₹12-18 lakhs per annum.

Career Path

Career progression in the pandas domain typically involves moving from roles such as Junior Data Analyst or Data Scientist to Senior Data Analyst, Data Scientist, and eventually to roles like Tech Lead or Data Science Manager.

Related Skills

In addition to pandas, professionals in this field are often expected to have knowledge or experience in the following areas:
- Python programming
- Data visualization tools like Matplotlib or Seaborn
- Statistical analysis
- Machine learning algorithms

Interview Questions

Here are 25 interview questions for pandas roles:
- What is pandas in Python? (basic)
- Explain the difference between Series and DataFrame in pandas. (basic)
- How do you handle missing data in pandas? (basic)
- What are the different ways to create a DataFrame in pandas? (medium)
- Explain groupby() in pandas with an example. (medium)
- What is the purpose of pivot_table() in pandas? (medium)
- How do you merge two DataFrames in pandas? (medium)
- What is the significance of the inplace parameter in pandas functions? (medium)
- What are the advantages of using pandas over Excel for data analysis? (advanced)
- Explain the apply() function in pandas with an example. (advanced)
- How do you optimize performance in pandas operations for large datasets? (advanced)
- What is method chaining in pandas? (advanced)
- Explain the working of the cut() function in pandas. (medium)
- How do you handle duplicate values in a DataFrame using pandas? (medium)
- What is the purpose of the nunique() function in pandas? (medium)
- How can you handle time series data in pandas? (advanced)
- Explain the concept of multi-indexing in pandas. (advanced)
- How do you filter rows in a DataFrame based on a condition in pandas? (medium)
- What is the role of the read_csv() function in pandas? (basic)
- How can you export a DataFrame to a CSV file using pandas? (basic)
- What is the purpose of the describe() function in pandas? (basic)
- How do you handle categorical data in pandas? (medium)
- Explain the role of the loc and iloc functions in pandas. (medium)
- How do you perform text data analysis using pandas? (advanced)
- What is the significance of the to_datetime() function in pandas? (medium)
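As a quick refresher before tackling questions like these, here is a compact sketch touching several of the listed topics (missing data, merging, groupby, and loc/iloc); the data is made up for illustration.

```python
import numpy as np
import pandas as pd

orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "customer_id": [10, 10, 20, 30],
    "amount": [250.0, np.nan, 100.0, 75.0],
})
customers = pd.DataFrame({
    "customer_id": [10, 20],
    "city": ["Mumbai", "Delhi"],
})

# Handling missing data: fill the missing amount with the median.
orders["amount"] = orders["amount"].fillna(orders["amount"].median())

# Merging two DataFrames: a left join keeps orders with no matching customer.
merged = orders.merge(customers, on="customer_id", how="left")

# groupby aggregation, keeping the "unknown city" group visible.
spend_by_city = merged.groupby("city", dropna=False)["amount"].sum()

# Label-based vs position-based selection.
first_by_label = merged.loc[0, ["order_id", "amount"]]
first_by_position = merged.iloc[0, :2]

print(spend_by_city)
print(first_by_label)
print(first_by_position)
```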

Prepare and Apply Confidently

As you explore pandas jobs in India, remember to enhance your skills, stay updated with industry trends, and practice answering interview questions to increase your chances of securing a rewarding career in data analysis. Best of luck on your job search journey!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies