
1583 Pandas Jobs - Page 22

Set up a Job Alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

2.0 - 6.0 years

0 Lacs

Kerala

On-site

As a Data Analyst at our company based in Trivandrum, you will be responsible for analyzing large volumes of log data from NGINX server logs to identify user behavior patterns, anomalies, and security events. Your role will involve interpreting various fields such as IP addresses, geolocation data, user agents, request paths, status codes, and request times to derive meaningful insights. Collaboration with AI engineers is crucial as you will work together to propose relevant features based on log behavior and traffic patterns. Your responsibilities will include validating engineered features, conducting exploratory data analysis, and ensuring the quality and alignment of feature logic with real-world HTTP behavior and use cases. Furthermore, you will be involved in developing data visualizations to represent time-series trends, geo-distributions, and traffic behavior. Your collaboration with the frontend/dashboard team will be essential in defining and testing visual requirements and anomaly indicators for real-time dashboards. In addition to your analytical tasks, you will also be responsible for identifying and addressing gaps, inconsistencies, and errors in raw logs to ensure data quality. Creating documentation that explains observed behavioral patterns, feature assumptions, and traffic insights for knowledge sharing within the ML and security team will also be part of your role. The minimum qualifications for this position include a Bachelor's degree in Computer Science, Information Systems, Data Analytics, Cybersecurity, or a related field, along with at least 2 years of experience in data analysis or analytics roles. Proficiency in SQL, Elasticsearch queries, Python for data analysis, and experience working with web server logs or structured event data are required. Strong analytical thinking skills are essential to break down complex log behavior into patterns and outliers. It would be beneficial if you have familiarity with web security concepts, experience with log analytics platforms, an understanding of feature engineering concepts in ML pipelines, or experience working on anomaly detection or security analytics systems. This is a full-time position with benefits such as health insurance and Provident Fund, with a day shift schedule from Monday to Friday. If you possess the necessary qualifications and experience, we look forward to receiving your application.,
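
For illustration only, here is a minimal Pandas sketch of the kind of NGINX log analysis this role describes: parsing combined-format access-log lines and deriving simple per-IP features. The sample log lines, regex, and error-rate feature are assumptions for the example, not the employer's actual pipeline.

```python
# Minimal sketch (assumed log format and feature choices) of NGINX access-log
# analysis with Pandas: parse lines, then compute per-IP request counts and
# error rates as candidate anomaly features.
import re
import pandas as pd

LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<bytes>\d+) '
    r'"(?P<referrer>[^"]*)" "(?P<user_agent>[^"]*)"'
)

def parse_access_log(lines):
    """Parse NGINX combined-format log lines into a DataFrame."""
    rows = [m.groupdict() for m in map(LOG_PATTERN.match, lines) if m]
    df = pd.DataFrame(rows)
    df["status"] = df["status"].astype(int)
    df["time"] = pd.to_datetime(df["time"], format="%d/%b/%Y:%H:%M:%S %z")
    return df

sample = [
    '203.0.113.7 - - [10/Jul/2024:13:55:36 +0000] "GET /login HTTP/1.1" 401 512 "-" "curl/8.0"',
    '198.51.100.2 - - [10/Jul/2024:13:55:37 +0000] "GET /index.html HTTP/1.1" 200 1024 "-" "Mozilla/5.0"',
]
df = parse_access_log(sample)

# Per-IP request counts and 4xx/5xx error rates as simple candidate features.
features = df.assign(is_error=df["status"] >= 400).groupby("ip").agg(
    requests=("status", "size"), error_rate=("is_error", "mean")
)
print(features)
```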

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As a skilled Web Scraping Data Analyst, your primary responsibility will involve collecting, cleaning, and analyzing data from various online sources. You will leverage your expertise in Python-based scraping frameworks to design, develop, and maintain robust web scraping scripts using tools such as Python, BeautifulSoup, Scrapy, Selenium, and more. Additionally, you will be tasked with implementing IP rotation, proxy management, and anti-bot evasion techniques to ensure efficient data collection. Your role will be instrumental in constructing data pipelines that drive our analytics and business intelligence initiatives. Collaboration will be a key aspect of your work as you engage with data analysts and data engineers to integrate web-sourced data into internal databases and reporting systems. Furthermore, you will be involved in conducting exploratory data analysis (EDA) to derive valuable insights from the scraped data. It will be essential to adhere to website scraping policies, robots.txt guidelines, and relevant data privacy regulations to ensure compliance. To excel in this role, you should possess proficiency in Python and have experience with libraries like Requests, BeautifulSoup, Scrapy, and Pandas. Knowledge of proxy/VPN usage, IP rotation, and web traffic routing tools will be crucial for effective data collection. Familiarity with cloud platforms such as AWS, Azure, or GCP, as well as Linux-based environments, will be advantageous. Experience in deploying scraping scripts on edge servers or containerized environments and a solid understanding of HTML, CSS, JSON, and browser dev tools are also desirable skills. A strong analytical mindset coupled with experience in data cleansing, transformation, and visualization will be beneficial in handling large volumes of data and building efficient data pipelines. Proficiency in SQL and basic data querying will be necessary for data manipulation tasks. Preferred qualifications include experience with headless browsers like Puppeteer or Playwright, familiarity with scheduling tools like Airflow or Cron, and a background in data analytics or reporting using tools like Tableau, Power BI, or Jupyter Notebooks. This full-time role requires an in-person work location.,
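
As a hedged illustration of the scraping-to-analysis flow described above, the sketch below parses an HTML table with BeautifulSoup and loads it into a Pandas DataFrame. The HTML snippet stands in for a fetched page; a real scraper would fetch with Requests (respecting robots.txt, with proxy rotation and retries).

```python
# Self-contained sketch: BeautifulSoup parsing + Pandas cleaning.
# The HTML literal replaces a page that would normally be downloaded.
from bs4 import BeautifulSoup
import pandas as pd

html = """
<table id="prices">
  <tr><th>item</th><th>price</th></tr>
  <tr><td>widget</td><td>19.99</td></tr>
  <tr><td>gadget</td><td>4.50</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
rows = soup.select("table#prices tr")
header = [th.get_text(strip=True) for th in rows[0].find_all("th")]
records = [[td.get_text(strip=True) for td in tr.find_all("td")] for tr in rows[1:]]

df = pd.DataFrame(records, columns=header)
df["price"] = pd.to_numeric(df["price"])  # basic cleaning step before analysis
print(df.describe())
```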

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Delhi

On-site

As a Lead Data Scientist at our company, you will play a crucial role in our AI/ML team by leveraging your deep expertise in Generative AI, large language models (LLMs), and end-to-end ML engineering. Your responsibilities will involve designing and developing intelligent systems using advanced NLP techniques and modern ML practices. You will be a key player in building and optimizing ML pipelines and AI systems across various domains, as well as designing and deploying RAG architectures and intelligent chatbots. Collaboration with cross-functional teams to integrate AI components into scalable applications will be essential, along with providing technical leadership, conducting code reviews, and mentoring junior team members. You will drive experimentation with prompt engineering, agentic workflows, and domain-driven designs, while ensuring best practices in testing, clean architecture, and model reproducibility. To excel in this role, you must possess expertise in AI/ML, including Machine Learning, NLP, Deep Learning, and Generative AI (GenAI). Proficiency in working with the LLM stack, such as GPT, Chatbots, Prompt Engineering, and RAG, is required. Strong programming skills in Python, familiarity with essential libraries like Pandas, NumPy, and Scikit-learn, and experience with architectures like Agentic AI, DDD, TDD, and Hexagonal Architecture are essential. You should be comfortable with tooling and deployment using Terraform, Docker, REST/gRPC APIs, and Git, and have experience working on cloud platforms like AWS, GCP, or Azure. Familiarity with AI coding tools like Copilot, Tabnine, and hands-on experience with distributed training in NVIDIA GPU-enabled environments are necessary. A proven track record of managing the full ML lifecycle from experimentation to deployment is crucial for success in this position. Additionally, experience with vector databases, knowledge of GenAI frameworks like LangChain and LlamaIndex, contributions to open-source GenAI/ML projects, and skills in performance tuning of LLMs and custom models are considered advantageous. If you are passionate about leveraging AI technologies to deliver real-world solutions, we are excited to discuss how you can contribute to our cutting-edge AI/ML team.,
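
As a rough, non-authoritative sketch of the retrieval half of a RAG pipeline mentioned above, the example below uses TF-IDF similarity as a stand-in for a vector database to pick context documents and assemble a prompt. The corpus, question, and prompt format are invented for illustration.

```python
# Prototype retrieval step for a RAG-style flow: rank documents against a
# question, take the top-k, and compose a context-grounded prompt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 6pm IST.",
    "Premium plans include priority onboarding and a dedicated manager.",
]
question = "How long do customers have to return a product?"

vectorizer = TfidfVectorizer().fit(docs + [question])
doc_vecs = vectorizer.transform(docs)
q_vec = vectorizer.transform([question])

# Pick the top-k most similar documents as context for the generator model.
scores = cosine_similarity(q_vec, doc_vecs).ravel()
top_k = scores.argsort()[::-1][:2]
context = "\n".join(docs[i] for i in top_k)

prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to the LLM of choice
```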

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

As a Data Science Manager in the Research and Development (R&D) team at our organization, you will play a crucial role in driving innovation through advanced machine learning and AI algorithms. Your primary responsibility will involve conducting applied research, development, and validation of cutting-edge algorithms to address complex real-world problems on a large scale. You will collaborate closely with the product team to gain insights into business challenges and product objectives, enabling you to devise creative algorithmic solutions. Your role will entail creating prototypes and demonstrations to validate new ideas and transform research findings into practical innovations by collaborating with AI Engineers and software engineers. In addition, you will be responsible for formulating and executing research plans, carrying out experiments, documenting and consolidating results, and potentially publishing your work. It will also be essential for you to safeguard intellectual property resulting from R&D endeavors by working with relevant teams and external partners. Furthermore, part of your role will involve mentoring junior staff to ensure adherence to established procedures and collaborating with various stakeholders, academic/research partners, and fellow researchers to deliver tangible outcomes. To excel in this position, you are required to possess a strong foundation in computer science principles and proficient skills in analyzing and designing AI/Machine learning algorithms. Practical experience in several key areas such as supervised and unsupervised machine learning, reinforcement learning, deep learning, knowledge-based systems, evolutionary computing, probabilistic graphical models, among others, is essential. You should also be adept in at least one programming language and have hands-on experience in implementing AI/machine learning algorithms using Python or R. Familiarity with tools, frameworks, and libraries like Jupyter/Zeppelin, scikit-learn, matplotlib, pandas, Tensorflow, Keras, Apache Spark, etc., will be advantageous. Ideally, you should have at least 2-5 years of applied research experience in solving real-world problems using AI/Machine Learning techniques. Additionally, having a publication in a reputable conference or journal related to AI/Machine Learning or holding patents in the field would be beneficial. Experience in contributing to open-source projects within the AI/Machine Learning domain will be considered a strong asset. If you are excited about this challenging opportunity, please refer to the Job Code DSM_TVM for the position based in Trivandrum. For further details, feel free to reach out to us at recruitment@flytxt.com.,
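
The prototyping loop described above might start with something as small as the following scikit-learn sketch: a scaled linear baseline evaluated with cross-validation before heavier R&D investment. The bundled toy dataset and metric choice are assumptions for the example.

```python
# Quick baseline prototype: preprocessing + linear model + cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Scaling plus a linear baseline keeps the prototype fast and easy to reason about.
pipeline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")

print(f"5-fold ROC-AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```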

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Maharashtra

On-site

You will be part of an innovative Indian direct-to-consumer hair care brand, Traya, which offers holistic treatment for individuals dealing with hair loss. As a pivotal member of the team, you will be responsible for owning Key Business Metrics for the web and app platforms in close collaboration with the leadership. Your role demands a high level of autonomy and involves hunting for valuable insights in data which can be translated into projects to drive revenue for the organization. Your core competencies will include data analysis, visualization, SQL, Python, and Pandas, along with tagging implementation reporting and optimization to generate enhanced insights from data for improving conversion rates and platform engagement. Collaboration will be key as you work across various departments such as product, tech, marketing, content, and brand teams. Experience with tools like Google Analytics (GA4), Google Tag Manager, Branch, AppsFlyer, MoEngage will be advantageous but not mandatory. This role offers a fantastic opportunity to be part of a fast-growing D2C health brand focused on revolutionizing health solutions for both men and women in India. We are seeking a smart, passionate professional with a strategic mindset who is eager to contribute to the rapid growth of the organization. Basic requirements include a minimum of 5 years of relevant experience, familiarity with D2C brands, and proficiency in Python, PySpark, and SQL. Desired qualities for the role include analytical skills, curiosity, attention to detail, self-motivation, teamwork, problem-solving abilities, efficient prioritization, adaptability to a fast-paced environment, positive attitude, creative thinking, and a hunger for rapid learning. Joining Traya offers a host of benefits such as exponential learning opportunities, personal and professional growth, mentorship, autonomy in decision-making, a data-driven work environment, a fun and enthusiastic culture, and top-notch digital marketing exposure.,

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

Schrödinger is a science and technology leader with over 30 years of experience developing software solutions for physics-based and machine learning-based chemical simulations and predictive analyses. We are currently looking for an application-focused Materials Informatics & Optoelectronics Scientist to join our team in our mission to enhance human health and quality of life by leveraging advanced computational methods. As part of our Materials Science team, you will have the opportunity to engage in projects spanning optoelectronics, catalysis, energy storage, semiconductors, aerospace, and specialty chemicals.

The ideal candidate is a statistical and machine learning expert with strong problem-solving abilities and a materials science enthusiast familiar with RDKit, matminer, DScribe, or other informatics packages. Proficiency in Python programming and debugging, along with knowledge of machine learning packages such as Scikit-learn, Pandas, NumPy, SciPy, and PyTorch, is essential. You should also have experience extracting datasets using large language models (LLMs) or Optical Character Recognition (OCR) technologies, and you should enjoy collaborating with an interdisciplinary team in a fast-paced environment as a specialist in quantum chemistry or materials science.

As a Materials Informatics & Optoelectronics Scientist, your responsibilities will include researching, curating, and analyzing datasets from literature and other sources using advanced techniques such as LLMs and OCR. You will collaborate with domain experts to ensure data accuracy and quality, particularly for molecular structures, SMILES strings, and experimental measurements. Developing and validating predictive machine learning models for OLED devices and other optoelectronic applications will be a key aspect of your role. You will also communicate results, present ideas to the team, develop tools and workflows for integration into commercial software products, and validate existing Schrödinger machine learning products using public or internally generated datasets.

To qualify for this position, you should hold a PhD in Chemistry or Materials Science and have hands-on experience applying machine learning, neural networks, deep learning, data analysis, or chemical informatics to materials and complex chemicals. Familiarity with LLM and OCR technologies and dataset extraction for ML model development is also required.
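
Purely as an illustration of the SMILES-to-features-to-model workflow this role touches on, the sketch below computes a few RDKit descriptors and fits a small regression, assuming RDKit is installed. The SMILES strings and target values are dummy placeholders, not real optoelectronic data, and the descriptor choice is an assumption.

```python
# Toy materials-informatics sketch: SMILES -> RDKit descriptors -> regression.
from rdkit import Chem
from rdkit.Chem import Descriptors
import pandas as pd
from sklearn.linear_model import Ridge

records = [
    ("c1ccccc1", 1.2),        # benzene, dummy property value
    ("CCO", 0.4),             # ethanol, dummy property value
    ("c1ccc2ccccc2c1", 1.9),  # naphthalene, dummy property value
]

rows = []
for smiles, target in records:
    mol = Chem.MolFromSmiles(smiles)
    rows.append({
        "mol_wt": Descriptors.MolWt(mol),  # simple physicochemical descriptors
        "logp": Descriptors.MolLogP(mol),
        "tpsa": Descriptors.TPSA(mol),
        "target": target,
    })

df = pd.DataFrame(rows)
model = Ridge().fit(df[["mol_wt", "logp", "tpsa"]], df["target"])
print(model.predict(df[["mol_wt", "logp", "tpsa"]]))
```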

Posted 2 weeks ago

Apply

0.0 - 4.0 years

0 Lacs

Karnataka

On-site

As a Junior Full Stack Developer with a focus on Artificial Intelligence and Data Analytics, you will be an integral part of our team based in Bangalore. You will play a key role in the development and maintenance of web applications, collaborating with senior developers and data scientists to incorporate intelligent features that enhance our products. Your responsibilities will include assisting in the creation and upkeep of both front-end and back-end components, utilizing technologies such as HTML, CSS, JavaScript, and React basics for user interfaces. You will write efficient server-side code primarily in Python and support the integration of AI/Machine Learning models into our applications. Additionally, you will conduct data cleaning, analysis, and visualization to derive valuable insights from various datasets, while also contributing to database interactions and API development. To excel in this role, you should possess a Bachelor's degree in Computer Science, Data Science, Engineering, or a related field, or have equivalent practical experience. A solid understanding of web development concepts, familiarity with front-end frameworks like React.js, back-end languages such as Python or Node.js, and proficiency in Python programming are essential. Moreover, you should have knowledge of AI/Machine Learning principles, experience with data analysis libraries like Pandas and NumPy, and familiarity with data visualization tools/libraries such as Matplotlib, Seaborn, and D3.js basics. Your skills should also include a basic understanding of SQL for database querying, experience with version control systems like Git, and knowledge of cloud services such as AWS, Azure, or GCP for efficient API usage. Strong analytical and problem-solving capabilities, excellent communication skills, and a proactive attitude towards learning and growth in a dynamic environment are highly valued. This is a full-time position that requires your presence in the office from Monday to Friday. As we are looking for immediate joiners, please share your total and relevant experience along with your expected compensation during the application process. Join us in this exciting opportunity to contribute to cutting-edge technology solutions at our Bangalore location.,

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Hyderabad, Telangana

On-site

The AI & Automation Analyst at SingleStore plays a crucial role within the Analytics, Enterprise Tech, and AI Enablement team, reporting to the head of Digital and AI Transformation. You will be an integral part of the new AI & Automation Center of Excellence, contributing to essential AI and automation solutions. Working collaboratively with teams across analytics, engineering, business, and IT, you will help design and implement initiatives that foster innovation, streamline operations, and enhance data-driven decision making. Your use of cutting-edge AI and automation technologies will be instrumental in addressing complex challenges, supporting SingleStore's enterprise digital transformation, and facilitating rapid AI adoption. Your efforts will drive immediate business successes and pave the way for future improvements in operational and technical capabilities.

In this role, you will engage with internal stakeholders from different business units and functional departments to gather requirements and translate them into actionable AI and automation initiatives. You will collaborate closely with data scientists, engineers, and IT professionals to design, build, and implement scalable AI-driven solutions. Identifying and analyzing opportunities for process optimization, business automation, and operational improvement will be a key aspect of your role. You will lead the testing, validation, and iteration of AI and automation projects to ensure robust performance, accuracy, and scalability in real-world environments. Monitoring implemented solutions, collecting feedback, and driving ongoing improvements while keeping abreast of advancements in AI technologies and best practices will be vital. Maintaining comprehensive technical documentation, preparing regular progress reports, and upholding best practices in data privacy, regulatory compliance, and ethical AI usage are also essential components of this position.

Required Qualifications:
- Bachelor's degree in Computer Science, Data Science, Engineering, or a related field.
- Minimum of 2 years of experience in AI, machine learning, data science, data analysis, or automation roles.
- Proficiency in at least one programming language (e.g., Python) and experience using data analysis libraries such as NumPy, Pandas, Matplotlib, and Scikit-learn.
- Strong understanding of data analysis, statistical concepts, and data visualization techniques.
- Familiarity with data processing tools and platforms (e.g., SQL).
- Ability to distill complex data into actionable insights and communicate technical concepts clearly to diverse audiences.
- Experience working in collaborative, cross-functional teams.
- Excellent stakeholder communication and expectation management skills.
- Eagerness to learn and adapt in a rapidly evolving technical environment.

Preferred Qualifications:
- Experience in SaaS/subscription-based business models.
- Familiarity with business systems (e.g., Salesforce, NetSuite).
- Hands-on experience integrating AI and automation into both modern and legacy systems.
- Demonstrable skills in problem-solving, project management, and continuous improvement in a growth-focused environment.

Benefits (company-wide):
- Technology stipend for new employees.
- Monthly cell phone and internet stipend.
- Health and wellness benefits.
- In-office catered lunches (where applicable) and monthly Grubhub credit.
- Company and team events.
- Flexible time off and volunteer time off.
- Stock options.

Benefits (United States):
- Comprehensive health benefits (medical, dental, vision).
- 401(k) retirement plan.

As SingleStore has employees in multiple countries, some benefits may vary to ensure equitable perks and benefits across all locations.

About SingleStore: SingleStore provides a cloud-native database offering speed and scalability to power data-intensive applications worldwide. Its distributed SQL database unifies transactional and analytical workloads, enabling leaders to deliver exceptional, real-time data experiences for customers. SingleStore is venture-backed and headquartered in San Francisco, with offices across North America, Europe, and Asia.

Diversity & Inclusion: SingleStore is committed to diversity and inclusion, valuing individuals who thrive in diverse teams and bring a broad range of perspectives to the organization.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Punjab

On-site

You should have 3-5 years of experience working as a Power BI Developer or in a similar data analytics role. The role is based in Mohali/Pune. As a Power BI Developer, you will develop and design Power BI dashboards, reports, and visualizations to support business decision-making. You will be responsible for extracting, transforming, and loading (ETL) data from multiple sources into Power BI using Power Query, Python, and SQL. Your role will involve optimizing the performance of Power BI reports to ensure efficiency in data processing and visualization. You will create and maintain data models with relationships, calculated columns, and measures using DAX. Additionally, you will utilize Python for data pre-processing, automation, and integration with Power BI. Working closely with business stakeholders, you will understand reporting requirements and deliver customized solutions. Ensuring data accuracy and consistency across reports and dashboards will be a key aspect of your responsibilities. You will also implement row-level security (RLS) and other security measures within Power BI reports. Integration of Power BI with other tools like Azure, Power Automate, Power Apps, and Python-based scripts may be required, so staying updated with the latest Power BI and Python features is essential for enhancing reporting capabilities. The ideal candidate should have strong proficiency in Power BI Desktop, Power Query, and Power BI Service along with hands-on experience with DAX (Data Analysis Expressions) and data modeling. Proficiency in SQL for querying and transforming data is necessary. Experience working with various data sources such as SQL Server, Excel, APIs, and cloud databases is required. Proficiency in Python for data analysis, automation, and integration with Power BI is a must. Familiarity with ETL processes, data warehousing concepts, and implementing Power BI security features like RLS is desired. Strong analytical and problem-solving skills are important for this role. Excellent communication skills are essential for interacting with stakeholders and translating business needs into reports. Hands-on experience in Pandas, NumPy, and Matplotlib for data analysis in Python is preferred. Experience integrating Python scripts within Power BI via Power Query or R/Python visuals is a plus. Experience with Azure Synapse, Power Automate, or Power Apps, as well as integrating Power BI with cloud platforms like AWS or Azure, will be beneficial. Familiarity with Agile/Scrum methodologies is an advantage. Qualifications required for this role include knowledge of machine learning concepts and predictive analytics using Python. Graduation is a mandatory requirement.,
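
As a hedged example of the Python-inside-Power-BI integration the role calls for, the sketch below shows a pre-processing step of the kind Power BI's "Run Python script" transform supports; Power BI conventionally exposes the incoming table as a pandas DataFrame named `dataset`, which is emulated here with a literal. The column names and cleaning rules are invented.

```python
# Sketch of a Python pre-processing step for Power BI (assumed columns/rules).
import pandas as pd

# Stand-in for the DataFrame Power BI would inject as `dataset`.
dataset = pd.DataFrame({
    "order_date": ["2024-01-03", "2024-01-04", None],
    "region": ["North", "north ", "South"],
    "revenue": ["1,250", "980", "1,100"],
})

cleaned = (
    dataset
    .dropna(subset=["order_date"])  # drop rows without a date
    .assign(
        order_date=lambda d: pd.to_datetime(d["order_date"]),
        region=lambda d: d["region"].str.strip().str.title(),  # normalise labels
        revenue=lambda d: pd.to_numeric(d["revenue"].str.replace(",", "")),
    )
)
# In Power BI, `cleaned` would appear as an output table of the script step.
print(cleaned.dtypes)
```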

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Maharashtra

On-site

Arista Networks, a leader in data-driven, client-to-cloud networking solutions, is seeking a Software Engineer to join the Wi-Fi Data team in the Pune Development Center. Arista has a strong commitment to excellence, open standards, and diversity, creating an inclusive environment where innovation thrives. As a Software Engineer at Arista, you will collaborate with Data Scientists to develop and maintain data and AI/ML pipelines for the Cognitive Wi-Fi solution. This role offers significant growth opportunities within a small yet impactful team. You will be responsible for building ELT data pipelines, working on anomaly detection, root cause analysis, automatic remediation, and analytics use cases. Furthermore, you will develop and manage CI/CD pipelines for deployment and have the chance to showcase your work through talks and blog posts. The ideal candidate for this role should have a Bachelor's degree in Computer Science or a related field, proficiency in Python or Go, experience with databases (Relational and/or NoSQL), and hands-on expertise with DevOps tools like Jenkins, Git, Docker, Kubernetes, and Ansible. Familiarity with data processing libraries such as Apache Beam and data manipulation tools like Pandas would be advantageous. Arista Networks is known for its engineering-centric culture, where engineers have ownership of projects and access to various domains within the company. The company is headquartered in Santa Clara, California, with development offices worldwide, including Australia, Canada, India, Ireland, and the US. Arista values quality, respect, and innovation, offering a flat management structure and prioritizing test automation tools. If you are looking to shape the future of networking and be part of a dynamic team that values invention and fun, Arista Networks offers a unique opportunity to make a significant impact in the Wi-Fi space. Join Arista Networks and contribute to the growth and success of the Pune Development Center while working on cutting-edge technologies that redefine networking scalability and agility.,
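
To illustrate the ELT-pipeline side of this role, here is a small Apache Beam sketch (run locally with the default DirectRunner, assuming apache-beam is installed) that filters and aggregates mock Wi-Fi events. The event fields, threshold, and aggregation are assumptions, not Arista's actual pipeline.

```python
# Toy ELT-style aggregation in Apache Beam: count weak-signal events per AP.
import apache_beam as beam

events = [
    {"ap": "ap-01", "client": "aa:bb", "rssi": -45},
    {"ap": "ap-01", "client": "cc:dd", "rssi": -80},
    {"ap": "ap-02", "client": "ee:ff", "rssi": -75},
]

with beam.Pipeline() as pipeline:  # DirectRunner by default
    (
        pipeline
        | "CreateEvents" >> beam.Create(events)
        | "WeakSignalOnly" >> beam.Filter(lambda e: e["rssi"] < -70)
        | "KeyByAP" >> beam.Map(lambda e: (e["ap"], 1))
        | "CountPerAP" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```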

Posted 2 weeks ago

Apply

6.0 - 11.0 years

8 - 12 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Role: Python Developer (OOPs)
Skills: Python, OOPs, Pandas, NumPy, Scikit-learn, SQL, Object-Oriented Programming
Experience: 6+ years
Location: Bangalore, Hyderabad, Chennai, Pune

About the Role: We are seeking a highly skilled and experienced Senior Python Developer with 5-12 years of experience to join our team. As a Senior Python Developer, you will be responsible for designing, developing, and maintaining high-quality, scalable, and robust Python applications, with a strong emphasis on Object-Oriented Programming principles. You will work closely with other developers, product managers, and stakeholders to deliver innovative and impactful solutions.

Responsibilities: Design, develop, and implement well-tested, reusable, and maintainable Python code, adhering to OOP best practices. Collaborate with cross-functional teams to define, design, and ship new features. Troubleshoot and debug complex issues to ensure optimal application performance. Participate in code reviews and contribute to team knowledge sharing. Stay up to date with the latest industry trends and technologies. Contribute to the development and maintenance of our core Python applications and APIs. Design and implement RESTful APIs and microservices. Optimize and debug Python code for performance. Support data analysis and machine learning projects using Python libraries (e.g., Pandas, NumPy, Scikit-learn). Work with relational databases (e.g., PostgreSQL) and data manipulation techniques. Apply familiarity with DevOps tools and practices. Contribute to the development and maintenance of our CI/CD pipelines. Mentor and guide junior developers.
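
Since the role stresses Object-Oriented Programming alongside Pandas, here is a brief, illustrative sketch of that style: a small, testable class wrapping a DataFrame workflow behind a clear interface. The column names and report logic are assumptions for the example.

```python
# Illustrative OOP wrapper around a Pandas workflow.
from dataclasses import dataclass
import pandas as pd


@dataclass
class SalesReport:
    """Encapsulates loading and summarising sales data."""

    orders: pd.DataFrame

    @classmethod
    def from_records(cls, records: list[dict]) -> "SalesReport":
        return cls(orders=pd.DataFrame(records))

    def revenue_by_region(self) -> pd.DataFrame:
        """Return total revenue per region, sorted descending."""
        return (
            self.orders
            .groupby("region", as_index=False)["revenue"].sum()
            .sort_values("revenue", ascending=False)
        )


report = SalesReport.from_records([
    {"region": "South", "revenue": 1200},
    {"region": "North", "revenue": 800},
    {"region": "South", "revenue": 300},
])
print(report.revenue_by_region())
```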

Posted 2 weeks ago

Apply

3.0 - 5.0 years

15 - 25 Lacs

Ahmedabad

Work from Office

About the Role: We are looking for a highly skilled and motivated Data Scientist with 3-5 years of experience to join our Data Innovation team. The ideal candidate will have strong analytical skills, hands-on experience in Computer Vision and Large Language Models (LLMs), and a proven track record of turning complex data into actionable insights and intelligent solutions. This role will involve working on high-impact AI solutions for unstructured data, including images and documents.

Designation: Data Scientist

Key Responsibilities: Design, develop, and deploy models for computer vision tasks such as object detection, image classification, OCR, and segmentation. Build and fine-tune LLM-based pipelines for tasks such as information extraction, document summarization, chat-based interfaces, and automated reasoning. Work closely with cross-functional teams (engineering, product, business) to translate business problems into data science solutions. Conduct exploratory data analysis, statistical modelling, and feature engineering across diverse data sources. Evaluate model performance using appropriate metrics and iterate for improvement. Stay up to date with the latest research and advancements in AI/ML, computer vision, and NLP.

Required Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or related fields. 3-5 years of hands-on experience in Data Science or Machine Learning roles. Strong proficiency in Python and ML libraries such as scikit-learn, pandas, PyTorch/TensorFlow, OpenCV, LangChain, etc. Experience with computer vision frameworks (e.g., YOLO, Detectron2, MMDetection) and LLMs (e.g., Hugging Face Transformers, LangChain, LLaMA, GPT). Experience with real-world model deployment and evaluation. Strong grasp of machine learning concepts, statistical analysis, and data manipulation. Excellent problem-solving, communication, and stakeholder collaboration skills.

Good to Have: Exposure to the lending, finance, or banking domain (e.g., credit scoring, income estimation, fraud detection). Experience with document AI or multi-modal AI systems. Familiarity with cloud platforms (AWS/GCP/Azure) and MLOps tools. Experience working with OCR engines (Tesseract, PaddleOCR, etc.) and integrating CV+LLM pipelines. Understanding of RAG pipelines, fine-tuning of open-source LLMs, or prompt engineering.

Location: Ahmedabad (on-site)
Experience: 3 to 5 years
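
As a minimal, hedged sketch of the computer-vision side of this role, the example below defines a tiny PyTorch CNN and runs one illustrative training step on random tensors; it stands in for a real detector or classifier trained on project data, and the shapes and class count are invented.

```python
# Tiny image-classification sketch: forward pass and one training step.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = TinyClassifier()
images = torch.randn(4, 3, 64, 64)   # a fake batch of 4 RGB images
labels = torch.tensor([0, 1, 2, 0])
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()                       # one illustrative backward pass
print(float(loss))
```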

Posted 2 weeks ago

Apply

2.0 - 5.0 years

4 - 9 Lacs

Gurugram, Chennai, Bengaluru

Work from Office

Experience in implementing and designing data pipelines using Azure Data Factory. Advanced knowledge of Azure Functions and Event Grid. Data processing using Azure AI services, Cognitive Services, and Azure Cognitive Search. Stakeholder management and project management.

Posted 2 weeks ago

Apply

2.0 - 7.0 years

14 - 18 Lacs

Gurugram

Work from Office

We are seeking a highly skilled and experienced Data Scientist to join our team based in Bangalore. As a Data Scientist, you will play a critical role in developing and implementing AI/ML solutions. You will be responsible for utilizing your expertise in Python, SQL, and frameworks such as Scikit-learn, TensorFlow, and PyTorch to deliver production-grade AI/ML projects. The ideal candidate will have 2+ years of experience in Data Science and AI/ML and a solid understanding of mathematical and statistical concepts. Experience in the US Healthcare domain and knowledge of Big Data and data streaming technologies are desirable.

Primary Responsibilities: Develop and implement AI/ML solutions using Python, pandas, NumPy, and SQL. Utilize frameworks such as Scikit-learn, TensorFlow, and PyTorch to build and deploy models. Perform exploratory data analysis (EDA), statistical analysis, and feature engineering to uncover insights and patterns in data. Build, tune, and evaluate machine learning models for predictive and prescriptive analytics. Conduct drift analysis to monitor model performance and ensure accuracy over time. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. Work on the full life cycle of AI/ML projects, including data preparation, model development, tuning, and deployment. Ensure the scalability, reliability, and efficiency of AI/ML solutions in production environments. Stay updated with the latest advancements in AI/ML techniques and tools, and identify opportunities to apply them to enhance existing solutions. Document and communicate findings, methodologies, and insights to technical and non-technical stakeholders. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field. 10+ years of overall experience with 5+ years in Data Science, AI/ML, or a similar role. Hands-on experience delivering production-grade AI/ML projects. Experience with the full life cycle of AI/ML projects, including EDA, model development, tuning, and drift analysis. Solid understanding of mathematical and statistical concepts. Solid programming skills in Python, with experience in pandas, NumPy, and SQL. Proficiency in frameworks such as Scikit-learn, TensorFlow, and PyTorch. Proven excellent problem-solving and analytical thinking skills. Proven solid communication and collaboration skills to work effectively in a team environment.

Preferred Qualifications: Knowledge of Big Data technologies (PySpark, Hadoop) and data streaming tools (Kafka, etc.). Familiarity with the US Healthcare domain.

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
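
One concrete form the drift analysis mentioned above can take is the Population Stability Index (PSI) between a training-time distribution and a recent production sample, sketched below with NumPy. The bin count and the 0.2 alert threshold are common rules of thumb, not values specific to this role.

```python
# PSI sketch for monitoring feature/score drift between training and production.
import numpy as np

def population_stability_index(expected, actual, bins: int = 10) -> float:
    """PSI between two 1-D samples using quantile bins from the expected data."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf      # catch out-of-range production values
    exp_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    act_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    exp_frac = np.clip(exp_frac, 1e-6, None)   # avoid log(0) and divide-by-zero
    act_frac = np.clip(act_frac, 1e-6, None)
    return float(np.sum((act_frac - exp_frac) * np.log(act_frac / exp_frac)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 5_000)     # training-time sample
prod_scores = rng.normal(0.3, 1.2, 5_000)      # shifted production sample
psi = population_stability_index(train_scores, prod_scores)
print(f"PSI = {psi:.3f} -> {'investigate drift' if psi > 0.2 else 'stable'}")
```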

Posted 2 weeks ago

Apply

7.0 - 12.0 years

11 - 16 Lacs

Bengaluru

Work from Office

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibility: Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications: BE/B.Tech with 7+ years of hands-on experience with Python, SQL, and APIs. Hands-on experience with DevOps practices, including CI/CD, version control (e.g., Git), and deployment automation. Extensive experience with Python programming, including knowledge of Python frameworks such as Pandas. Solid understanding of API design principles and experience in integrating third-party APIs. Solid proficiency in SQL and database management systems (DBMS) like MySQL, PostgreSQL, or Oracle.

Preferred Qualifications: Experience with containerization and orchestration tools like Docker and Kubernetes. Familiarity with cloud platforms like AWS, Azure, or Google Cloud.

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.

Posted 2 weeks ago

Apply

2.0 - 4.0 years

3 - 5 Lacs

Chennai

Work from Office

Role & responsibilities: Develop and maintain web applications using Python, Django, and Flask. Work with SQL and NoSQL databases. Collaborate with cross-functional teams to deliver high-quality solutions. Write clean, efficient, and well-documented code. Stay updated with emerging technologies.

Required Skills: Python, Python automation, Django, Flask, MySQL, SQL, NoSQL, SQLAlchemy, NumPy, Git, AWS, Pandas, REST API services. Strong debugging and deployment skills.

Good to Have: PHP, CodeIgniter. Basic front-end knowledge (HTML, CSS, JS).
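
As an illustrative sketch of the Flask + SQLAlchemy + REST API combination in the skills list, the example below exposes a tiny task resource backed by SQLite via Flask-SQLAlchemy. The model fields, routes, and database file are invented; a production service would separate config, models, and views.

```python
# Minimal Flask + Flask-SQLAlchemy REST sketch (assumed model and routes).
from flask import Flask, jsonify, request
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///tasks.db"
db = SQLAlchemy(app)


class Task(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(120), nullable=False)
    done = db.Column(db.Boolean, default=False)


@app.route("/tasks", methods=["POST"])
def create_task():
    task = Task(title=request.json["title"])
    db.session.add(task)
    db.session.commit()
    return jsonify({"id": task.id, "title": task.title, "done": task.done}), 201


@app.route("/tasks", methods=["GET"])
def list_tasks():
    return jsonify([{"id": t.id, "title": t.title, "done": t.done} for t in Task.query.all()])


if __name__ == "__main__":
    with app.app_context():
        db.create_all()   # create the SQLite schema before serving requests
    app.run(debug=True)
```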

Posted 2 weeks ago

Apply

2.0 - 7.0 years

11 - 15 Lacs

Maharashtra

Work from Office

Responsibilities : Lead AI Strategy: Oversee AI/ML projects from development to deployment, ensuring alignment with business goals. Model Development & Fine-tuning: Lead the fine-tuning of LLMs and advanced NLP models to meet enterprise needs. Team Leadership: Manage and mentor data science teams, fostering growth and innovation. Cross-functional Collaboration: Work with engineers, product teams, and leadership to drive AI initiatives. Optimization & Deployment: Ensure efficient workflows, containerization (Docker), and scalable model deployment. Client Engagement: Provide strategic insights and translate AI capabilities into business solutions. Good to Have: Cloud Platform Expertise: Experience working with AWS, Azure, or GCP on large-scale AI/ML projects. Enterprise Solutions: Familiarity with enterprise Knowledge Management Systems and their integration with AI-driven tools. Advanced MLOps: Deep understanding of MLOps principles and tools to ensure efficient model deployment and monitoring. Problem-Solving: Ability to break down complex problems into manageable components and design effective solutions. Innovation: A passion for experimenting with cutting-edge AI/ML technologies and applying them to real-world business problems. Required Skills: Technical Skills: Proficiency in Python, with strong expertise in libraries such as PyTorch, Pandas, and NumPy. Extensive experience in fine-tuning large language models (LLMs) and applying advanced NLP techniques. Strong background in handling both structured and unstructured data, with hands-on experience working on complex data transformations. Proficiency in MLOps tools, including Docker, Kubernetes, and cloud platforms (AWS, Azure, or GCP). In-depth knowledge of model optimization, deployment pipelines, and scalable production environments. Ability to design and implement AI/ML workflows that balance speed and quality in a fast-paced environment. Leadership Skills: Proven ability to lead and manage teams of data scientists, driving results through mentorship and strategic guidance. Strong project management skills, with experience handling multiple AI projects simultaneously and ensuring timely delivery. Exceptional communication skills to work with cross-functional teams and explain complex AI concepts to non-technical stakeholders.

Posted 2 weeks ago

Apply

0.0 - 4.0 years

1 - 5 Lacs

Mumbai

Work from Office

Responsibilities : Manipulate and preprocess structured and unstructured data to prepare datasets for analysis and model training. Utilize Python libraries like PyTorch, Pandas, and NumPy for data analysis, model development, and implementation. Fine-tune large language models (LLMs) to meet specific use cases and enterprise requirements. Collaborate with cross-functional teams to experiment with AI/ML models and iterate quickly on prototypes. Optimize workflows to ensure fast experimentation and deployment of models to production environments. Implement containerization and basic Docker workflows to streamline deployment processes. Write clean, efficient, and production-ready Python code for scalable AI solutions. Good to Have: Exposure to cloud platforms like AWS, Azure, or GCP. Knowledge of MLOps principles and tools. Basic understanding of enterprise Knowledge Management Systems. Ability to work against tight deadlines. Ability to work on unstructured projects independently. Strong initiative and self-motivated Strong Communication & Collaboration acumen. Required Skills: Proficiency in Python with strong skills in libraries like PyTorch, Pandas, and NumPy. Experience in handling both structured and unstructured datasets. Familiarity with fine-tuning LLMs and understanding of modern NLP techniques. Basics of Docker and containerization principles. Demonstrated ability to experiment, iterate, and deploy code rapidly in a production setting. Strong problem-solving mindset with attention to detail. Ability to learn and adapt quickly in a fast-paced, dynamic environment. What we Offer: Opportunity to work on cutting-edge AI technologies and impactful projects. A collaborative and growth-oriented work environment. Competitive compensation and benefits package. A chance to be a part of a team shaping the future of enterprise intelligence.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

13 - 17 Lacs

Bengaluru

Work from Office

As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.

Overview about TII: At Target, we have a timeless purpose and a proven strategy, and that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.

Pyramid Overview: As a Senior AI Engineer in Data Sciences, you will play a crucial role in designing, implementing, and optimizing AI solutions in production. Additionally, you'll apply best practices in software design, participate in code reviews, and create a maintainable, well-tested codebase with relevant documentation. We will look to you to understand and actively follow foundational programming principles (best practices, unit tests, code organization, basics of CI/CD, etc.) and create a well-maintained and tested codebase with relevant documentation. You will get the opportunity to develop in one or more approved programming languages (Java, Scala, Python, R), and learn and adhere to best practices in data analysis and data understanding.

Team Overview: The Content Creation team within Data Sciences helps build AI solutions leveraging GenAI, computer vision, and traditional ML models for various marketing and site content use cases. The team plays a crucial role in helping Target drive relevancy with guests.

Position Overview: Support Development of Generative AI Applications: Architect and develop advanced generative AI solutions that support business objectives, ensuring high performance and scalability. Performance Tuning & Optimization: Identify bottlenecks in applications and implement strategies to improve performance; optimize machine learning models for efficiency in production environments. Collaborate Cross-Functionally: Work closely with data scientists, product managers, and other stakeholders to gather requirements and transform them into robust technical solutions. Research & Innovation: Stay abreast of the latest advancements in the field of Artificial Intelligence and propose new ideas that could lead to innovations within the organization. Deployment & Scaling Strategies: Support the deployment process of applications on cloud platforms while ensuring they are scalable enough to handle increasing loads without compromising performance. Documentation & Quality Assurance: Develop comprehensive documentation for projects undertaken and implement rigorous testing methodologies to ensure high-quality deliverables.

About You: 4-year degree in quantitative disciplines (Science, Technology, Engineering, Mathematics) or equivalent experience. Master's in Computer Science or over 3 years of experience in end-to-end application development, data exploration, data pipelining, API design, and optimization of model latency. Experience working with image and text data, embeddings, building and deploying vision models, and integrating with GenAI services. Experience working with machine learning frameworks such as TensorFlow, PyTorch, or similar libraries. Hands-on experience with libraries like NumPy, SciPy, Pandas, OpenCV, SpaCy, and NLTK is expected. Good understanding of Big Data and distributed architecture, specifically Hadoop, Hive, Spark, Docker, Kubernetes, and Kafka. Experience working on GPUs is preferred. Experience optimizing machine learning models for performance improvements across various platforms, including cloud services (AWS, Google Cloud Platform). Expertise in MLOps frameworks and hands-on experience with MLOps tools like Kubeflow, MLflow, or SageMaker. Excellent problem-solving skills combined with strong analytical abilities.

Know more here:
Life at Target - https://india.target.com/
Benefits - https://india.target.com/life-at-target/workplace/benefits
Culture - https://india.target.com/life-at-target/belonging

Posted 2 weeks ago

Apply

4.0 - 9.0 years

14 - 19 Lacs

Bengaluru

Work from Office

As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.

Overview about TII: At Target, we have a timeless purpose and a proven strategy, and that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.

Pyramid Overview: As a Lead AI Engineer in Data Sciences, you will play a crucial role in designing, implementing, and optimizing AI solutions in production. Additionally, you'll apply best practices in software design, participate in code reviews, and create a maintainable, well-tested codebase with relevant documentation. We will look to you to understand and actively follow foundational programming principles (best practices, unit tests, code organization, basics of CI/CD, etc.) and create a well-maintained and tested codebase with relevant documentation. You will get the opportunity to develop in one or more approved programming languages (Java, Scala, Python, R) and learn and adhere to best practices in data analysis and data understanding.

Team Overview: The Content Creation team within Data Sciences helps build AI solutions leveraging GenAI, computer vision, and traditional ML models for various marketing and site content use cases. The team plays a crucial role in helping Target drive relevancy with guests.

Position Overview: Lead Development of Generative AI Applications: Architect and develop advanced generative AI solutions that support business objectives, ensuring high performance and scalability. Performance Tuning & Optimization: Identify bottlenecks in applications and implement strategies to improve performance; optimize machine learning models for efficiency in production environments. Collaborate Cross-Functionally: Work closely with data scientists, product managers, and other stakeholders to gather requirements and transform them into robust technical solutions. Mentor Junior Engineers: Provide guidance and mentorship to team members on best practices in coding standards, architectural design, and machine learning techniques. Research & Innovation: Stay abreast of the latest advancements in the field of Artificial Intelligence and propose new ideas that could lead to innovations within the organization. Deployment & Scaling Strategies: Lead the deployment process of applications on cloud platforms while ensuring they are scalable enough to handle increasing loads without compromising performance. Documentation & Quality Assurance: Develop comprehensive documentation for projects undertaken and implement rigorous testing methodologies to ensure high-quality deliverables.

About You: 4-year degree in quantitative disciplines (Science, Technology, Engineering, Mathematics) or equivalent experience. Master's in Computer Science or equivalent industry experience. Over 6 years of experience in end-to-end application development, data exploration, data pipelining, API design, and optimization of model latency in production environments at scale. Experience working with image and text data, embeddings, building and deploying vision models, and integrating with GenAI services. Strong expertise in machine learning frameworks such as TensorFlow, PyTorch, or similar libraries; experience with Generative Adversarial Networks or diffusion models is desirable. Hands-on experience with libraries like NumPy, SciPy, Pandas, OpenCV, SpaCy, and NLTK is expected. Experience in optimizing ML model performance using techniques such as hyperparameter tuning, feature selection, and distributed training. Good understanding of Big Data and distributed architecture, specifically Hadoop, Hive, Spark, Docker, Kubernetes, and Kafka. Experience working on GPUs is preferred. Proven track record of optimizing machine learning models for performance improvements across various platforms, including cloud services (AWS, Google Cloud Platform). Expertise in MLOps frameworks and hands-on experience with MLOps tools like Kubeflow, MLflow, or SageMaker. Deep understanding of system architecture principles related to scalability and robust application design. Excellent communication and problem-solving skills combined with strong analytical abilities. Proven leadership skills with experience mentoring junior engineers effectively.

Know more here:
Life at Target - https://india.target.com/
Benefits - https://india.target.com/life-at-target/workplace/benefits
Culture - https://india.target.com/life-at-target/belonging

Posted 2 weeks ago

Apply

1.0 - 5.0 years

8 - 12 Lacs

Bengaluru

Work from Office

We are seeking a passionate and skilled Python AI Engineer to design, develop, and maintain a hybrid AI platform across multi-cloud and on-premises environments: a platform that enables real-time machine learning and GenAI at scale, with governance and security frameworks. You will collaborate with data engineers, product managers, and software engineers to bring AI-driven products and features to life.

Job description: Work with frameworks like TensorFlow/PyTorch, Scikit-learn, or similar. Design and implement AI/ML models and algorithms using Python 3. Develop and maintain scalable, production-grade machine learning pipelines. Conduct data exploration, preprocessing, feature engineering, and model evaluation. Optimize models for performance and scalability in production environments. Collaborate with cross-functional teams to integrate AI components into real-world applications. Stay up to date with the latest research and industry trends in AI and machine learning. Document experiments, code, and processes for reproducibility and transparency.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise: Strong programming skills in Python 3. Solid understanding of machine learning fundamentals (classification, regression, clustering, etc.). Experience with ML libraries and frameworks (e.g., Scikit-learn, TensorFlow/PyTorch, XGBoost). Experience in NLP related to semantic models/search using BERT/transformer models. Experience with the GenAI ecosystem/tools is a plus. Experience in data wrangling using Pandas/Polars, NumPy, SQL, etc. Good grasp of software engineering principles (version control, testing, modular code).

Preferred technical and professional experience: Familiarity with REST APIs and deployment practices (Dockerized containers, Flask/FastAPI, etc.). Understanding of cloud platforms (AWS, GCP, Azure) is a plus. Problem-solving: Excellent analytical and problem-solving skills, with the ability to think critically and creatively. Communication: Strong interpersonal and communication skills, with the ability to work effectively in a collaborative team environment.

Posted 2 weeks ago

Apply

7.0 - 11.0 years

20 - 25 Lacs

Bengaluru

Work from Office

We are seeking a passionate and skilled Python Backend Data Engineer to design, develop, and maintain a hybrid data platform across multi-cloud and on-premises environments. You will collaborate with AI engineers, product managers, and software engineers to bring data-driven products and features to life.

Job description: Build high-performing, scalable, enterprise-grade applications. Manage Big Data application development across the full software development lifecycle. Cloud platform experience (AWS, GCP, Azure). Experience with CI/CD pipelines for data engineering workflows.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise: Must have Python 3, FastAPI, Spark, and Iceberg. RESTful web services and Kafka messaging. Experience with the GenAI ecosystem/tools is a plus. Experience in data wrangling using Pandas/Polars, NumPy, SQL, etc. NoSQL and SQL databases (Postgres/MySQL, MongoDB with GridFS). Modern authorization mechanisms (JSON Web Tokens, OAuth2).

Preferred technical and professional experience: Object-oriented analysis and design (DDD, microservices). Understanding of cloud platforms (AWS, GCP, Azure) is a plus. Exposure to any BI tools is a plus. Problem-solving: Excellent analytical and problem-solving skills, with the ability to think critically and creatively. Communication: Strong interpersonal and communication skills in English, with the ability to work effectively in a collaborative team environment.
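
For illustration, the sketch below pairs FastAPI with a small Pandas aggregation, in the spirit of the stack listed above. The metrics data and endpoint shape are invented; a real service would read from Spark/Iceberg tables rather than an in-memory literal.

```python
# Minimal FastAPI + Pandas sketch (assumed data and endpoint).
from fastapi import FastAPI
import pandas as pd

app = FastAPI()

# Stand-in for data that would normally come from the lakehouse layer.
metrics = pd.DataFrame({
    "service": ["auth", "auth", "billing", "billing"],
    "latency_ms": [120, 95, 310, 280],
})


@app.get("/latency/{service}")
def latency_summary(service: str) -> dict:
    subset = metrics[metrics["service"] == service]
    if subset.empty:
        return {"service": service, "count": 0}
    return {
        "service": service,
        "count": int(len(subset)),
        "p50_ms": float(subset["latency_ms"].median()),
        "max_ms": float(subset["latency_ms"].max()),
    }

# Run with: uvicorn app:app --reload   (assuming this file is saved as app.py)
```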

Posted 2 weeks ago

Apply

1.0 - 3.0 years

4 - 8 Lacs

Bengaluru

Work from Office

We are seeking a passionate and skilled Python Backend Data Engineer to design, develop, and maintain a hybrid data platform across multi-cloud and on-premises environments. You will collaborate with AI engineers, product managers, and software engineers to bring data-driven products and features to life.

Job description: Build high-performing, scalable, enterprise-grade applications. Manage Big Data application development across the full software development lifecycle. Cloud platform experience (AWS, GCP, Azure). Experience with CI/CD pipelines for data engineering workflows.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise: Must have Python 3, FastAPI, Spark, and Iceberg. RESTful web services and Kafka messaging. Experience with the GenAI ecosystem/tools is a plus. Experience in data wrangling using Pandas/Polars, NumPy, SQL, etc. NoSQL and SQL databases (Postgres/MySQL, MongoDB with GridFS). Modern authorization mechanisms (JSON Web Tokens, OAuth2).

Preferred technical and professional experience: Object-oriented analysis and design (DDD, microservices). Understanding of cloud platforms (AWS, GCP, Azure) is a plus. Exposure to any BI tools is a plus. Problem-solving: Excellent analytical and problem-solving skills, with the ability to think critically and creatively. Communication: Strong interpersonal and communication skills in English, with the ability to work effectively in a collaborative team environment.

Posted 2 weeks ago

Apply

2.0 - 6.0 years

5 - 10 Lacs

Bengaluru

Work from Office

We are seeking a passionate and skilled Python Backend Data Engineer to design, develop, and maintain a hybrid data platform across multi-cloud and on-premises environments. You will collaborate with AI engineers, product managers, and software engineers to bring data-driven products and features to life.

Job description: Build high-performing, scalable, enterprise-grade applications. Manage Big Data application development across the full software development lifecycle. Cloud platform experience (AWS, GCP, Azure). Experience with CI/CD pipelines for data engineering workflows.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise: Must have Python 3, FastAPI, Spark, and Iceberg. RESTful web services and Kafka messaging. Experience with the GenAI ecosystem/tools is a plus. Experience in data wrangling using Pandas/Polars, NumPy, SQL, etc. NoSQL and SQL databases (Postgres/MySQL, MongoDB with GridFS). Modern authorization mechanisms (JSON Web Tokens, OAuth2).

Preferred technical and professional experience: Object-oriented analysis and design (DDD, microservices). Understanding of cloud platforms (AWS, GCP, Azure) is a plus. Exposure to any BI tools is a plus. Problem-solving: Excellent analytical and problem-solving skills, with the ability to think critically and creatively. Communication: Strong interpersonal and communication skills in English, with the ability to work effectively in a collaborative team environment.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

12 - 20 Lacs

Bengaluru

Work from Office

We are Hiring | Senior Python Developer @ GSPANN Technologies, Bangalore. Are you a seasoned software developer with 5 to 10 years of experience and a passion for building scalable backend systems? We're looking for someone just like you!

Location: Bangalore
Qualification: Bachelor's degree in Engineering
Availability: Immediate joiners preferred

Key Responsibilities: Design and develop scalable Python applications using FastAPI. Collaborate with cross-functional teams including front-end, data science, and DevOps. Work with libraries like Pandas, NumPy, and Scikit-learn for data-driven solutions. Build and maintain robust backend APIs and database integrations. Implement unit, integration, and end-to-end testing. Contribute to architecture and design using best practices.

Mandatory Skills: Strong Python expertise with data libraries (Pandas, NumPy, Matplotlib, Plotly). Experience with FastAPI/Flask, SQL/NoSQL (MongoDB, Postgres, CRDB). Middleware orchestration (MuleSoft, BizTalk). CI/CD pipelines, RESTful APIs, OOP, and design patterns.

Desirable Skills: Familiarity with OpenAI tools (GitHub Copilot, ChatGPT API). Experience with Azure, Big Data, Kafka/RabbitMQ, Docker/Kubernetes. Exposure to distributed and high-volume backend systems.

Apply Now! Send your updated resume to heena.ruchwani@gspann.com or DM me directly. Referrals are highly appreciated!

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
