
1583 Pandas Jobs - Page 41

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

7.0 - 12.0 years

17 - 22 Lacs

Noida

Work from Office

Choosing Capgemini means choosing a company where you will be empowered to shape your career the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

Responsibilities:
- Design, develop, and implement conversational AI solutions using the Kore.ai platform, including chatbots, virtual assistants, and automated workflows tailored to business requirements.
- Customize, configure, and optimize the Kore.ai platform to address specific business needs, focusing on user interface (UI) design, dialogue flows, and task automation.
- Collaborate with cross-functional teams to understand business objectives and translate them into scalable AI solutions.
- Continuously monitor and enhance bot performance by staying updated on the latest Kore.ai features, AI advancements, and machine learning trends.

Primary Skills:
- 7+ years of experience in software development, with at least 4+ years of hands-on experience working with the Kore.ai platform.
- Proven expertise in developing chatbots and virtual assistants using Kore.ai tools.
- Proficiency in JavaScript or other scripting languages.
- Experience with API integrations, including RESTful APIs and third-party services.
- Strong understanding of dialogue flow design, intent recognition, and context management.

Secondary Skills:
- Kore.ai platform certification is a strong advantage.
- Knowledge of automation tools and RPA technologies is beneficial.

Posted 1 month ago

2.0 - 6.0 years

8 - 12 Lacs

Hyderabad

Hybrid

About the Role:
We are seeking a talented AI Engineer with 2-3 years of professional experience to join our growing team. As an AI Engineer, you will collaborate with data scientists, software engineers, and product teams to design, build, and deploy ML models that drive value across our organisation. This is an excellent opportunity for someone early in their ML career who wants to gain hands-on experience with real-world applications while working alongside experienced professionals.

Responsibilities:
- Implement and optimize ML algorithms and models based on requirements from data scientists and product teams
- Work with Large Language Models (LLMs) to develop AI-powered applications and features
- Design prompts, fine-tune, and evaluate LLM performance for specific use cases
- Build systems to integrate LLMs with existing products and services, including retrieval-augmented generation (RAG)
- Design and develop ML/DL pipelines for data preprocessing, model training, evaluation, and deployment
- Collaborate with software engineers to integrate ML/DL models into production systems
- Monitor and maintain deployed ML/DL models, ensuring reliability and performance
- Participate in code reviews and contribute to engineering best practices
- Stay current with the latest ML/DL research and technologies, evaluating their potential application to business problems
- Assist in collecting and preparing datasets for model training and validation
- Document model architecture, data flows, and deployment procedures

Requirements:
- Bachelor's degree in Computer Science, Statistics, Applied Mathematics, or a related field (Master's preferred but not required)
- 1-2 years of professional experience building and deploying machine learning models
- Strong fundamentals in machine learning and deep learning
- Strong programming skills in Python and familiarity with ML frameworks such as TensorFlow, PyTorch, or scikit-learn
- Experience with Large Language Models (LLMs), including understanding of transformer architectures, fine-tuning approaches, and prompt engineering
- Knowledge of NLP concepts and techniques applicable to LLMs (embeddings, tokens, context windows, etc.)
- Experience with data processing libraries like Pandas and NumPy, and visualization tools like Matplotlib or Seaborn
- Understanding of ML fundamentals, including supervised/unsupervised learning, feature engineering, and model evaluation
- Knowledge of software engineering best practices (version control, testing, CI/CD)
- Basic understanding of cloud platforms (AWS, GCP, or Azure) and containerization (Docker)
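The embeddings-and-retrieval (RAG) skills this listing names can be sketched minimally in NumPy. The vectors and document set below are invented for illustration; in a real system the vectors would come from an embedding model and the store would be a vector database, but the ranking logic is the same:

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec, doc_vecs, top_k=1):
    # Rank document vectors by similarity to the query; return top-k indices.
    scores = [cosine_sim(query_vec, d) for d in doc_vecs]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:top_k]

# Toy 2-d "embeddings" for three documents and one query.
docs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
query = np.array([0.9, 0.1])
top = retrieve(query, docs, top_k=2)  # nearest documents first
```

The retrieved documents would then be placed into the LLM prompt as context.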

Posted 1 month ago

7.0 - 8.0 years

22 - 37 Lacs

Bengaluru

Hybrid

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Lead and manage a team of data scientists, fostering a collaborative and innovative environment
- Oversee the design and development of predictive models and machine learning algorithms
- Collaborate with business stakeholders to understand and define their data needs
- Translate business problems into data-driven solutions, presenting complex results in a clear and actionable manner
- Monitor model performance and continuously improve the accuracy of predictions and insights
- Maintain up-to-date knowledge of the latest data science trends and technologies, implementing new methodologies as appropriate
- Promote data literacy throughout the organization, educating staff on how to use data to drive decision-making
- Ensure data integrity and compliance with privacy regulations
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Bachelor of Engineering or equivalent
- 10+ years of total experience
- Coding: Python, pandas, NumPy
- The math behind ML and DL algorithms
- Basics of statistics
- ML and DL concepts

Preferred Qualifications:
- Exposure to GenAI and LLMs
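The "math behind ML algorithms" requirement above can be illustrated with a toy example: linear regression fitted by batch gradient descent in NumPy. The data and learning rate are invented for illustration; this is what frameworks compute automatically behind `.fit()`:

```python
import numpy as np

def fit_linear(X, y, lr=0.1, steps=500):
    # Ordinary least squares via batch gradient descent:
    # w <- w - lr * dL/dw, with L = mean squared error.
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(steps):
        err = X @ w + b - y          # residuals
        w -= lr * (2.0 / n) * (X.T @ err)
        b -= lr * (2.0 / n) * err.sum()
    return w, b

# Synthetic data drawn from y = 2x + 1; the fit should recover those values.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])
w, b = fit_linear(X, y)
```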

Posted 1 month ago

2.0 - 6.0 years

0 - 1 Lacs

Pune

Work from Office

As Lead Data Engineer, you'll design and manage scalable ETL pipelines and clean, structured data flows for real-time retail analytics. You'll work closely with ML engineers and business teams to deliver high-quality, ML-ready datasets.

Responsibilities:
- Develop and optimize large-scale ETL pipelines
- Design schema-aware data flows and dashboard-ready datasets
- Manage data pipelines on AWS (S3, Glue, Redshift)
- Work with transactional and retail data for real-time insights

Posted 1 month ago

2.0 - 6.0 years

0 - 1 Lacs

Pune

Work from Office

As Lead ML Engineer, you'll lead the development of predictive models for demand forecasting, customer segmentation, and retail optimization, from feature engineering through deployment.

Responsibilities:
- Build and deploy models for forecasting and optimization
- Perform time-series analysis, classification, and regression
- Monitor model performance and integrate feedback loops
- Use AWS SageMaker, MLflow, and explainability tools (e.g., SHAP or LIME)

Posted 1 month ago

5.0 - 8.0 years

22 - 32 Lacs

Hyderabad

Work from Office

Product Engineer (Onsite, Hyderabad)
Experience: 5-8 years | Salary: INR 30-32 Lacs per annum | Preferred notice period: within 30 days | Shift: 9:00 AM to 6:00 PM IST | Opportunity type: Onsite (Hyderabad) | Placement type: Permanent
(Note: This is a requirement for one of Uplers' clients.)

Must-have skills: Python, FastAPI, Django, MLflow, Feast, Kubeflow, NumPy, Pandas, Big Data
Good-to-have skills: Banking, Fintech, Product Engineering background

One of Uplers' clients is looking for a Product Engineer (Onsite, Hyderabad) who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

Role Overview:
Location: Narsingi, Hyderabad; 5 days of work from the office. The client is a payment gateway processing company.
Interview process: a screening round with InfraCloud, followed by a second round with our Director of Engineering. We then share the profile with the client, who conducts one or two interviews.

About the Project:
We are building a high-performance machine learning engineering platform that powers scalable, data-driven solutions for enterprise environments. Your expertise in Python, performance optimization, and ML tooling will play a key role in shaping intelligent systems for data science and analytics use cases. Experience with MLOps, SaaS products, or big data environments will be a strong plus.

Role and Responsibilities:
- Design, build, and optimize components of the ML engineering pipeline for scalability and performance.
- Work closely with data scientists and platform engineers to enable seamless deployment and monitoring of ML models.
- Implement robust workflows using modern ML tooling such as Feast, Kubeflow, and MLflow.
- Collaborate with cross-functional teams to design and scale end-to-end ML services across a cloud-native infrastructure.
- Leverage frameworks like NumPy, Pandas, and distributed compute environments to manage large-scale data transformations.
- Continuously improve model deployment pipelines for reliability, monitoring, and automation.

Requirements:
- 5+ years of hands-on experience in Python programming with a strong focus on performance tuning and optimization.
- Solid knowledge of ML engineering principles and deployment best practices.
- Experience with Feast, Kubeflow, MLflow, or similar tools.
- Deep understanding of NumPy, Pandas, and data processing workflows.
- Exposure to big data environments and a good grasp of data science model workflows.
- Strong analytical and problem-solving skills with attention to detail.
- Comfortable working in fast-paced, agile environments with frequent cross-functional collaboration.
- Excellent communication and collaboration skills.

Nice to Have:
- Experience deploying ML workloads in public cloud environments (AWS, GCP, or Azure).
- Familiarity with containerization technologies like Docker and orchestration using Kubernetes.
- Exposure to CI/CD pipelines, serverless frameworks, and modern cloud-native stacks.
- Understanding of data protection, governance, or security aspects in ML pipelines.

How to apply: 1. Click on Apply and register or log in on our portal. 2. Upload an updated resume and complete the screening form. 3. Increase your chances of being shortlisted and meet the client for the interview.

About Our Client: We foster business expansion through our innovative products and services, facilitating the seamless adoption of cloud-native technologies by companies. Our expertise lies in the revitalization of applications and infrastructure, harnessing the power of cloud-native solutions for enhanced resilience and scalability. As pioneering Kubernetes partners, we have been dedicated contributors to the open-source cloud-native community, consistently achieving nearly 100% growth over the past few years. We take pride in spearheading local chapters of Serverless & Kubernetes Meetups, actively participating in the development of a vibrant community dedicated to cutting-edge technologies within the Cloud and DevOps domains.

About Uplers: Uplers is the #1 hiring platform for SaaS companies, designed to help you hire top product and engineering talent quickly and efficiently. Our end-to-end AI-powered platform combines artificial intelligence with human expertise to connect you with the best engineering talent from India. With over 1M deeply vetted professionals, Uplers streamlines the hiring process, reducing lengthy screening times and ensuring you find the perfect fit. Companies like GitLab, Twilio, TripAdvisor, and Airbnb trust Uplers to scale their tech and digital teams effectively and cost-efficiently. Experience a simpler, faster, and more reliable hiring process with Uplers today.

Posted 1 month ago

2.0 - 3.0 years

4 - 6 Lacs

Bengaluru

Work from Office

Job Title: Python Developer - Machine Learning & AI (2-3 Years Experience)

Job Summary: We are seeking a skilled and motivated Python Developer with 2 to 3 years of experience in Machine Learning and Artificial Intelligence. The ideal candidate will have hands-on experience in developing, training, and deploying machine learning models, and should be proficient in Python and associated data science libraries. You will work with our data science and engineering teams to build intelligent solutions that solve real-world problems.

Key Responsibilities:
- Develop and maintain machine learning models using Python.
- Work on AI-driven applications, including predictive modeling, natural language processing, and computer vision (based on project requirements).
- Collaborate with cross-functional teams to understand business requirements and translate them into ML solutions.
- Preprocess, clean, and transform data for training and evaluation.
- Perform model training, tuning, evaluation, and deployment using tools like scikit-learn, TensorFlow, or PyTorch.
- Write modular, efficient, and testable code.
- Document processes, models, and experiments clearly for team use and future reference.
- Stay updated with the latest trends and advancements in AI and machine learning.

Required Skills:
- 2-3 years of hands-on experience with Python programming.
- Solid understanding of machine learning algorithms (supervised, unsupervised, and reinforcement learning).
- Experience with libraries such as scikit-learn, pandas, NumPy, Matplotlib, and Seaborn.
- Exposure to deep learning frameworks like TensorFlow, Keras, or PyTorch.
- Good understanding of data structures and algorithms.
- Experience with model evaluation techniques and performance metrics.
- Familiarity with Jupyter Notebooks, version control (Git), and cloud platforms (AWS, GCP, or Azure) is a plus.
- Strong analytical and problem-solving skills.

Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Mathematics, or a related field.
- Experience with deploying ML models using Flask, FastAPI, or Docker.
- Knowledge of MLOps and model lifecycle management is an advantage.
- Understanding of NLP or Computer Vision is a plus.
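As an illustration of the model-evaluation skills this posting asks for, here is a from-scratch sketch of binary classification metrics; the labels are a toy example and no ML framework is assumed:

```python
import numpy as np

def confusion_counts(y_true, y_pred):
    # True/false positives and negatives for a binary classifier.
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return tp, tn, fp, fn

def metrics(y_true, y_pred):
    # Accuracy, precision, and recall derived from the confusion counts.
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, precision, recall

# Toy ground truth and predictions.
y_true = np.array([1, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1])
```

In practice `sklearn.metrics` provides these, but being able to derive them is exactly the kind of fundamentals check such a role involves.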

Posted 1 month ago

3.0 - 6.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Roles & Responsibilities (experience level: 10 years):
- Analyzing raw data
- Developing and maintaining datasets
- Improving data quality and efficiency
- Interpreting trends and patterns
- Conducting complex data analysis and reporting on results
- Preparing data for prescriptive and predictive modeling
- Building algorithms and prototypes
- Combining raw information from different sources
- Exploring ways to enhance data quality and reliability
- Identifying opportunities for data acquisition
- Developing analytical tools and programs
- Collaborating with data scientists and architects on several projects

Technical Skills:
- Implementing data governance with monitoring, alerting, and reporting
- Technical writing capability: documenting standards, templates and procedures
- Databricks: knowledge of patterns for scaling ETL pipelines effectively
- Orchestrating data analytics workloads with Databricks jobs and workflows
- Integrating Azure DevOps CI/CD practices with data pipeline development
- ETL modernization and data modelling
- Strong exposure to Azure Data services, Synapse, data orchestration and visualization
- Data warehousing and data lakehouse architectures
- Data streaming and real-time analytics
- Python: PySpark library, Pandas
- Azure Data Factory: data orchestration
- Azure SQL: scripting, querying, stored procedures
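The combine-and-clean step described above (improving data quality, then merging raw sources) can be sketched in pandas with hypothetical tables; a production pipeline here would run the same logic at scale in Databricks/PySpark:

```python
import pandas as pd

# Two "raw" sources: transactions and a product reference table (invented data).
tx = pd.DataFrame({
    "product_id": [1, 2, 2, 3, None],
    "amount": [10.0, 5.0, None, 8.0, 4.0],
})
products = pd.DataFrame({"product_id": [1, 2, 3], "category": ["a", "b", "b"]})

# Quality step: drop rows missing keys or measures.
clean = tx.dropna(subset=["product_id", "amount"])

# Combine sources, then aggregate for downstream modeling.
enriched = clean.merge(products, on="product_id", how="left")
summary = enriched.groupby("category", as_index=False)["amount"].sum()
```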

Posted 1 month ago

6.0 - 10.0 years

12 Lacs

Hyderabad

Work from Office

Dear Candidate,

We are seeking a highly skilled and motivated Software Engineer with expertise in Azure AI, Cognitive Services, Machine Learning, and IoT. The ideal candidate will design, develop, and deploy intelligent applications leveraging Azure cloud technologies, AI-driven solutions, and IoT infrastructure to drive business innovation and efficiency.

Responsibilities:
- Develop and implement AI-driven applications using Azure AI and Cognitive Services.
- Design and deploy machine learning models to enhance automation and decision-making processes.
- Integrate IoT solutions with cloud platforms to enable real-time data processing and analytics.
- Collaborate with cross-functional teams to architect scalable, secure, and high-performance solutions.
- Optimize and fine-tune AI models for accuracy, performance, and cost-effectiveness.
- Ensure best practices in cloud security, data governance, and compliance.
- Monitor, maintain, and troubleshoot AI and IoT solutions in production environments.
- Stay updated with the latest advancements in AI, ML, and IoT technologies to drive innovation.

Required Skills and Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Strong experience with Azure AI, Cognitive Services, and Machine Learning.
- Proficiency in IoT architecture, data ingestion, and processing using Azure IoT Hub, IoT Edge, or related services.
- Expertise in deploying and managing machine learning models in cloud environments.
- Strong understanding of RESTful APIs, microservices, and cloud-native application development.
- Experience with DevOps practices, CI/CD pipelines, and containerization (Docker, Kubernetes).
- Knowledge of cloud security principles and best practices.
- Excellent problem-solving skills and the ability to work in an agile development environment.

Preferred Qualifications:
- Certifications in Microsoft Azure AI, IoT, or related cloud technologies.
- Experience with Natural Language Processing (NLP) and Computer Vision.
- Familiarity with big data processing and analytics tools such as Azure Data services.
- Prior experience in deploying edge computing solutions.

Soft Skills:
- Problem-solving: ability to analyze complex problems and develop effective solutions.
- Communication: strong verbal and written skills to collaborate effectively with cross-functional teams.
- Analytical thinking: ability to think critically to solve technical challenges.
- Time management: capable of managing multiple tasks and deadlines in a fast-paced environment.
- Adaptability: ability to quickly learn and adapt to new technologies and methodologies.

Interview mode: face-to-face for candidates residing in Hyderabad; Zoom for other states.
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2-4 PM

Posted 1 month ago

8.0 - 10.0 years

10 - 15 Lacs

Gurugram

Work from Office

The Team: Financial Risk Analytics at S&P Global provides products and solutions to financial institutions to measure and manage their counterparty credit risk, market risk, regulatory risk capital and derivative valuation adjustments. Using the latest analytics and technology, such as a fully vectorized pricing library, machine learning and a Big Data stack for scalability, our products and solutions are used by everyone from the largest tier-one banks to smaller niche firms. Our products are available deployed, in the cloud, or run as a service. We are looking for an enthusiastic and skilled senior Python developer who is interested in learning about quantitative analytics and perhaps looking to make a career at the intersection of financial analytics, Big Data and mathematics.

The Impact: You will be working on a strategic component that allows clients to extract, on demand, the data required for pricing and risk calculations. This is an essential entry point to a risk calculation, which requires speed to market and good design to drive efficient and robust workflows.

What's in it for you: The successful candidate will gain exposure to risk analytics and the latest trending technology, allowing you to grow into a hybrid role specializing in both financial markets and technology: a highly rewarding, challenging, and marketable position in which to gain skills.

Responsibilities: You will work on the Market Risk solution with a best-of-breed technology stack involving Python 3.10+, Airflow, Pandas, NumPy, and ECS (AWS). You will join a fast-paced, dynamic team environment, building commercial products that are at the heart of the business and contributing directly to revenue generation.
- Design and implement end-to-end applications in Python with an emphasis on efficiently writing functions over large datasets.
- Interpret and analyse business use-cases and feature requests into technical designs and development tasks.
- Participate in regular design and code review meetings; be a responsive team player in system architecture and design discussions.
- Take pride in the high quality of your own work; always follow quality standards (unit tests, integration tests and documented code).
- Coach and mentor junior engineers.
- Be delivery focused, have a passion for technology and enjoy offering new ideas and approaches.
- Demonstrable technical capacity in understanding technical deliveries and dependencies.
- Strong experience working on software engineering projects in an Agile manner.

What We're Looking For:
- Bachelor's degree in computer science, engineering, or a related discipline, or equivalent experience
- Strong software development experience; a minimum of 8 years' experience developing applications in Python, including Python 3.10+
- Core Python with rich knowledge of OO methodologies and design
- Experience writing Python code that is scalable and performant
- Experience with complex data types, anticipating issues that impact performance (under ETL processes) and generating metrics using industry-adopted profiling tools during development
- Experience working on AWS: ECS, S3 and ideally MWAA (hosted Airflow on AWS)
- Experience in data engineering/orchestration and scalable, efficient flow design; experience developing data pipelines using Airflow
- Good working competency in Docker, Git and Linux
- Good working knowledge of Pandas and NumPy
- Understanding of CI/CD pipelines and test frameworks
- Agile and XP (Scrum, Kanban, TDD)
- Experience with cloud-based infrastructures, preferably AWS
- Fluent in English
- A passionate individual who thrives on development and data, and is hands-on
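The "fully vectorized" emphasis in this posting can be shown with a toy payoff calculation: a per-element Python loop versus the equivalent single NumPy array expression. The numbers are invented for illustration; real pricing functions are far richer, but the vectorisation pattern is the same:

```python
import numpy as np

# Toy scenario set: simulated spot prices for a call-style payoff max(S - K, 0).
rng = np.random.default_rng(0)
spots = rng.uniform(90.0, 110.0, size=1_000)
strike = 100.0

def payoff_loop(spots, strike):
    # Baseline: per-element Python loop (slow on large arrays).
    return np.array([max(s - strike, 0.0) for s in spots])

def payoff_vectorised(spots, strike):
    # Same computation as a single vectorised array expression.
    return np.maximum(spots - strike, 0.0)

# Both formulations agree; the vectorised one scales to large datasets.
assert np.allclose(payoff_loop(spots, strike), payoff_vectorised(spots, strike))
```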

Posted 1 month ago

6.0 - 8.0 years

15 - 20 Lacs

Mumbai, Chennai, Bengaluru

Work from Office

Strong Python programming skills with expertise in Pandas, lxml, ElementTree, file I/O operations, smtplib, and the logging library. Basic understanding of XML structures and the ability to extract key parent and child tag elements from an XML data structure.

Required candidate profile: Java Spring Boot API microservices (8+ years of experience), SQL (5+ years), and Azure (3+ years).
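The XML skill described above, extracting parent and child tag elements into tabular form, can be sketched with the standard-library ElementTree plus Pandas. The XML snippet and field names are invented for illustration:

```python
import xml.etree.ElementTree as ET
import pandas as pd

xml_data = """
<orders>
  <order id="1"><item>book</item><qty>2</qty></order>
  <order id="2"><item>pen</item><qty>5</qty></order>
</orders>
"""

root = ET.fromstring(xml_data)
rows = []
for order in root.findall("order"):      # parent elements under the root
    rows.append({
        "id": order.get("id"),           # attribute on the parent tag
        "item": order.findtext("item"),  # child tag text
        "qty": int(order.findtext("qty")),
    })
df = pd.DataFrame(rows)
```

For very large or namespaced documents, lxml offers the same API with XPath support.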

Posted 1 month ago

6.0 - 8.0 years

15 - 22 Lacs

Mumbai

Work from Office

Strong Python programming skills with expertise in Pandas, lxml, ElementTree, file I/O operations, smtplib, and the logging library. Basic understanding of XML structures and the ability to extract key parent and child tag elements from an XML data structure.

Required candidate profile: Java Spring Boot API microservices (8+ years of experience), SQL (5+ years), and Azure (3+ years).

Posted 1 month ago

7.0 - 12.0 years

8 - 16 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Role & responsibilities:
- Minimum of 8 years of related experience; Bachelor's degree or equivalent experience
- Expertise in Python; solid experience building enterprise applications in Python
- Experience with Python multithreading and multiprocessing applications
- Good understanding of pandas and related libraries
- Strong experience with SQL (ideally Snowflake)
- Familiarity with AWS cloud technologies
- Experience working in Agile development teams
- Experience with enterprise CI/CD tools (e.g., Jenkins) and version control systems (e.g., Git)
- Experience with development of financial applications, ideally related to risk management
- Hands-on experience writing automated test cases (e.g., pytest or unittest)
- Ability to work independently and in a distributed team environment

Posted 1 month ago

6.0 - 10.0 years

8 - 12 Lacs

Chennai, Bengaluru

Work from Office

Key Responsibilities:
- Design, develop, and deploy RPA bots using Automation Anywhere (A2019 or newer) to automate complex business processes
- Write and integrate Python scripts to enhance automation, including tasks like web scraping, API calls, and file handling
- Collaborate with business analysts, SMEs, and stakeholders to understand automation requirements and deliver scalable solutions
- Develop and maintain process documentation, including PDDs (Process Design Documents) and SDDs (Solution Design Documents)
- Continuously monitor and troubleshoot bots for optimal performance and reliability
- Ensure solutions are secure, compliant, and scalable per organizational and industry standards
- Participate in testing, code reviews, and deployment activities

Required Skills:
- Hands-on experience with Automation Anywhere A2019 or later
- Strong Python programming skills, including data manipulation with Pandas (DataFrames, cleaning, filtering, merging), numerical operations with NumPy, and Excel automation using openpyxl and XlsxWriter
- Experience with web scraping, API integration, and file operations in Python
- Familiarity with SQL and database tools (SQL Server, MySQL, etc.)
- Solid understanding of the RPA lifecycle, bot management, and best practices
- Strong debugging and problem-solving skills, and high attention to detail

Interested candidates, please share your resume along with your full name and location. Locations: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
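A minimal sketch of the Pandas cleaning and filtering skills listed above, using a hypothetical bot-run log (the table and column names are invented for illustration):

```python
import pandas as pd

# Hypothetical RPA bot-run log with messy values.
bots_log = pd.DataFrame({
    "bot": [" invoice_bot ", "hr_bot", "invoice_bot", None],
    "status": ["OK", "FAIL", "OK", "OK"],
})

# Cleaning: strip stray whitespace, drop rows with a missing bot name.
bots_log["bot"] = bots_log["bot"].str.strip()
clean = bots_log.dropna(subset=["bot"])

# Filtering: keep only failed runs for triage.
failures = clean[clean["status"] == "FAIL"]
```

The same DataFrame could then be written to Excel with openpyxl or XlsxWriter via `DataFrame.to_excel`.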

Posted 1 month ago

5.0 - 8.0 years

10 - 20 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Work from Office

Job Title: SnapLogic with Python
Location: Pan India (work from office)
Experience Level: 5 to 8 years

Must-have technical skills:
- Knowledge of the latest Python frameworks and technologies (e.g., Django, Flask, FastAPI)
- Experience with the SnapLogic cloud-native integration platform; ability to design and implement integration pipelines using SnapLogic
- Experience with Python libraries and tools (e.g., Pandas, NumPy, SQLAlchemy)
- Strong experience in designing, developing, and maintaining RESTful APIs
- Familiarity with API security, authentication, and authorization mechanisms (e.g., OAuth, JWT)
- Good hands-on knowledge of PL/SQL (packages, functions, ref cursors)

Good-to-have technical skills:
- Hands-on experience with Kubernetes for container orchestration; deploying, managing, and scaling applications on Kubernetes clusters
- Proficiency in AWS services (e.g., EC2, S3, RDS, Lambda)
- Experience with infrastructure-as-code tools (e.g., Terraform, CloudFormation)

Key Responsibilities:
- Develop and maintain high-quality Python code for API services
- Design and implement containerized applications using Kubernetes
- Utilize AWS services for cloud infrastructure and deployment
- Create and manage integration pipelines using SnapLogic
- Write and optimize PL/SQL stored procedures for database operations
- Collaborate with cross-functional teams to deliver high-impact solutions
- Ensure code quality, security, and performance through best practices

Posted 1 month ago

5.0 - 10.0 years

20 - 32 Lacs

Mysuru

Work from Office

Position 2: Python Developer
Experience: 5 years | Location: Mysore

Technical Skills:
- Programming: Python (Pandas, NumPy, Matplotlib, Seaborn, Requests)
- BI tools: Power BI (DAX, Power Query, data modeling, report design)
- Data handling: SQL, Excel, CSV, JSON, APIs, RESTful services
- Visualization: Power BI reports and dashboards, Python plotting libraries
- Cloud: Azure / AWS (basic understanding of data services)
- Version control: Git, GitHub

Roles & Responsibilities:
- Develop and maintain Python-based data pipelines to collect, clean, and transform data from various sources (databases, APIs, files).
- Design and deploy interactive Power BI dashboards and reports to visualize key metrics and KPIs.
- Use DAX and Power Query to create calculated columns, measures, and efficient data models in Power BI.
- Automate repetitive data tasks using Python scripts and integrate them into reporting workflows.
- Collaborate with business stakeholders to gather requirements and translate them into visual and analytical solutions.
- Optimize data models and queries for performance and scalability across Power BI and Python environments.
- Publish and manage Power BI dashboards via the Power BI Service and configure scheduled data refreshes.
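A small sketch of the JSON-to-table step such a pipeline might perform before handing data to a dashboard. The payload is invented for illustration; in practice the JSON would come from an API response:

```python
import json
import pandas as pd

# Pretend API payload (JSON string), normalised into a flat table.
payload = json.loads(
    '{"records": [{"region": "north", "sales": 120},'
    ' {"region": "south", "sales": 80}]}'
)
df = pd.json_normalize(payload["records"])

# A derived KPI column, the kind of field a Power BI report would consume.
df["share"] = df["sales"] / df["sales"].sum()
```

From here the table could be written to CSV or a database for the Power BI data model to pick up on its scheduled refresh.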

Posted 1 month ago

1.0 - 6.0 years

1 - 3 Lacs

Pune

Work from Office

Experience: 1-5 years | Salary: as per company norms | Location: Chinchwad, Pune, Maharashtra | Joining: immediate | Industry type: Education / Training | Department: Trainer / Teaching Faculty | Work mode: work from office (full-time) | Notice period: immediate

Job Description: We are looking for an experienced and dynamic Data Science Trainer to join our team. The ideal candidate will be responsible for designing and delivering training sessions on business analytics concepts, tools, and applications. You will equip learners with the skills needed to interpret data, derive insights, and make strategic business decisions.

Skills: Power BI, Advanced Excel, SQL, Machine Learning, Artificial Intelligence, Python

Responsibilities and Duties:
- Devise technical training programs according to organizational requirements.
- Produce training schedules and classroom agendas, and execute training sessions.
- Determine course content according to objectives.
- Prepare training material (presentations, worksheets, etc.).
- Keep and report data on completed courses, absences, issues, etc.
- Observe and evaluate the results of training programs; determine their overall effectiveness and make improvements.

(Note: part-time applicants, please do not apply.) Interested candidates can send their CV to apardeshi@sevenmentor.com or contact 8806178325.

Posted 1 month ago

2.0 - 5.0 years

5 - 15 Lacs

Chennai

Work from Office

Role: Python Developer Experience: 2-5 years Location: Chennai (Work-from-Office) Summary of the profile: Develops information systems by studying operations; designs, develops, and installs software solutions; supports and mentors the software team. What you'll do here: Develop software solutions by studying information needs, conferring with users, studying systems flow, data usage, and work processes, investigating problem areas, and following the software development lifecycle. Implement and maintain Django-based applications. Use server-side logic to integrate user-facing elements. Develop software related to asset management. Write and implement software solutions that integrate different systems. Identify and suggest opportunities to improve efficiency and functionality. Coordinate the workflow between the graphic designer, the HTML coder, and yourself. Create self-contained, reusable, and testable modules and components. Continuously discover, evaluate, and implement new technologies to maximize development efficiency. Unit-test code for robustness, including edge cases, usability, and general reliability. Provide information by collecting, analyzing, and summarizing development and service issues. What you will need to thrive: 2 to 6 years of experience in Django, Python, APIs, NumPy, Pandas, PostgreSQL, Git, AWS, Docker, REST, NoSQL, MySQL, JavaScript. Experience working with Windows and Linux operating systems. Familiarity with Python modules such as SQLAlchemy, Sanic, Flask, Django, NumPy, Pandas, and visualization modules. Experience in writing tests with PyTest. Strong knowledge of REST APIs. Experience working in an Agile environment, particularly with Scrum/Kanban processes. Ability to deliver required functionality working with UI as well as database engineers. Good teamworking skills are a must for this role. Good problem-solving abilities. Accountability and strong attention to detail. Working knowledge of Git and GitHub. Education & Experience: Bachelor's degree in IT, Information Security, or a related field required; Master's degree desirable. Core competencies: • Communication, Coding & Team management
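The posting above asks for unit-testing code for robustness with PyTest, including edge cases. A minimal sketch of that style (the `slugify` helper is a hypothetical example function, not from the posting):

```python
import re

def slugify(title: str) -> str:
    """Normalize a title into a URL-safe slug."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# PyTest discovers functions named test_*; each assert is one check.
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_edge_cases():
    assert slugify("") == ""            # empty input
    assert slugify("  --  ") == ""      # separators only
    assert slugify("Django 4.2!") == "django-4-2"
```

Running `pytest` on a file containing these functions executes both tests; the edge-case test is what distinguishes robust coverage from happy-path-only coverage.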

Posted 1 month ago

Apply

1.0 - 4.0 years

2 - 5 Lacs

Chennai

Work from Office

JOB DESCRIPTION Title: Senior Software Engineer - Data Science Role Type: Full time Location: Chennai Reports to: Data Scientist Summary of the profile: We are seeking a highly motivated and innovative Data Scientist with 3-5 years of experience to join our team, with a focus on leveraging Generative AI and Large Language Models (LLMs). The ideal candidate will be passionate about exploring the potential of these cutting-edge technologies to solve complex problems and create impactful solutions. You will collaborate with cross-functional teams to research, develop, and deploy generative AI models and LLM-powered applications.
What you'll do here:
• Explore and experiment with various generative AI models (e.g., GANs, VAEs, diffusion models) and LLMs (e.g., GPT, BERT, Llama).
• Fine-tune pre-trained LLMs for specific tasks and domains.
• Develop and implement generative models for tasks like data augmentation, content generation, and synthetic data creation.
• Prompt engineering and prompt iteration.
• Collect, clean, and preprocess large text datasets for LLM training and evaluation.
• Perform natural language processing (NLP) tasks such as text classification, sentiment analysis, and entity recognition.
• Use LLMs to assist in data cleaning and data understanding.
• Develop and evaluate machine learning models, including those leveraging generative AI and LLMs.
• Evaluate model performance using relevant metrics and techniques, including those specific to LLMs (e.g., perplexity, BLEU score).
• Tune hyperparameters to optimize model performance.
• Create compelling visualizations and reports to communicate insights from generative AI and LLM experiments.
• Present findings to both technical and non-technical audiences.
• Collaborate with engineers, product managers, and researchers to define project requirements and deliver solutions.
• Communicate effectively with team members and stakeholders, both verbally and in writing.
• Stay up-to-date with the latest advancements in generative AI and LLMs.
• Assist in deploying generative AI models and LLM-powered applications to production environments.
• Monitor model performance and retrain models as needed.
What you will need to thrive:
• 3-5 years of professional experience in data science, with a growing interest in Generative AI and LLMs.
• Proficiency in Python and relevant libraries (e.g., NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch, Hugging Face Transformers).
• Experience with NLP techniques and libraries.
• Understanding of machine learning algorithms and statistical concepts.
• Experience with data visualization tools (e.g., Matplotlib, Seaborn, Tableau, Power BI).
• Excellent problem-solving and analytical skills.
• Strong communication and collaboration skills.
• Experience with prompt engineering, version control systems like Git, and cloud platforms like GCP, AWS, or Azure is a plus.
Preferred Skills:
• Experience with fine-tuning LLMs for specific tasks.
• Experience with deploying LLMs in production environments.
• Knowledge of deep learning architectures relevant to generative AI and LLMs.
• Familiarity with vector databases.
• Experience with RAG (Retrieval Augmented Generation) architectures.
• Experience with the LangChain library.
Education:
• Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or a related field.
Core competencies:
• Communication, Analytical & Problem-Solving Skills
Copyright © RESULTICKS Solution Inc
About RESULTICKS: RESULTICKS is a fully integrated connected experience platform for real-time audience engagement. RESULTICKS empowers brands to create data-driven communications to their audiences and drives business decisions through its award-winning, AI-powered big data cloud solution. Supported by the world's first customer data blockchain, RESULTICKS unlocks incredible digital transformation possibilities by delivering the full 360 view of the customer through data consolidation and omnichannel orchestration. Built from the ground up by experts in marketing, technology, and business management, RESULTICKS enables brands to make a transformational leap to conversion-driven, growth-focused personalized engagement. With an advanced CDP at its core, RESULTICKS offers AI-powered omnichannel orchestration, complete analytics, next-best engagement, attribution at the segment-of-one level, and the world's first marketing blockchain.
• RESULTICKS has been named to the Gartner Magic Quadrant 2021 for Multichannel Marketing Hubs for five years in a row and has been awarded the Microsoft Elevating Customer Experience with AI award.
• Headquartered in Singapore and New York City, RESULTICKS is one of the top global martech solutions servicing both B2B/B2B2C and B2C segments. RESULTICKS' global presence includes the United States, India, Singapore, Indonesia, Malaysia, Vietnam, Thailand, and the Philippines.
Watch video on RESULTICKS: https://www.youtube.com/watch?v=G_OwGy6unP8
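The RAG (Retrieval Augmented Generation) architecture named in the posting pairs a retriever over embedded documents with an LLM prompt. A minimal sketch of the retrieval-and-prompt step using toy bag-of-words vectors (real systems use learned embeddings and a vector database; the documents and helpers here are purely illustrative):

```python
import math
from collections import Counter

# Toy corpus standing in for an indexed document store.
DOCS = [
    "Pandas DataFrames support joins, grouping, and time-series operations.",
    "RESTful APIs expose resources over HTTP using JSON payloads.",
    "Fine-tuning adapts a pre-trained language model to a narrow domain.",
]

def embed(text: str) -> Counter:
    # Toy "embedding": lowercase bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list:
    q = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # Retrieved passages are prepended as grounding context for the LLM call.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("how does fine-tuning a language model work"))
```

The final string would be sent to an LLM; swapping `embed` for a real embedding model and `DOCS` for a vector database gives the production shape of the same pipeline.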

Posted 1 month ago

Apply

4.0 - 7.0 years

4 - 7 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

Apex 2000 INC is looking for a Python Developer to join our dynamic team and embark on a rewarding career journey. Coordinate with development teams to determine application requirements. Write effective, scalable code in Python. Test and debug applications. Develop back-end components to improve responsiveness and overall performance. Integrate user-facing elements using server-side logic. Assess and prioritize client feature requests. Integrate data storage solutions. Reprogram existing databases to improve functionality. Develop digital tools to monitor online traffic. Implement security and data protection solutions. Coordinate with internal teams to understand user requirements and provide technical solutions. Skills: Python Developer, Flask, MySQL, Django

Posted 1 month ago

Apply

3.0 - 8.0 years

6 - 14 Lacs

Chennai

Work from Office

Job Title: Python - AI/ML Developer Job Type - Full Time, Permanent Work Type - Work from Office Location - Chennai. Job Description We are seeking highly skilled and motivated Python Developers with AI/ML experience and familiarity with prompt engineering and LLM integrations to join our Innovations Team. The team is responsible for exploring emerging technologies, building proof-of-concept (PoC) applications, and delivering cutting-edge AI/ML solutions that drive strategic transformation and operational efficiency. As a core member of the Innovations Team, you will work on AI-powered products, rapid prototyping, and intelligent automation initiatives across domains such as mortgage tech, document intelligence, and generative AI. Key Responsibilities: Design, develop, and deploy scalable AI/ML solutions and prototypes. Build data pipelines, clean datasets, and engineer features for training models. Apply deep learning, NLP, and classical ML techniques. Integrate AI models into backend services using Python (e.g., FastAPI, Flask). Collaborate with cross-functional teams (e.g., UI/UX, DevOps, product managers). Evaluate and experiment with emerging open-source models (e.g., LLaMA, Mistral, GPT). Stay current with advancements in AI/ML and suggest opportunities for innovation. Required Skill Set: Technical Skills: Programming Languages: Python (strong proficiency), experience with NumPy, Pandas, Scikit-learn. AI/ML Frameworks: TensorFlow, PyTorch, HuggingFace Transformers, OpenCV (nice to have). NLP & LLMs: Experience with language models, embeddings, fine-tuning, and vector search. Prompt Engineering: Experience designing and optimizing prompts for LLMs (e.g., GPT, Claude, LLaMA) for various tasks such as summarization, Q&A, document extraction, and multi-agent orchestration. Backend Development: FastAPI or Flask for model deployment and REST APIs. Data Handling: Experience in data preprocessing, feature engineering, and handling large datasets.
Version Control: Git and GitHub. Database Experience: SQL and NoSQL databases; vector DBs like FAISS, ChromaDB, or Qdrant preferred. Nice to Have (Optional): Experience with Docker, Kubernetes, or cloud environments (Azure, AWS). Familiarity with LangChain, LlamaIndex, or multi-agent frameworks (CrewAI, AutoGen). Soft Skills: Strong problem-solving and analytical thinking. Eagerness to experiment and explore new technologies. Excellent communication and teamwork skills. Ability to work independently in a fast-paced, dynamic environment. Innovation mindset with a focus on rapid prototyping and proof-of-concepts. Educational Qualification: Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field. Certifications in AI/ML or cloud platforms (Azure ML, TensorFlow Developer, etc.) are a plus.
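The posting asks for designing prompts for tasks such as document extraction. A minimal sketch of a structured-extraction prompt template (the field names and the `render_prompt` helper are hypothetical illustrations; the actual LLM call is elided):

```python
import json

# Fields we want the LLM to pull out of a mortgage document (illustrative).
EXTRACTION_FIELDS = ["borrower_name", "loan_amount", "interest_rate"]

PROMPT_TEMPLATE = """You are a document-extraction assistant.
Extract the following fields from the document and reply with JSON only.
Fields: {fields}
If a field is absent, use null.

Document:
\"\"\"{document}\"\"\"
"""

def render_prompt(document: str) -> str:
    # Keeping the field list in one place makes prompt iteration reproducible:
    # change EXTRACTION_FIELDS, re-run the evaluation set, compare outputs.
    return PROMPT_TEMPLATE.format(fields=json.dumps(EXTRACTION_FIELDS),
                                  document=document)

print(render_prompt("Borrower Jane Doe requests a loan of $250,000 at 6.5%."))
```

Asking for JSON-only output with an explicit null convention is a common prompt-engineering tactic because it makes the LLM response machine-parseable and failures easy to detect.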

Posted 1 month ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Navi Mumbai

Work from Office

As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvement by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to build creative solutions. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Experience with Apache Spark (PySpark): In-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big Data Technologies: Familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data Engineering Skills: Strong understanding of ETL pipelines, data modeling, and data warehousing concepts. Strong proficiency in Python: Expertise in Python programming with a focus on data processing and manipulation. Data Processing Frameworks: Knowledge of data processing libraries such as Pandas and NumPy. SQL Proficiency: Experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud Platforms: Experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems. Preferred technical and professional experience Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering. Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.
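The role above centers on ETL pipelines built with Python, Pandas, and NumPy. A minimal extract-transform-load sketch (the column names, cleaning rules, and inline CSV are illustrative assumptions; a real pipeline would read from HDFS, Kafka, or cloud storage):

```python
import io
import pandas as pd

# Stand-in for an upstream source such as HDFS or an S3 bucket.
RAW_CSV = """order_id,amount,region
1,120.5,south
2,,north
3,87.0,south
"""

def run_pipeline(raw: str) -> pd.DataFrame:
    df = pd.read_csv(io.StringIO(raw))       # extract
    df["amount"] = df["amount"].fillna(0.0)  # transform: impute missing values
    df["region"] = df["region"].str.upper()  # transform: normalize categories
    # Load-ready aggregate, e.g. for a warehouse fact table.
    return df.groupby("region", as_index=False)["amount"].sum()

print(run_pipeline(RAW_CSV))
```

The same extract/transform/load shape scales up in PySpark by swapping `pd.read_csv` for `spark.read.csv` and the Pandas transforms for DataFrame column expressions.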

Posted 1 month ago

Apply

3.0 - 7.0 years

5 - 9 Lacs

Bengaluru

Work from Office

In your role, you may be responsible for: Implementing and validating predictive and prescriptive models and creating and maintaining statistical models with a focus on big data, incorporating machine learning techniques in your projects. Writing programs to cleanse and integrate data in an efficient and reusable manner. Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours. Communicating with internal and external clients to understand and define business needs and appropriate modelling techniques to provide analytical solutions. Evaluating modelling results and communicating the results to technical and non-technical audiences. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Proof of Concept (POC) Development: Develop POCs to validate and showcase the feasibility and effectiveness of the proposed AI solutions. Collaborate with development teams to implement and iterate on POCs, ensuring alignment with customer requirements and expectations. Help showcase the ability of a Gen AI code assistant to refactor/rewrite and document code from one language to another (particularly COBOL to Java) through rapid prototypes/PoCs. Document solution architectures, design decisions, implementation details, and lessons learned. Create technical documentation, white papers, and best practice guides. Preferred technical and professional experience Strong programming skills, with proficiency in Python and experience with AI frameworks such as TensorFlow, PyTorch, Keras, or Hugging Face. Understanding of libraries such as scikit-learn, Pandas, Matplotlib, etc. Familiarity with cloud platforms. Experience and working knowledge in COBOL and Java would be preferred. Experience in Python and PySpark is an added advantage.
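The role asks for implementing and validating predictive statistical models. A minimal sketch of fitting and validating a linear model with NumPy least squares (the synthetic data and the in-sample R² check are illustrative; real work would use held-out splits and richer models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 2x + 1 plus small Gaussian noise.
x = rng.uniform(0, 10, size=100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, size=100)

# Design matrix with an intercept column; solve min ||Xw - y||^2.
X = np.column_stack([x, np.ones_like(x)])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
slope, intercept = w

# Validate with R^2 on the fitted data (use a held-out split in practice).
pred = X @ w
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"slope={slope:.2f} intercept={intercept:.2f} r2={r2:.3f}")
```

The fitted slope and intercept should land near the generating values 2 and 1, which is the basic sanity check before moving to scikit-learn estimators or bigger feature sets.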

Posted 1 month ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Mumbai

Work from Office

As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvement by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to build creative solutions. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Experience with Apache Spark (PySpark): In-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big Data Technologies: Familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data Engineering Skills: Strong understanding of ETL pipelines, data modelling, and data warehousing concepts. Strong proficiency in Python: Expertise in Python programming with a focus on data processing and manipulation. Data Processing Frameworks: Knowledge of data processing libraries such as Pandas and NumPy. SQL Proficiency: Experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud Platforms: Experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems. Preferred technical and professional experience Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering. Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.

Posted 1 month ago

Apply

3.0 - 8.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Python Developer Location: Bengaluru, Karnataka, India Experience Level: 3-5 Years Employment Type: Full-Time Role Overview We are seeking a skilled Python Developer with a strong background in data manipulation and analysis using NumPy and Pandas, coupled with proficiency in SQL. The ideal candidate will have experience in building and optimizing data pipelines, ensuring efficient data processing and integration. Key Responsibilities Develop and maintain robust data pipelines and ETL processes using Python, NumPy, and Pandas. Write efficient SQL queries for data extraction, transformation, and loading. Collaborate with cross-functional teams to understand data requirements and deliver solutions. Implement data validation and quality checks to ensure data integrity. Optimize existing codebases for performance and scalability. Document processes and maintain clear records of data workflows. Required Qualifications Bachelor's degree in Computer Science, Engineering, or a related field. 2-5 years of professional experience in Python development. Proficiency in NumPy and Pandas for data manipulation and analysis. Strong command of SQL and experience with relational databases like MySQL, PostgreSQL, or SQL Server. Familiarity with version control systems, particularly Git. Experience with data visualization tools and libraries is a plus. Preferred Skills Experience with data visualization libraries such as Matplotlib or Seaborn. Familiarity with cloud platforms like AWS, Azure, or GCP. Knowledge of big data tools and frameworks like Spark or Hadoop. Understanding of machine learning concepts and libraries. Why Join Enterprise Minds Enterprise Minds is a forward-thinking technology consulting firm dedicated to delivering next-generation solutions. By joining our team, you'll work on impactful projects, collaborate with industry experts, and contribute to innovative solutions that drive business transformation.
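The responsibilities above include implementing data validation and quality checks with Pandas. A minimal sketch of rule-based checks on a DataFrame (the column names and rules are illustrative assumptions, not from the posting):

```python
import pandas as pd

def validate(df: pd.DataFrame) -> list:
    """Return a list of human-readable data-quality violations."""
    problems = []
    if df["id"].duplicated().any():
        problems.append("duplicate ids")
    if df["price"].lt(0).any():
        problems.append("negative prices")
    # Null checks per column, reported with counts.
    for col, n in df[["id", "price"]].isna().sum().items():
        if n:
            problems.append(f"{n} null(s) in {col}")
    return problems

df = pd.DataFrame({"id": [1, 2, 2], "price": [9.99, -1.0, None]})
print(validate(df))
```

In a pipeline these checks typically run right after extraction, with a non-empty result either failing the job or routing the bad rows to a quarantine table.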

Posted 1 month ago

Apply
cta

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies