
1583 Pandas Jobs - Page 38


8.0 - 13.0 years

13 - 18 Lacs

Bengaluru

Work from Office

Project Role: Data Architect
Project Role Description: Define the data requirements and structure for the application. Model and design the application data structure, storage, and integration.
Must have skills: Google Cloud Platform Architecture
Good to have skills: Google Cloud Machine Learning Services
Minimum 15 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: We are looking for a seasoned Senior Manager CCAI Architect with deep expertise in designing and delivering enterprise-grade Conversational AI solutions using Google Cloud's Contact Center AI (CCAI) suite. This role demands a visionary leader who can bridge the gap between business objectives and AI-driven customer experience innovations. As a senior leader, you will drive architectural decisions, guide cross-functional teams, and define the roadmap for scalable and intelligent virtual agent platforms.

Roles & Responsibilities:
- Own the end-to-end architecture and solution design for large-scale CCAI implementations across industries.
- Define best practices, reusable frameworks, and architectural patterns using Google Dialogflow CX, Agent Assist, Knowledge Bases, and Gen AI capabilities.
- Act as a strategic advisor to stakeholders on how to modernize and transform customer experience through Conversational AI.
- Lead technical teams and partner with product, operations, and engineering leaders to deliver high-impact AI-first customer service platforms.
- Oversee delivery governance, performance optimization, and scalability of deployed CCAI solutions.
- Evaluate and integrate cutting-edge Gen AI models (LLMs, PaLM, Gemini) to enhance virtual agent performance and personalization.
- Enable and mentor architects, developers, and consultants on Google Cloud AI/ML tools and CCAI strategies.
- 10+ years of experience in enterprise solution architecture, with 4+ years in Google Cloud AI/ML and Conversational AI platforms.
- Deep expertise in Dialogflow CX, CCAI Insights, Agent Assist, Gen AI APIs, and GCP architecture.
- Strong leadership in managing large transformation programs involving AI chatbots, voice bots, and omnichannel virtual agents.
- Proven ability to engage with senior stakeholders, define AI strategies, and align technical delivery with business goals.
- Experience integrating AI solutions with CRMs, contact center platforms (e.g., Genesys, Five9), and backend systems.
- Certifications in Google Cloud (e.g., Professional Cloud Architect, Cloud AI Engineer) are a strong plus.
- Exceptional communication, thought leadership, and stakeholder management skills.

Professional & Technical Skills:
- Must Have Skills: Hands-on CCAI/Dialogflow CX experience and an understanding of generative AI.
- Good To Have Skills: Cloud Data Architecture; Cloud ML/PCA/PDE certification.
- Strong understanding of AI/ML algorithms, NLP, and related techniques.
- Experience with chatbots, generative AI models, and prompt engineering.
- Experience with cloud or on-prem application pipelines of production-ready quality.

Additional Information:
- The candidate should have a minimum of 18 years of experience in Google Cloud Machine Learning Services/Gen AI/Vertex AI/CCAI.
- The ideal candidate will possess a strong educational background in computer science, mathematics, or a related field, along with a proven track record of delivering impactful data-driven solutions.
- A 15-year full-time education is required.

Qualification: 15 years full time education

Posted 1 month ago


5.0 - 10.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Project Role: Analytics Practitioner
Project Role Description: Drive innovation and intellectual property (IP) around specific analytics models and offerings.
Must have skills: Python (Programming Language)
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: Any degree in computer science

Summary: As an Analytics Practitioner, you will drive innovation and intellectual property around specific analytics models and offerings. A typical day involves collaborating with various teams to enhance analytics capabilities, developing new models, and ensuring the effective implementation of analytics solutions that meet business needs. You will engage in problem-solving activities, leveraging your expertise to provide insights and recommendations that support strategic decision-making.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and ensure alignment with organizational goals.

Professional & Technical Skills:
- Must Have Skills: Proficiency in Python (Programming Language).
- Strong analytical skills to interpret complex data sets.
- Experience with data manipulation and analysis libraries such as Pandas and NumPy.
- Familiarity with data visualization tools to present findings effectively.
- Ability to develop and implement machine learning models.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Python (Programming Language).
- This position is based in Hyderabad.
- Any degree in computer science is required.

Qualification: Any degree in computer science

Posted 1 month ago


7.0 - 12.0 years

4 - 9 Lacs

Noida

Work from Office

We are looking for a highly skilled Senior Python Developer with strong expertise in Django to join our engineering team. In this role, you will be responsible for designing, developing, and maintaining scalable web applications. You will work closely with cross-functional teams to deliver high-quality software solutions that meet business objectives.

Key Responsibilities:
- Design and develop robust, scalable, and secure backend services using Python and Django.
- Lead the development of key modules and provide guidance to junior developers.
- Architect APIs and integrate third-party services.
- Write reusable, testable, and efficient code following best practices.
- Collaborate with front-end developers to integrate user-facing elements using server-side logic.
- Perform code reviews, mentor team members, and enforce coding standards.
- Optimize application performance and troubleshoot performance bottlenecks.
- Ensure data security and compliance with best practices.
- Participate in system architecture and design decisions.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 7–11 years of professional experience in Python development.
- At least 4 years of experience with the Django framework.
- Strong understanding of RESTful APIs, ORM, and Django architecture.
- Experience with PostgreSQL, MySQL, or similar relational databases.
- Familiarity with Docker, Kubernetes, or other container technologies.
- Proficiency with version control systems (e.g., Git).
- Experience in CI/CD pipelines and deployment automation.
- Familiarity with cloud platforms like AWS, Azure, or GCP.
- Strong analytical and problem-solving skills.
- Excellent communication and leadership abilities.

Nice to Have:
- Experience with Django REST Framework (DRF).
- Exposure to front-end technologies like React, Vue, or Angular.
- Experience with asynchronous programming and tools like Celery or RabbitMQ.
- Familiarity with microservices architecture.
- Contributions to open-source projects or technical blogs.

Posted 1 month ago


3.0 - 5.0 years

3 - 5 Lacs

Ahmedabad

Work from Office

We are seeking a skilled Python Developer to join our team. The ideal candidate will be responsible for working with existing APIs or developing new APIs based on our requirements. You should have a strong foundation in Python and experience with RESTful services and cloud infrastructure.

Requirements:
- Strong understanding of Python
- Experience with RESTful services and cloud infrastructure
- Ability to develop microservices/functions
- Familiarity with libraries such as Pandas, NumPy, and Matplotlib
- Basic understanding of SQL and databases
- Ability to write clean, maintainable code
- Experience deploying applications at scale in production environments
- Experience with web scraping using tools like BeautifulSoup, Scrapy, or Selenium
- Knowledge of equities, futures, or options microstructures is a plus
- Experience with data visualization and dashboard building is a plus

Why Join Us?
- Opportunity to work on high-impact real-world projects
- Exposure to cutting-edge technologies and financial datasets
- A collaborative, supportive, and learning-focused team culture
- 5-day work week (Monday to Friday)
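The requirements above include web scraping with tools like BeautifulSoup or Scrapy. As a hedged, dependency-free sketch of the same idea, the snippet below uses Python's standard-library `html.parser` instead; the markup and the `price` class are invented for illustration.

```python
from html.parser import HTMLParser

class PriceParser(HTMLParser):
    """Collects the text of <span class="price"> elements (hypothetical markup)."""
    def __init__(self):
        super().__init__()
        self._in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("class", "price") in attrs:
            self._in_price = True

    def handle_data(self, data):
        if self._in_price:
            self.prices.append(data.strip())
            self._in_price = False

html = '<div><span class="price">101.5</span><span class="price">99.2</span></div>'
parser = PriceParser()
parser.feed(html)
print(parser.prices)  # ['101.5', '99.2']
```

In practice the HTML would come from an HTTP response, and a dedicated library such as BeautifulSoup would handle malformed markup more gracefully.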

Posted 1 month ago


3.0 - 4.0 years

3 - 6 Lacs

Bengaluru

Work from Office

About the job: Developing and maintaining Python applications, designing and optimizing application performance, and implementing security and data protection with MongoDB.

Responsibilities:
- Develop applications: Create and maintain software applications using Python
- Optimize performance: Design and optimize application performance, usability, and scalability
- Troubleshoot: Debug applications to ensure low latency and high availability
- Implement security: Implement database security, backup, and recovery best practices
- Automate tasks: Design and maintain Python scripts to automate tasks
- Database management: Experience with MongoDB or similar databases, including database design, query optimization, and data modeling

Skills:
- Proficiency in the Python programming language (Pandas, BeautifulSoup, FastAPI)
- Ability to design and maintain Python scripts
- Ability to troubleshoot and debug applications
- Ability to implement test-driven development and automated testing
- Ability to maintain and configure MongoDB instances
- Ability to create RESTful APIs with FastAPI or Django
- Ability to parse PDF and XML files
- Strong analytical and problem-solving skills
- Familiarity with version control systems like Git
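The skills list above mentions parsing XML files. A minimal sketch with the standard library's `xml.etree.ElementTree`; the invoice structure below is hypothetical, standing in for real documents that would be read from a file or API response before being stored (e.g., in MongoDB).

```python
import xml.etree.ElementTree as ET

# Hypothetical invoice XML; real documents would come from a file or API response.
xml_doc = """
<invoices>
  <invoice id="A1"><amount>120.50</amount></invoice>
  <invoice id="A2"><amount>80.00</amount></invoice>
</invoices>
"""

root = ET.fromstring(xml_doc)
# Extract (id, amount) pairs into plain Python structures for later storage.
records = [
    {"id": inv.get("id"), "amount": float(inv.findtext("amount"))}
    for inv in root.iter("invoice")
]
total = sum(r["amount"] for r in records)
print(records, total)  # two records, total 200.5
```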

Posted 1 month ago


3.0 - 8.0 years

6 - 12 Lacs

Mumbai

Work from Office

Shift: (GMT+05:30) Asia/Kolkata (IST)

Must have skills required: Machine Learning, NumPy, Data Cleaning, Python, Model evaluation, pandas, Statistics

We are seeking a talented Data Scientist II to join our team. The ideal candidate will have 2-5 years of experience in data science and possess expertise in machine learning, deep learning, Python programming, SQL, Amazon Redshift, NLP, and AWS Cloud.

Duties and Responsibilities:
- Develop and implement machine learning models to extract insights from large datasets.
- Utilize deep learning techniques to enhance data analysis and predictive modeling.
- Write efficient Python code to manipulate and analyze data.
- Work with SQL databases to extract and transform data for analysis.
- Utilize Amazon Redshift for data warehousing and analytics.
- Apply NLP techniques to extract valuable information from unstructured data.
- Utilize AWS Cloud services for data storage, processing, and analysis.

Qualifications and Requirements:
- Bachelor's degree in Computer Science, Statistics, Mathematics, or a related field.
- 3-5 years of experience in data science or a related field.
- Proficiency in machine learning, deep learning, Python programming, SQL, Amazon Redshift, NLP, and AWS Cloud.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.

Key Competencies:
- Strong analytical skills.
- Problem-solving abilities.
- Proficiency in machine learning and deep learning techniques.
- Excellent programming skills in Python.
- Knowledge of SQL and database management.
- Familiarity with Amazon Redshift, NLP, and AWS Cloud services.

Performance Expectations:
- Develop and deploy advanced machine learning models.
- Extract valuable insights from complex datasets.
- Collaborate with cross-functional teams to drive data-driven decision-making.
- Stay updated on the latest trends and technologies in data science.

We are looking for a motivated and skilled Data Scientist to join our team and contribute to our data-driven initiatives. If you meet the qualifications and are passionate about data science, we encourage you to apply.

Posted 1 month ago


2.0 - 7.0 years

15 - 25 Lacs

Pune

Work from Office

Experience: 2+ years
Expected Notice Period: 30 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Office (Pune)
Placement Type: Full-Time Permanent position

Must have skills required: Airflow, LLMs, NLP, Statistical Modeling, Predictive Analysis, Forecasting, Python, SQL, MLflow, pandas, Scikit-learn, XGBoost

As an ML / Data Science Engineer at Anervea, you'll work on designing, training, deploying, and maintaining machine learning models across multiple products. You'll build models that predict clinical trial outcomes, extract insights from structured and unstructured healthcare data, and support real-time scoring for sales or market access use cases. You'll collaborate closely with AI engineers, backend developers, and product owners to translate data into product features that are explainable, reliable, and impactful.

Key Responsibilities:
- Develop and optimize predictive models using algorithms such as XGBoost, Random Forest, Logistic Regression, and ensemble methods
- Engineer features from real-world healthcare data (clinical trials, treatment adoption, medical events, digital behavior)
- Analyze datasets from sources like ClinicalTrials.gov, PubMed, Komodo, Apollo.io, and internal survey pipelines
- Build end-to-end ML pipelines for inference and batch scoring
- Collaborate with AI engineers to integrate LLM-generated features with traditional models
- Ensure explainability and robustness of models using SHAP, LIME, or custom logic
- Validate models against real-world outcomes and client feedback
- Prepare clean, structured datasets using SQL and Pandas
- Communicate insights clearly to product, business, and domain teams
- Document all processes, assumptions, and model outputs thoroughly

Technical Skills Required:
- Strong programming skills in Python (NumPy, Pandas, scikit-learn, XGBoost, LightGBM)
- Experience with statistical modeling and classification algorithms
- Solid understanding of feature engineering, model evaluation, and validation techniques
- Exposure to real-world healthcare, trial, or patient data (strong bonus)
- Comfortable working with unstructured data and data cleaning techniques
- Knowledge of SQL and NoSQL databases
- Familiarity with ML lifecycle tools (MLflow, Airflow, or similar)
- Bonus: experience working alongside LLMs or incorporating generative features into ML
- Bonus: knowledge of NLP preprocessing, embeddings, or vector similarity methods

Personal Attributes:
- Strong analytical and problem-solving mindset
- Ability to convert abstract questions into measurable models
- Attention to detail and high standards for model quality
- Willingness to learn life sciences concepts relevant to each use case
- Clear communicator who can simplify complexity for product and business teams
- Independent learner who actively follows new trends in ML and data science
- Reliable, accountable, and driven by outcomes, not just code

Bonus Qualities:
- Experience building models for healthcare, pharma, or biotech
- Published work or open-source contributions in data science
- Strong business intuition on how to turn models into product decisions
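The training-and-evaluation workflow this posting describes can be sketched with scikit-learn as follows. This is illustrative only: the data is synthetic, and `LogisticRegression` stands in for the XGBoost/LightGBM models the posting names, since the pipeline shape is the same.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for engineered features; real ones would come from the
# sources named above (ClinicalTrials.gov, internal pipelines, etc.).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Scaling + classifier in one pipeline keeps preprocessing inside the fit step.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {auc:.3f}")
```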

Posted 1 month ago


2.0 - 4.0 years

2 - 8 Lacs

Jaipur

Work from Office

Responsibilities:
- Work on end-to-end API integrations (REST, WebSocket)
- Implement and optimize data pipelines using Pandas and NumPy
- Use DSA to solve real-world performance-critical problems
- Handle database interactions (SQLite, PostgreSQL, or MongoDB)
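As a small illustration of the Pandas pipeline work described above, the sketch below computes a per-symbol volume-weighted average price from invented tick data; the column names are assumptions, not a real feed schema.

```python
import pandas as pd

# Hypothetical ticks as they might arrive from a REST/WebSocket feed.
ticks = pd.DataFrame({
    "symbol": ["ABC", "ABC", "XYZ", "XYZ"],
    "price":  [100.0, 101.0, 50.0, 49.0],
    "qty":    [10, 20, 5, 5],
})

# Vectorised transform: notional value per tick, then VWAP per symbol.
ticks["notional"] = ticks["price"] * ticks["qty"]
grouped = ticks.groupby("symbol")
vwap = grouped["notional"].sum() / grouped["qty"].sum()
print(vwap)
```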

Posted 1 month ago


3.0 - 6.0 years

9 - 19 Lacs

Hyderabad

Hybrid

We're looking for a Python-based AI/ML Developer who brings solid hands-on experience in building machine learning models and deploying them into scalable, production-ready APIs using FastAPI or Django. The ideal candidate is both analytical and implementation-savvy, capable of transforming models into live services and integrating them with real-world systems.

Key Responsibilities:
- Design, train, and evaluate machine learning models (classification, regression, clustering, etc.)
- Build and deploy scalable REST APIs for model serving using FastAPI or Django
- Collaborate with data scientists, backend developers, and DevOps to integrate models into production systems
- Develop clean, modular, and optimized Python code using best practices
- Perform data preprocessing, feature engineering, and data visualization using Pandas, NumPy, Matplotlib, and Seaborn
- Implement model serialization techniques (Pickle, Joblib, ONNX) and deploy models using containers (Docker)
- Manage API security with JWT and OAuth mechanisms
- Participate in Agile development with code reviews, Git workflows, and CI/CD pipelines

Must-Have Skills:
- Python & Development: Proficient in Python 3.x, OOP, and clean code principles; experience with Git, Docker, debugging, and unit testing
- AI/ML: Good grasp of supervised/unsupervised learning, model evaluation, and data wrangling; hands-on with Scikit-learn, XGBoost, LightGBM
- Web Frameworks: FastAPI (API routes, async programming, Pydantic, JWT); Django (REST Framework, ORM, Admin panel, Middleware)
- DevOps & Cloud: Experience with containerized deployment using Docker; exposure to cloud platforms (AWS, Azure, or GCP); CI/CD with GitHub Actions, Jenkins, or GitLab CI
- Databases: SQL (PostgreSQL, MySQL); NoSQL (MongoDB, Redis); ORM (Django ORM, SQLAlchemy)

Bonus/Nice-to-Have Skills:
- Model tracking/versioning tools (MLflow, DVC)
- Knowledge of LLMs, transformers, and vector DBs (Pinecone, Faiss)
- Airflow, Prefect, or other workflow automation tools
- Basic frontend skills (HTML, JavaScript, React)

Requirements:
- Education: B.E./B.Tech or M.E./M.Tech in Computer Science, Data Science, or related fields
- Experience: 3–6 years of industry experience in ML development and backend API integration
- Strong communication skills and ability to work with cross-functional teams
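The model-serialization requirement above (Pickle, Joblib, ONNX) boils down to the pattern sketched below with the standard-library `pickle`; `ThresholdModel` is a made-up stand-in for a trained estimator, and the same save/load shape applies to any picklable model.

```python
import io
import pickle

# Trivial stand-in for a trained model object (hypothetical, for illustration).
class ThresholdModel:
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, x):
        return int(x >= self.threshold)

model = ThresholdModel(0.7)

# Serialize to bytes (in practice: a file shipped inside the Docker image).
buf = io.BytesIO()
pickle.dump(model, buf)

# Deserialize at serving time, e.g. inside an API startup hook.
restored = pickle.loads(buf.getvalue())
print(restored.predict(0.9), restored.predict(0.1))  # 1 0
```

Note that unpickling executes arbitrary code, so serialized models should only ever be loaded from trusted artifacts.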

Posted 1 month ago


8.0 - 13.0 years

20 - 35 Lacs

Hyderabad

Hybrid

Responsibilities and Duties:
- Add support for new platforms to our existing products and develop new products.
- Develop and review designs, code, unit tests, system tests, and documentation.
- Collaborate in root cause analysis; diagnose, isolate, and fix software problems.
- Create backend applications using primarily Python.
- Demonstrate your work product to your team.
- Identify and correct issues that impact performance, reliability, and scalability.
- Investigate and develop skills in new technologies.

Characteristics:
- Extensive knowledge of Python for asynchronous, backend application development
- Working knowledge of the software development lifecycle, including agile methodologies, code quality, and continuous integration/continuous delivery
- Driven to build modern systems that emphasize performance and scalability
- A team player who sees software quality as your responsibility
- Excellent written and verbal communication skills
- An eagerness to learn, explore, and introduce new technologies
- On-call shifts may be required

Education & Experience:
- 8+ years of work experience in software engineering, with considerable experience programming in Python (or a similar object-oriented language) and a focus on asynchronous programming
- Experience with API development, and ideally data ingestion
- Prior work on distributed systems; event-driven architecture knowledge is a big plus and will be very helpful day to day
- Experience with Docker and Jenkins (or a similar CI toolset)
- Dedication to contributing unit tests and other testware with product code
- Experience consuming RESTful interfaces and implementing security good practices
- Familiarity with NoSQL databases and ElasticSearch/OpenSearch; knowledge of cloud computing platforms is a plus
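A minimal sketch of the asynchronous backend style this role emphasizes, using only `asyncio`; the `fetch` coroutine is hypothetical and simulates an I/O-bound call with `sleep`.

```python
import asyncio

# Hypothetical ingestion step: each "fetch" simulates an I/O-bound call that the
# event loop can overlap instead of awaiting sequentially.
async def fetch(source: str) -> str:
    await asyncio.sleep(0.01)  # stands in for an HTTP/database round trip
    return f"{source}:ok"

async def ingest(sources):
    # gather() runs the coroutines concurrently and preserves input order.
    return await asyncio.gather(*(fetch(s) for s in sources))

results = asyncio.run(ingest(["orders", "events", "metrics"]))
print(results)  # ['orders:ok', 'events:ok', 'metrics:ok']
```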

Posted 1 month ago


2.0 - 6.0 years

2 - 7 Lacs

Coimbatore

Work from Office

Responsibilities:
- Design, develop, test, and maintain Python applications using Django/Flask frameworks on the AWS cloud platform.
- Collaborate with cross-functional teams to deliver high-quality software solutions.
- Required knowledge: REST APIs, GitHub/Bitbucket.

Benefits: Health insurance, Provident fund, Food allowance

Posted 1 month ago


7.0 - 12.0 years

25 - 40 Lacs

Gurugram

Remote

Job Title: Senior Data Engineer
Location: Remote
Job Type: Full-time
YoE: 7 to 10 years relevant experience
Shift: 6.30pm to 2.30am IST

Job Purpose: The Senior Data Engineer designs, builds, and maintains scalable data pipelines and architectures to support the Denials AI workflow under the guidance of the Team Lead, Data Management. This role ensures data is reliable, compliant with HIPAA, and optimized.

Duties & Responsibilities:
- Collaborate with the Team Lead and cross-functional teams to gather and refine data requirements for Denials AI solutions.
- Design, implement, and optimize ETL/ELT pipelines using Python, Dagster, DBT, and AWS data services (Athena, Glue, SQS).
- Develop and maintain data models in PostgreSQL; write efficient SQL for querying and performance tuning.
- Monitor pipeline health and performance; troubleshoot data incidents and implement preventive measures.
- Enforce data quality and governance standards, including HIPAA compliance for PHI handling.
- Conduct code reviews, share best practices, and mentor junior data engineers.
- Automate deployment and monitoring tasks using infrastructure-as-code and AWS CloudWatch metrics and alarms.
- Document data workflows, schemas, and operational runbooks to support team knowledge transfer.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 5+ years of hands-on experience building and operating production-grade data pipelines.
- Solid experience with workflow orchestration tools (Dagster) and transformation frameworks (DBT), or other similar tools such as Microsoft SSIS, AWS Glue, or Airflow.
- Strong SQL skills on PostgreSQL for data modeling and query optimization, or on similar technologies (Microsoft SQL Server, Oracle, AWS RDS).
- Working knowledge of AWS data services: Athena, Glue, SQS, SNS, IAM, and CloudWatch.
- Basic proficiency in Python and Python data frameworks (Pandas, PySpark).
- Experience with version control (GitHub) and CI/CD for data projects.
- Familiarity with healthcare data standards and HIPAA compliance.
- Excellent problem-solving skills, attention to detail, and ability to work independently.
- Strong communication skills, with experience mentoring or leading small technical efforts.
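As an illustration of the kind of SQL transform step this role describes, the sketch below uses an in-memory SQLite database as a lightweight stand-in for PostgreSQL; the `claims` table and its columns are invented.

```python
import sqlite3

# In-memory SQLite stands in for PostgreSQL; schema and rows are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (id INTEGER, status TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO claims VALUES (?, ?, ?)",
    [(1, "denied", 200.0), (2, "paid", 150.0), (3, "denied", 50.0)],
)

# Transform step: the kind of denials rollup a Denials AI feed might need.
row = conn.execute(
    "SELECT COUNT(*), SUM(amount) FROM claims WHERE status = 'denied'"
).fetchone()
print(row)  # (2, 250.0)
```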

Posted 1 month ago


5.0 - 8.0 years

18 - 22 Lacs

Bengaluru

Work from Office

Hiring a Python Developer with 5+ years of experience and proven expertise in the BFSI sector. Must have strong skills in Python, Django/Flask, SQL, and APIs. Experience with data pipelines, ETL tools, and cloud is a plus. BFSI background is a must.

Required Candidate profile: Python developer with strong BFSI experience, proficient in Pandas, NumPy, SQLAlchemy, and Spark. Skilled in Django/Flask, SQL/NoSQL databases, ETL tools, APIs, and containerized environments.

Posted 1 month ago


5.0 - 8.0 years

18 - 22 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Hiring a Python Developer with 5+ years of experience and proven expertise in the BFSI sector. Must have strong skills in Python, Django/Flask, SQL, and APIs. Experience with data pipelines, ETL tools, and cloud is a plus. BFSI background is a must.

Required Candidate profile: Python developer with strong BFSI experience, proficient in Pandas, NumPy, SQLAlchemy, and Spark. Skilled in Django/Flask, SQL/NoSQL databases, ETL tools, APIs, and containerized environments.

Posted 1 month ago


7.0 - 12.0 years

15 - 25 Lacs

Thiruvananthapuram

Work from Office

Key Responsibilities:
1. Designing technology systems: Plan and design the structure of technology solutions, and work with design and development teams to assist with the process.
2. Communicating: Communicate system requirements to software development teams, and explain plans to developers and designers. Also communicate the value of a solution to stakeholders and clients.
3. Managing stakeholders: Work with clients and stakeholders to understand their vision for the systems, and manage stakeholder expectations.
4. Architectural oversight: Develop and implement robust architectures for AI/ML and data science solutions, ensuring scalability, security, and performance. Oversee architecture for data-driven web applications and data science projects, providing guidance on best practices in data processing, model deployment, and end-to-end workflows.
5. Problem solving: Identify and troubleshoot technical problems in existing or new systems, and assist with solving technical problems when they arise.
6. Ensuring quality: Ensure systems meet security and quality standards. Monitor systems to ensure they meet both user needs and business goals.
7. Project management: Break down project requirements into manageable pieces of work, and organise the workloads of technical teams.
8. Tool & framework expertise: Utilise relevant tools and technologies, including but not limited to LLMs, TensorFlow, PyTorch, Apache Spark, cloud platforms (AWS, Azure, GCP), web app development frameworks, and DevOps practices.
9. Continuous improvement: Stay current on emerging technologies and methods in AI, ML, data science, and web applications, bringing insights back to the team to foster continuous improvement.

Technical Skills:
1. Proficiency in AI/ML frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for developing machine learning and deep learning models.
2. Knowledge of or experience working with self-hosted or managed LLMs.
3. Knowledge of or experience with NLP tools and libraries (e.g., SpaCy, NLTK, Hugging Face Transformers), and familiarity with Computer Vision frameworks like OpenCV and related libraries for image processing and object recognition.
4. Experience or knowledge in back-end frameworks (e.g., Django, Spring Boot, Node.js, Express) and building RESTful and GraphQL APIs.
5. Familiarity with microservices, serverless, and event-driven architectures. Strong understanding of design patterns (e.g., Factory, Singleton, Observer) to ensure code scalability and reusability.
6. Proficiency in modern front-end frameworks such as React, Angular, or Vue.js, with an understanding of responsive design, UX/UI principles, and state management (e.g., Redux).
7. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra), as well as caching solutions (e.g., Redis, Memcached).
8. Expertise in tools such as Apache Spark, Hadoop, Pandas, and Dask for large-scale data processing.
9. Understanding of data warehouses and ETL tools (e.g., Snowflake, BigQuery, Redshift, Airflow) to manage large datasets.
10. Familiarity with visualisation tools (e.g., Tableau, Power BI, Plotly) for building dashboards and conveying insights.
11. Knowledge of deploying models with TensorFlow Serving, Flask, FastAPI, or cloud-native services (e.g., AWS SageMaker, Google AI Platform).
12. Familiarity with MLOps tools and practices for versioning, monitoring, and scaling models (e.g., MLflow, Kubeflow, TFX).
13. Knowledge of or experience in CI/CD, IaC, and Cloud Native toolchains.
14. Understanding of security principles, including firewalls, VPC, IAM, and TLS/SSL for secure communication.
15. Knowledge of API Gateway, service mesh (e.g., Istio), and NGINX for API security, rate limiting, and traffic management.
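Among the design patterns the skills list names, the Factory is easy to sketch in a few lines of Python; the handler classes and format names below are invented for illustration.

```python
import json

# Minimal Factory sketch: callers ask for a format name and get back an object
# satisfying a common parse() interface, without depending on concrete classes.
class JsonHandler:
    def parse(self, raw):
        return json.loads(raw)

class CsvHandler:
    def parse(self, raw):
        return [line.split(",") for line in raw.splitlines()]

_HANDLERS = {"json": JsonHandler, "csv": CsvHandler}

def make_handler(fmt: str):
    """Factory function: maps a format name to a concrete handler instance."""
    try:
        return _HANDLERS[fmt]()
    except KeyError:
        raise ValueError(f"unsupported format: {fmt}")

print(make_handler("json").parse('{"a": 1}'))  # {'a': 1}
print(make_handler("csv").parse("a,b\nc,d"))   # [['a', 'b'], ['c', 'd']]
```

New formats are added by registering another class, leaving the calling code untouched.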

Posted 1 month ago


5.0 - 8.0 years

7 - 17 Lacs

Chennai

Work from Office

Key Responsibilities:
- Architect and implement end-to-end data solutions using Azure services (Data Factory, Databricks, Data Lake, Synapse, Cosmos DB, etc.).
- Design robust and scalable data models, including relational, dimensional, and NoSQL schemas.
- Develop and optimize ETL/ELT pipelines and data lakes using Azure Data Factory, Databricks, and open formats such as Delta and Iceberg.
- Integrate data governance, quality, and security best practices into all architecture designs.
- Support analytics and machine learning initiatives through structured data pipelines and platforms.
- Perform data manipulation and analysis using Pandas, NumPy, and related Python libraries.
- Develop and maintain high-performance REST APIs using FastAPI or Flask.
- Ensure data integrity, quality, and availability across various sources.
- Integrate data workflows with application components to support real-time or scheduled processes.
- Collaborate with data engineers, analysts, data scientists, and business stakeholders to align solutions with business needs.
- Drive CI/CD integration with Databricks using Azure DevOps and tools like DBT.
- Monitor system performance, troubleshoot issues, and optimize data infrastructure for efficiency and reliability.
- Communicate technical concepts effectively to non-technical stakeholders.

Required Skills & Experience:
- Extensive hands-on experience with Azure services: Data Factory, Databricks, Data Lake, Azure SQL, Cosmos DB, Synapse.
- Expertise in data modeling and design (relational, dimensional, NoSQL).
- Proven experience with ETL/ELT processes, data lakes, and modern lakehouse architectures.
- Solid experience with SQL Server or any major RDBMS; ability to write complex queries and stored procedures.
- 3+ years of experience with Azure Data Factory, Azure Databricks, and PySpark.
- Strong programming skills in Python, with a solid understanding of Pandas and NumPy.
- Proven experience in building REST APIs.
- Good knowledge of data formats (JSON, Parquet, Avro) and API communication patterns.
- Strong knowledge of data governance, security, and compliance frameworks.
- Experience with CI/CD, Azure DevOps, and infrastructure as code (Terraform or ARM templates).
- Familiarity with BI and analytics tools such as Power BI or Tableau.
- Strong problem-solving skills and attention to performance, scalability, and security.
- Excellent communication skills, both written and verbal.

Preferred Qualifications:
- Experience in regulated industries (finance, healthcare, etc.).
- Familiarity with data cataloging, metadata management, and machine learning integration.
- Leadership experience guiding teams and presenting architectural strategies to leadership.
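The Pandas-plus-REST-API combination above often reduces to shaping a DataFrame into a JSON-friendly payload for a response body; a small sketch with invented data:

```python
import pandas as pd

# Hypothetical query result headed for a REST response body.
df = pd.DataFrame({"region": ["north", "south"], "revenue": [1200.0, 950.0]})

# orient="records" yields the list-of-objects shape most REST clients expect,
# ready to be returned by a FastAPI/Flask endpoint.
payload = df.to_dict(orient="records")
print(payload)
```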

Posted 1 month ago


2.0 - 3.0 years

2 - 6 Lacs

Chennai

Work from Office

Required Skills:
- Strong verbal and written communication skills in English
- Strong analytical mindset and problem-solving abilities
- Ability to monitor and audit data quality and critique content
- Experience creating data dashboards, graphs, and visualizations
- Ability to manipulate, analyse, and interpret complex datasets relating to focus markets
- Ability to handle a variety of ongoing projects and triage incoming tasks
- 3+ years of experience working with SQL/PLSQL, with advanced knowledge
- 2+ years of experience in Python and data manipulation using Pandas
- 3+ years of experience working in a BI or Data Analytics field
- 3+ years of experience visualising data with at least one of the following BI tools: AWS QuickSight, Power BI, Tableau

In addition, the following experience would be desirable:
- Knowledge of database management systems, including some data engineering
- Knowledge of statistics and statistical analysis
- Knowledge of ETL and workflow orchestration using Apache Airflow (DAGs)
- Knowledge of data modelling tools like DBT

Responsibilities:
- Present data and dashboards that bring value to the business, always in a format that is appropriate for the intended audience.
- Utilize strong database skills working with large, complex datasets.
- Identify patterns and trends in data, working alongside teams within operations, the wider business, and the senior management team to provide insight.
- Filter and cleanse unstructured (or ambiguous) data into usable datasets that can be analysed to extract insights and improve business processes.
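The cleansing responsibility described above, turning ambiguous data into usable datasets, might look like this minimal Pandas sketch; the rows and columns are invented.

```python
import pandas as pd

# Messy input rows of the kind described in the posting; values are invented.
raw = pd.DataFrame({
    "market": ["UK", "uk ", None, "DE"],
    "score":  ["10", "n/a", "7", "8"],
})

clean = raw.copy()
# Normalise market codes: strip whitespace, uppercase.
clean["market"] = clean["market"].str.strip().str.upper()
# Coerce scores to numbers; unparsable values become NaN.
clean["score"] = pd.to_numeric(clean["score"], errors="coerce")
# Drop rows with a missing market or an unparsable score.
clean = clean.dropna()
print(clean)
```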

Posted 1 month ago

Apply

0.0 - 3.0 years

9 - 12 Lacs

Hyderabad

Work from Office

As an AI Engineering Intern, you will work closely with our AI research and development team to build, test, and deploy machine learning models and AI solutions. You will gain hands-on experience with various AI techniques and technologies, helping to develop and improve AI-powered systems.

Responsibilities:
- Assist in the development and optimization of machine learning models and algorithms.
- Support data preprocessing, cleaning, and analysis for AI-related projects.
- Collaborate with the AI team to implement and integrate AI solutions into production systems.
- Contribute to the design and development of AI systems, including NLP, computer vision, or other domains based on project needs.
- Help write clean, scalable, and well-documented code for AI applications.
- Participate in the testing and validation of AI models, and identify areas for improvement.
- Stay up-to-date with the latest advancements in AI and machine learning technologies.

Qualifications:
- Currently pursuing a degree in Computer Science, Engineering, Mathematics, or a related field (preferably at the undergraduate or graduate level).
- Solid understanding of machine learning concepts and algorithms (e.g., supervised learning, unsupervised learning, deep learning).
- Familiarity with programming languages such as Python, R, or similar.
- Experience with machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) is a plus.
- Strong problem-solving skills and analytical thinking.
- Ability to work independently as well as part of a collaborative team.
- Good communication skills, with the ability to present ideas and technical concepts clearly.

Preferred Qualifications:
- Experience with cloud platforms such as AWS, GCP, or Azure.
- Familiarity with data wrangling and data visualization tools (e.g., Pandas, Matplotlib, Seaborn).
- Knowledge of advanced AI topics such as reinforcement learning, generative models, or NLP.
- Exposure to version control systems (e.g., Git).

Benefits:
- Mentorship from experienced AI engineers.
- Hands-on experience with state-of-the-art AI technologies.
- Opportunity to contribute to real-world AI projects.
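Supervised learning, which the qualifications above lead with, can be shown in miniature without any framework: fitting y = a*x + b by ordinary least squares, the closed-form idea that libraries like Scikit-learn's `LinearRegression` generalise. Toy data and function names below are illustrative:

```python
# Minimal supervised-learning sketch: simple linear regression by
# ordinary least squares, standard library only.

def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept follows from the means.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Toy training data lying exactly on y = 2x + 1.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)
```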

Posted 1 month ago

Apply

0.0 - 1.0 years

1 - 3 Lacs

Hyderabad

Work from Office

Python Full Stack Developer Intern (Onsite, Gachibowli, Hyderabad)

Location: Vasavi Skycity, Gachibowli, Hyderabad (onsite)
Timings: US shift (night shift)
Stipend: 10,000 to 15,000 per month
Cab facility: not provided
Internship type: full-time, onsite only
Eligibility: graduated in 2024 or earlier (2025 pass-outs will not be considered)
Duration: 6 months

About the Role: We are looking for a highly skilled, self-driven Python Full Stack Developer Intern with strong hands-on coding ability and in-depth technical understanding. This is not a training role; we need contributors who can work independently on real-time development projects.

Must-Have Requirements:
- Graduation year 2024 or earlier only
- Must have independently developed and deployed at least one complete dynamic web application or website (not an academic project)
- Deep technical knowledge of both frontend and backend development
- Excellent coding and debugging skills; must be comfortable writing production-grade code
- Able to work independently without constant guidance
- Willing to work onsite and during US hours

Technical Skills Required:
- Backend: Python (Django / Flask / FastAPI)
- Frontend: HTML, CSS, JavaScript, React or Angular
- Database: PostgreSQL / MySQL / MongoDB
- Version control: Git & GitHub
- Understanding of RESTful APIs, authentication, and security practices
- Experience with deployment (Heroku, AWS, etc.) is a plus

Nice-to-Have Skills:
- Knowledge of Docker and CI/CD pipelines
- Familiarity with cloud services (AWS/GCP)
- Exposure to Agile/Scrum methodology

What You'll Do:
- Work on real-time development projects from scratch
- Write clean, maintainable, and scalable code
- Collaborate with remote teams during US timings
- Independently handle assigned modules/features
- Continuously learn and adapt to new technologies

Note: Academic/college projects will NOT be considered. Candidates must be able to show at least one independently built dynamic web app or website (with codebase and/or live demo).
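The backend frameworks this listing names (Django, Flask, FastAPI) all ultimately serve HTTP around the same contract. A framework-free sketch of a JSON endpoint using the WSGI calling convention from the standard library, exercised directly the way a test client would (route and payload are invented):

```python
import json

# Minimal WSGI app: one JSON health endpoint, 404 for everything else.
# Flask and Django wrap this same callable contract.
def app(environ, start_response):
    if environ.get("PATH_INFO") == "/health":
        start_response("200 OK", [("Content-Type", "application/json")])
        return [json.dumps({"status": "ok"}).encode()]
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [json.dumps({"error": "not found"}).encode()]

# Call the app in-process, as a unit test would, instead of running a server.
captured = {}

def start_response(status, headers):
    captured["status"] = status

body = b"".join(app({"PATH_INFO": "/health"}, start_response))
print(captured["status"], body)
```

To serve it for real, `wsgiref.simple_server.make_server("", 8000, app)` from the standard library is enough for local development; production deployments use a WSGI/ASGI server instead.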

Posted 1 month ago

Apply

9.0 - 14.0 years

11 - 21 Lacs

Pune, Chennai, Bengaluru

Work from Office

Urgent opening for an AWS Python Data Engineer (Manager) with EY GDS, Pan-India locations.

Experience: 9-14 years
Location: All GDS
Mode: Hybrid
Immediate joiners with a 0-30 day notice period preferred.

Mandatory skills:
- Machine learning models
- Flask REST APIs
- Pandas, NumPy
- AWS / Azure
- Python

Please apply if available for a virtual interview on 28th June 2025.
Manager (9-14 years): https://careers.ey.com/job-invite/1600818/

Posted 1 month ago

Apply

9.0 - 14.0 years

30 - 40 Lacs

Hyderabad

Hybrid

Position Overview: We are seeking an experienced Backend Developer proficient in Python, Flask, FastAPI, and related technologies, with a deep understanding of algorithm design for complex tasks. As part of our backend engineering team, you will play a key role in designing, developing, and maintaining scalable and reliable backend services for our AI coaching platform. Your expertise in microservices architecture, cloud computing, and database management will be instrumental in shaping the future of our technology stack.

Responsibilities:
- Design, develop, and maintain RESTful APIs and backend services using Python, Flask, FastAPI, and SQLAlchemy, adhering to best practices for code quality, performance, and scalability.
- Implement a microservices architecture of smaller, independent services, and orchestrate communication between services using message brokers or API gateways.
- Implement complex algorithms and data structures to handle diverse tasks such as data processing, operations research (OR), recommendation systems, and optimization problems.
- Optimize backend services for performance and efficiency, identifying bottlenecks and implementing solutions to improve response times and resource utilization.
- Collaborate with frontend developers, data scientists, and DevOps engineers to integrate backend services with web and mobile applications, AI models, and cloud infrastructure.
- Implement authentication and authorization mechanisms, ensuring secure access control to backend resources and protecting sensitive data using industry-standard encryption and authentication protocols.
- Utilize cloud computing platforms such as Google Cloud Platform (GCP) to deploy and scale backend services, leveraging managed services like Cloud Functions, Cloud Run, and Kubernetes Engine for optimal performance and cost efficiency.
- Containerize backend services using Docker and orchestration tools like Kubernetes for deployment and management in containerized environments, ensuring consistency and reproducibility across development, staging, and production environments.
- Design and optimize database schemas using PostgreSQL or MySQL, leveraging advanced features for scalability, performance, and data integrity, and integrating data processing libraries like Pandas and NumPy for advanced analytics and machine learning tasks.
- Document API specifications using OpenAPI (formerly Swagger), defining endpoints, request/response schemas, and authentication requirements for internal and external consumption.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- Extensive experience in backend development with Python, including frameworks like Flask and FastAPI, and proficiency in database management with SQLAlchemy.
- Strong understanding of microservices architecture principles and experience designing, implementing, and deploying microservices-based applications.
- Strong understanding of algorithmic complexity, optimization techniques, and best practices for designing efficient algorithms to solve complex problems.
- Hands-on experience with cloud computing platforms, preferably Google Cloud Platform (GCP), and familiarity with cloud-native technologies such as serverless computing, containers, and orchestration.
- Proficiency in containerization and orchestration tools like Docker and Kubernetes for building and managing scalable, distributed systems.
- Solid understanding of relational database management systems (RDBMS) such as PostgreSQL or MySQL, with experience optimizing database schemas for performance and scalability.
- Familiarity with data processing libraries like Pandas and NumPy for advanced analytics and machine learning tasks.
- Experience with API documentation tools like OpenAPI/Swagger for defining and documenting RESTful APIs.
- Excellent problem-solving skills, attention to detail, and ability to work effectively in a collaborative, cross-functional team environment.
- Strong communication skills, with the ability to articulate technical concepts and collaborate with stakeholders across disciplines.
- Passion for sports and a desire to make a positive impact on athlete performance and well-being.
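The optimization problems this role mentions are often textbook combinatorial tasks. One classic instance, sketched with made-up weights and values, is the 0/1 knapsack solved by dynamic programming:

```python
# 0/1 knapsack via dynamic programming: choose items (weight, value) to
# maximize total value within a capacity, each item used at most once.

def knapsack(capacity, items):
    """items: list of (weight, value) pairs; returns best achievable value."""
    best = [0] * (capacity + 1)  # best[c] = max value within capacity c
    for weight, value in items:
        # Iterate capacity downwards so each item is counted at most once.
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

# Illustrative data: the optimum here picks the (4, 40) and (3, 50) items.
print(knapsack(10, [(5, 10), (4, 40), (6, 30), (3, 50)]))
```

This runs in O(capacity x items) time and O(capacity) space, the kind of complexity trade-off the qualifications ask candidates to reason about.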

Posted 1 month ago

Apply

1.0 - 3.0 years

7 - 10 Lacs

Mumbai Suburban, Mumbai (All Areas)

Work from Office

Role & responsibilities:
- Conduct relevant statistical tests to validate hypotheses and findings, and perform data cleaning.
- Perform exploratory data analysis (EDA) to identify patterns, anomalies, and opportunities.
- Perform data fusion by merging, reconciling, and analysing information from disparate systems and formats.
- Identify and employ modern weighting and projection methods to answer key business questions and predict future trends.
- Support the data visualiser with the data needed for real-time data visualisation.
- Collaborate with product and research teams by providing feedback based on analytical findings, and maintain daily MIS reports.

Preferred candidate profile:
- MSc in Statistics or a related quantitative field.
- Experience working with app-based data is preferred.
- Proficiency in Python is a must, with exposure to libraries used for numerical and text analysis such as Pandas, NumPy, PySpark, NLTK, spaCy, Scikit-Learn, Gensim, etc.
- Expertise in MS Excel and dashboard creation to complement BI tools and automate reporting tasks.
- Strong analytical mindset to interpret complex data, identify trends, and provide actionable insights.
- Understanding of the business context to translate data insights into relevant recommendations and feedback for product and research teams.

Benefits:
- Competitive salary and benefits package.
- Opportunity to make significant contributions at a dynamic company.
- Evening snacks provided by the company to keep you refreshed towards the end of the day.
- Walking distance from Chakala metro station, making commuting easy and convenient.

At Axis My India, we value discipline and focus. Our team members wear uniforms, adhere to a no-mobile policy during work hours, and work from our office with alternate Saturdays off. If you thrive in a structured environment and are committed to excellence, we encourage you to apply.
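A common first step in the EDA work this role describes is anomaly spotting. A standard-library sketch using a simple z-score rule (the data and the 2-sigma threshold are illustrative; in practice this would be done over Pandas DataFrames):

```python
import statistics

# Toy daily metric with one planted anomaly (45).
values = [10, 12, 11, 13, 12, 11, 45]

mean = statistics.mean(values)
stdev = statistics.stdev(values)  # sample standard deviation

# Flag points more than 2 standard deviations from the mean.
outliers = [v for v in values if abs(v - mean) / stdev > 2]
print(outliers)
```

The z-score rule is crude (the anomaly itself inflates the standard deviation), so robust alternatives such as median/MAD-based scores are often preferred on small samples.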

Posted 1 month ago

Apply

3.0 - 6.0 years

3 - 7 Lacs

Hyderabad

Work from Office

1. Strong Python knowledge
2. Hands-on knowledge of SQL queries (any database: Postgres, MySQL, Oracle, etc.)
3. Good understanding of the automated unit testing process in the Python ecosystem
4. Ability to prepare technical design documents as part of the process
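The automated unit testing the role asks for is built into the standard library via `unittest`. A minimal sketch (the function under test and its behaviour are invented for illustration):

```python
import unittest

# Hypothetical function under test: collapse whitespace and title-case a name.
def normalise_name(raw):
    return " ".join(raw.split()).title()

class NormaliseNameTest(unittest.TestCase):
    def test_collapses_whitespace_and_cases(self):
        self.assertEqual(normalise_name("  ada   lovelace "), "Ada Lovelace")

    def test_empty_input(self):
        self.assertEqual(normalise_name(""), "")

# Run the suite programmatically; normally `python -m unittest` does this.
suite = unittest.TestLoader().loadTestsFromTestCase(NormaliseNameTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```

`pytest` is a popular third-party alternative with the same goal but lighter syntax; either fits the "automated unit testing" requirement.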

Posted 1 month ago

Apply

6.0 - 8.0 years

18 - 22 Lacs

Bangalore Rural, Chennai, Bengaluru

Work from Office

Senior Python ETL Developer/Lead with 5+ years of experience in ETL development using Python, Apache Airflow, PySpark, and Pandas; Oracle SQL and PL/SQL; UNIX and Windows environments; OOAD and SOA; and data warehousing concepts, data modeling, and data integration.
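The extract-transform-load pattern behind this role can be sketched in plain Python (Airflow would schedule these steps as DAG tasks and PySpark would distribute them; the CSV fields and table schema here are invented):

```python
import csv
import io
import sqlite3

def extract(csv_text):
    """Extract: parse raw CSV text into row dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: cast amounts to float, dropping malformed rows."""
    clean = []
    for row in rows:
        try:
            clean.append((row["id"], float(row["amount"])))
        except (KeyError, ValueError):
            continue  # a real pipeline would route these to a reject table
    return clean

def load(rows, conn):
    """Load: append cleaned rows into the target table."""
    conn.execute("CREATE TABLE IF NOT EXISTS fact (id TEXT, amount REAL)")
    conn.executemany("INSERT INTO fact VALUES (?, ?)", rows)

conn = sqlite3.connect(":memory:")  # stand-in for the warehouse
load(transform(extract("id,amount\nA,10.5\nB,not-a-number\nC,4.5")), conn)
total = conn.execute("SELECT SUM(amount) FROM fact").fetchone()[0]
print(total)
```

In an Airflow deployment each function would become a task, with the DAG enforcing the extract -> transform -> load ordering and retries.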

Posted 1 month ago

Apply

1.0 - 3.0 years

2 - 2 Lacs

Viluppuram

Work from Office

1-3 years of experience in full-stack development using Python. Proficiency in backend, frontend, databases, prompt engineering, DevOps, and testing.

Posted 1 month ago

Apply