3.0 - 8.0 years
15 - 30 Lacs
Pune, Gurugram, Bengaluru
Hybrid
Role - Data Analyst (PySpark/SQL)
Location - Bengaluru/Gurgaon/Pune
Type - Hybrid
Position - Full Time
We are looking for a Data Analyst with strong expertise in PySpark and SQL.
Roles and Responsibilities: Develop expertise in SQL queries for complex data analysis, and troubleshoot issues related to data extraction, manipulation, transformation, mining, processing, wrangling, reporting, modeling, and classification.
Desired Candidate Profile: 4-9 years of experience in Data Analytics with a strong PySpark background.
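The SQL side of this role — complex queries for extraction, aggregation, and transformation — can be sketched with Python's standard-library sqlite3 module. This is a generic illustration only; the table and column names are invented:

```python
import sqlite3

# Hypothetical orders table, used to illustrate extraction + transformation in SQL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, region TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, 'north', 120.0), (2, 'north', 80.0),
        (3, 'south', 250.0), (4, 'south', NULL);
""")

# A typical wrangling query: filter out incomplete rows, aggregate, and rank regions.
rows = conn.execute("""
    SELECT region, COUNT(*) AS n, SUM(amount) AS total
    FROM orders
    WHERE amount IS NOT NULL
    GROUP BY region
    ORDER BY total DESC
""").fetchall()

print(rows)  # [('south', 1, 250.0), ('north', 2, 200.0)]
```

The same query shape (filter, group, order) carries over to Spark SQL when run against a PySpark DataFrame registered as a temporary view.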
Posted Date not available
4.0 - 9.0 years
7 - 17 Lacs
Noida, Gurugram
Work from Office
This role is ideal for someone with strong expertise in SQL, Python, Hadoop/Oracle, and PySpark, and a solid background in data analysis and wrangling. You will play a key role in building and maintaining analytical solutions that drive insights, especially in fraud detection within the card transactions and account opening domains.
Key Responsibilities:
- Develop, maintain, and optimize complex SQL queries to support data extraction and transformation activities.
- Utilize Python for advanced data processing, automation, and analytical modeling.
- Build and maintain Tableau dashboards that provide actionable insights to stakeholders.
- Perform data wrangling and cleansing from various sources to prepare data for analysis.
- Conduct detailed data analysis to uncover trends, patterns, and potential fraudulent activity.
- Collaborate with fraud analytics and operations teams to develop and enhance fraud detection frameworks.
- Document data processes and maintain data integrity across workflows.
- Contribute to continuous improvement in data pipelines, tools, and reporting mechanisms.
Required Skills and Qualifications:
- Proven experience in writing efficient and complex SQL queries for large datasets.
- Strong programming skills in Python, with knowledge of data libraries such as Pandas and NumPy.
- Hands-on experience building visualizations and dashboards using Tableau.
- Experience with data wrangling, cleaning, and transformation techniques.
- Solid understanding of data analysis methodologies and problem-solving using data.
- Excellent communication skills and the ability to present technical findings to non-technical stakeholders.
- Attention to detail and commitment to delivering high-quality work.
Posted Date not available
4.0 - 9.0 years
7 - 17 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
This role is ideal for someone with strong expertise in SQL, Python, Hadoop/Oracle, and PySpark, and a solid background in data analysis and wrangling. You will play a key role in building and maintaining analytical solutions that drive insights, especially in fraud detection within the card transactions and account opening domains.
Key Responsibilities:
- Develop, maintain, and optimize complex SQL queries to support data extraction and transformation activities.
- Utilize Python for advanced data processing, automation, and analytical modeling.
- Build and maintain Tableau dashboards that provide actionable insights to stakeholders.
- Perform data wrangling and cleansing from various sources to prepare data for analysis.
- Conduct detailed data analysis to uncover trends, patterns, and potential fraudulent activity.
- Collaborate with fraud analytics and operations teams to develop and enhance fraud detection frameworks.
- Document data processes and maintain data integrity across workflows.
- Contribute to continuous improvement in data pipelines, tools, and reporting mechanisms.
Required Skills and Qualifications:
- Proven experience in writing efficient and complex SQL queries for large datasets.
- Strong programming skills in Python, with knowledge of data libraries such as Pandas and NumPy.
- Hands-on experience building visualizations and dashboards using Tableau.
- Experience with data wrangling, cleaning, and transformation techniques.
- Solid understanding of data analysis methodologies and problem-solving using data.
- Excellent communication skills and the ability to present technical findings to non-technical stakeholders.
- Attention to detail and commitment to delivering high-quality work.
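As an illustration of the fraud-analysis side of this work, unusually large transactions can be flagged with a simple z-score check. This is a generic sketch using only the standard library, not the team's actual detection framework; the threshold and amounts are invented:

```python
from statistics import mean, stdev

def flag_outliers(amounts, threshold=2.0):
    """Flag amounts more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if abs(a - mu) > threshold * sigma]

# Mostly small card transactions, with one suspiciously large one.
txns = [12.5, 9.9, 14.2, 11.0, 10.7, 13.1, 950.0, 12.0, 9.5, 11.8]
print(flag_outliers(txns))  # [950.0]
```

Real fraud frameworks layer many such signals (velocity checks, device fingerprints, model scores); the point here is only the shape of a rule that turns raw amounts into flagged rows.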
Posted Date not available
8.0 - 13.0 years
12 - 16 Lacs
Chennai
Work from Office
Chennai, India | Python | BCM Industry | 01/05/2025
Project description:
We need a Senior Python and PySpark Developer to work for a leading investment bank client.
Responsibilities:
- Develop software applications based on business requirements.
- Maintain software applications and make enhancements according to project specifications.
- Participate in requirement analysis, design, development, testing, and implementation activities.
- Propose new techniques and technologies for software development.
- Perform unit testing and user acceptance testing to evaluate application functionality.
- Complete assigned development tasks within deadlines.
- Work in compliance with coding standards and best practices.
- Provide assistance to junior developers when needed.
- Perform code reviews and recommend improvements.
- Review business requirements and recommend changes to develop reliable applications.
- Develop coding documentation and other technical specifications for assigned projects.
- Act as the primary contact for development queries and concerns.
- Analyze and resolve development issues accurately.
Skills (must have):
- 8+ years of experience in data-intensive PySpark development.
- Experience as a core Python developer.
- Experience developing classes, OOP, exception handling, and parallel processing.
- Strong knowledge of DB connectivity, data loading, transformation, and calculation.
- Extensive experience with Pandas/NumPy DataFrames: slicing, data wrangling, aggregations.
- Lambda functions and decorators.
- Vector operations on Pandas DataFrames/Series; application of the applymap, apply, and map functions.
- Concurrency and error handling for data pipeline batches of 1-10 GB.
- Ability to understand business requirements and translate them into technical requirements.
- Ability to design data pipeline architecture for concurrent data processing.
- Familiarity with creating/designing RESTful services and APIs.
- Familiarity with application unit tests.
- Working with Git source control.
- Service-oriented architecture, including the ability to consider integrations with other applications and services.
- Debugging applications.
Nice to have:
- Knowledge of web backend technologies: Django, Python, PostgreSQL.
- Apache Airflow.
- Atlassian Jira.
- Understanding of financial markets asset classes (FX, FI, Equities, Rates, Commodities & Credit), various trade types (OTC, exchange-traded, Spot, Forward, Swap, Options), and related systems is a plus.
- Surveillance domain knowledge, regulations (MAR, MiFID, CAT, Dodd-Frank), and related systems knowledge is certainly a plus.
Other: Languages: English (C2 Proficient). Seniority: Senior.
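The Pandas skills this posting names — vector operations, `Series.map`, and row-wise `apply` — can be shown in a few lines. A minimal sketch with invented data; `applymap` (the elementwise DataFrame variant) is omitted because newer pandas versions rename it:

```python
import pandas as pd

df = pd.DataFrame({"price": [100.0, 250.0, 80.0], "qty": [2, 1, 5]})

# Vector operation: whole-column arithmetic, no explicit loop.
df["total"] = df["price"] * df["qty"]

# Series.map: elementwise transform of a single column.
df["band"] = df["total"].map(lambda t: "high" if t >= 250 else "low")

# DataFrame.apply with axis=1: row-wise function (slower; use only when
# the logic genuinely needs the whole row).
df["summary"] = df.apply(lambda r: f"{r['qty']} @ {r['price']:.0f}", axis=1)

print(df[["total", "band"]].to_dict("list"))
```

Preferring the vectorized form over `apply(axis=1)` is usually the first optimization interviewers for roles like this look for.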
Posted Date not available
9.0 - 14.0 years
35 - 45 Lacs
Mumbai, Hyderabad, Chennai
Work from Office
Dear Aspirant,
Greetings from TCS! TCS presents an excellent opportunity for a Data Scientist / ML Engineer.
Experience: 8-16 years
Job Location: Chennai / Hyderabad / Mumbai
- Programming: Python, machine learning, data wrangling; strong statistical and mathematical skills
- Hands-on experience with GenAI (LLMs, prompting, and RAG) based solutions
- Hands-on experience handling various storage systems: RDBMSs, vector DBs, file storage, and cloud storage services (Azure, AWS, or GCP)
- Strong knowledge of data engineering: extraction, pre-processing, and managing various types of data (structured, semi-structured, and unstructured)
Regards, Sai Lokesh S
Posted Date not available
5.0 - 10.0 years
11 - 15 Lacs
Noida
Work from Office
1. Strong experience in data preparation/data wrangling for data on acquired/import connections.
2. Strong experience with the data analyzer.
3. Proficient in SAP SAC functionalities, including data integration, data modeling, story creation, and advanced analytics.
4. Expertise/knowledge of SAC predictive capabilities: augmented analytics, forecasting, Smart Assist, Smart Predict, Smart Discovery, creating story canvases, advanced story design, blending data, formulas, cross calculations, input controls, and linked analysis.
5. Understanding of data analytics concepts and best practices; creation of SAC planning masks, workflows, planning calendars, etc.
6. Solid understanding of filters, both static and dynamic.
7. Strong experience in creating geo maps using the geo enrichment feature for geospatial analysis, and in creating planning models in SAC.
8. Design and implementation of authorizations/roles.
9. Creation of functional designs for reporting to enable developers to build them in SAP BW/4HANA and SAC.
10. Around 5-7 years of experience in supporting/developing SAC reports.
Posted Date not available
5.0 - 10.0 years
13 - 20 Lacs
Pune
Work from Office
Develop and upgrade ML models; deliver new developments and features. Ensure data analysis and accuracy, and improve product development through data science insights. Assess customer value. Develop algorithms and models to convert raw data into actionable alerts for farmers/customers. Lead the team.
Required Candidate Profile: Master's or PhD in Data Science; BE/BTech; certified in Data Science from a reputed institute. Good articulation, confidence, and interpretation skills. Leadership and multitasking ability.
Posted Date not available
10.0 - 16.0 years
25 - 35 Lacs
Pune
Work from Office
Job Title: Data Modelling SME Lead
About the role:
As the Data Modelling Senior SME, you will collaborate with business partners across Digital Operational Excellence, Technology, and Castrol's PUs, HUBs, functions, and markets to model and sustain curated datasets within the Castrol data ecosystem. The role ensures agile, continuous improvement of curated datasets aligned with the Data Modelling Framework and supports analytics, data science, operational MI, and the broader Digital Business Strategy. On top of the data lake, we have now enabled the MLOps environment (PySpark Pro) and Gurobi, with direct connections to run the advanced analytics and data science queries and algorithms written in Python. This enables the data analyst and data science teams to incubate insights in an agile way. The Data Modeller will contribute to and enable the growth of data science skills and capabilities within the role, the team, and the wider Castrol data analyst/science community; data science experience is a plus, but basic skills suffice to start.
Experience & Education:
- Education: Degree in an analytical field (preferably IT or engineering)
- Experience: 10 years overall, including 5+ years of relevant experience
- Proven track record in delivering data models and curated datasets for major transformation projects.
- Broad understanding of multiple data domains and their integration points.
- Strong problem-solving and collaborative skills with a strategic approach.
Skills & Competencies:
• Expertise in data modeling and data wrangling of highly complex, high-dimensional data (ER/Studio, Gurobi, SageMaker Pro).
• Proficiency in translating analytical insights from high-dimensional data.
• Skilled in Power BI data modeling and proof-of-concept design for data and analytics dashboarding.
• Proficiency in data science tools such as Python, Amazon SageMaker, GAMS, AMPL, ILOG, AIMMS, or similar.
• Ability to work across multiple levels of detail, including analytics, MI, statistics, data, process design principles, operating model intent, and systems design.
• Strong influencing skills to use expertise and experience to shape value delivery.
• Demonstrated success in multi-functional deployments and performance optimization.
BP Behaviors for Successful Delivery:
- Respect: Build trust through clear relationships
- Excellence: Apply standard processes and strive for executional completion
- One Team: Collaborate to improve team efficiency
Posted Date not available
8.0 - 13.0 years
25 - 35 Lacs
Pune
Remote
Job Summary:
We are seeking a highly skilled Senior Data Engineer with 8-10 years of experience in building scalable data solutions and a solid understanding of AI/ML workflows. The ideal candidate will have expertise in data engineering, cloud platforms, and machine learning integration, and will work closely with data scientists, ML engineers, and software developers to operationalize intelligent data systems.
Key Responsibilities:
- Design and implement data pipelines for structured, semi-structured, and unstructured data from diverse sources (real-time and batch).
- Develop and maintain data lakes, data warehouses, and feature stores for scalable ML model training and inference.
- Collaborate with AI/ML teams to build model-ready datasets, automate feature extraction, and streamline MLOps workflows.
- Optimize data systems for performance, scalability, security, and cost-efficiency.
- Build and manage ETL/ELT processes using tools like Apache Airflow, dbt, or custom Python jobs.
- Integrate real-time streaming data using Kafka, Spark Streaming, or Flink for use cases like fraud detection, recommendation engines, etc.
- Work with cloud services (AWS/GCP/Azure) to provision infrastructure and deploy production-grade systems.
- Implement data governance, quality checks, and monitoring to ensure trust in data pipelines.
- Collaborate in cross-functional teams using Agile methodologies and DevOps best practices.
Required Skills & Qualifications:
Core Data Engineering:
- 5+ years in data engineering roles, designing complex pipelines and data architectures.
- Strong command of Python and SQL for ETL, data wrangling, and scripting.
- Experience with Apache Spark, PySpark, Kafka, or Flink.
- Hands-on experience with data orchestration tools like Airflow, Dagster, or Prefect.
- Proficient in data modeling, partitioning, and schema design for analytics and ML.
Cloud & DevOps:
- 3+ years working on cloud platforms: AWS (S3, Redshift, EMR, Glue), GCP (BigQuery, Dataflow), or Azure (ADF, Synapse).
- Experience with Terraform, CloudFormation, or other infrastructure-as-code tools is a plus.
- Familiarity with containerization using Docker and CI/CD for data pipelines.
AI/ML Integration:
- Familiarity with the ML lifecycle, including feature engineering, model versioning, and deployment.
- Experience with tools like MLflow, Kubeflow, Vertex AI, or SageMaker.
- Exposure to AI frameworks like TensorFlow, PyTorch, or scikit-learn.
- Experience with vector databases (e.g., Pinecone, FAISS, Weaviate) for semantic search is a plus.
Other Skills:
- Excellent understanding of data privacy, security, and compliance.
- Strong communication skills to collaborate with business and technical stakeholders.
- Problem-solving mindset with attention to performance and reliability.
Preferred Qualifications:
- Bachelor's or Master's in Computer Science, Engineering, or a data-related field.
- Certifications in cloud platforms (AWS/GCP/Azure) or Databricks.
- Published work, contributions to open source, or AI/data blogs are a plus.
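A "custom Python job" in the ETL sense mentioned above is often just extract, transform, and load functions chained together. A minimal standard-library sketch; the CSV schema and table are invented:

```python
import csv
import io
import sqlite3

# Extracted batch (would normally come from a file or API); row 2 has a missing amount.
RAW = "id,amount\n1,10.5\n2,\n3,7.25\n"

def extract(text):
    """Parse the raw CSV batch into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Drop rows with missing amounts and cast fields to proper types."""
    return [(int(r["id"]), float(r["amount"])) for r in rows if r["amount"]]

def load(rows, conn):
    """Insert the cleaned rows and return how many landed."""
    conn.execute("CREATE TABLE IF NOT EXISTS amounts (id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO amounts VALUES (?, ?)", rows)
    return conn.execute("SELECT COUNT(*) FROM amounts").fetchone()[0]

conn = sqlite3.connect(":memory:")
loaded = load(transform(extract(RAW)), conn)
print(loaded)  # 2
```

Orchestrators like Airflow or Dagster essentially schedule and monitor functions of exactly this shape, with retries and logging added around each step.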
Posted Date not available
3.0 - 6.0 years
13 - 17 Lacs
Chennai
Work from Office
What you’ll do:
- Design, build, and manage data workflows and pipelines in Dataiku DSS.
- Integrate data from multiple sources including AWS, databases, APIs, and flat files.
- Collaborate with data engineering and business teams to translate requirements into scalable data solutions.
- Implement data validation, error handling, and monitoring processes within Dataiku.
- Support model deployment, scheduling, and performance optimization within the Dataiku environment.
- Maintain documentation and version control for Dataiku projects and pipelines.
What you’ll bring:
- 3+ years of experience in data engineering or analytics development roles.
- Hands-on experience with Dataiku DSS, SQL, Python, and data wrangling.
- Familiarity with AWS services, APIs, and data integration tools.
- Understanding of data quality, pipeline performance optimization, and analytics workflows.
Additional Skills:
- Strong communication skills, both verbal and written, with the ability to structure thoughts logically during discussions and presentations.
- Capability to simplify complex concepts into easily understandable frameworks and presentations.
- Proficiency in working within a virtual global team environment, contributing to the timely delivery of multiple projects.
- Willingness to travel to other offices as required to collaborate with clients and internal project teams.
Posted Date not available
5.0 - 8.0 years
6 - 10 Lacs
Hyderabad, Chennai
Work from Office
Job Title: AI/ML Engineer
Experience: 5-8 years
Location: Chennai, Hyderabad
Job Description:
- Application Development: Design, develop, and deploy applications utilizing Large Language Models (LLMs) like OpenAI GPT.
- MLOps Pipeline Management: Build and maintain MLOps pipelines encompassing data ingestion, chunking, vectorization, feature engineering, model training, deployment, and monitoring.
- Cloud Platform Utilization: Leverage cloud platforms such as AWS, GCP, or Azure for machine learning model development, training, and deployment.
- Workflow Automation: Implement DevOps/MLOps/LLMOps best practices to automate machine learning workflows and enhance efficiency.
- Performance Monitoring: Develop and implement monitoring systems to track model performance and identify issues.
- Collaboration: Work closely with data scientists, engineers, and product teams to deliver machine learning solutions.
AI/ML Key Responsibilities:
- Design, develop, and deploy scalable AI/ML models using Python and associated libraries (TensorFlow, PyTorch, Scikit-learn, etc.).
- Collaborate with data scientists, engineers, and product managers to define and prioritize AI-driven solutions to business problems.
- Implement and optimize algorithms for classification, regression, clustering, and deep learning models.
- Build and maintain end-to-end machine learning pipelines, from data preprocessing to model deployment in production environments (cloud and on-premise).
- Perform model evaluation, tuning, and validation to ensure optimal performance and generalization.
- Integrate machine learning models with existing software applications or frameworks, ensuring seamless operation and efficiency.
- Utilize MLOps tools to automate model versioning, monitoring, and retraining for continuous improvement.
- Mentor junior engineers and data scientists, promoting knowledge sharing and development best practices.
- Stay updated on the latest advancements in AI/ML and integrate new tools and techniques into current processes.
- Conduct research and experimentation to solve complex problems and improve current AI models and algorithms.
Required Skills and Qualifications:
- 4+ years of hands-on experience in AI/ML development, including real-world deployment of machine learning models.
- Expert-level proficiency in Python programming and associated AI/ML libraries such as TensorFlow, PyTorch, Keras, and Scikit-learn.
- Strong understanding of deep learning, neural networks, natural language processing (NLP), computer vision, and traditional machine learning algorithms.
- Experience deploying AI/ML models in production environments using cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes).
- Proficiency in working with large datasets, data wrangling, feature engineering, and data visualization using tools like Pandas, NumPy, and Matplotlib.
- Knowledge of database systems (SQL, NoSQL) and experience with distributed computing frameworks like Spark.
- Strong problem-solving skills and the ability to translate business requirements into technical solutions.
- Familiarity with MLOps best practices, including CI/CD pipelines, model monitoring, version control, and automated retraining.
- Excellent written and verbal communication skills and the ability to work in a collaborative, agile environment.
- Experience with tools like Git, Jupyter Notebooks, and CI/CD tools (Jenkins, CircleCI).
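The pipeline stages this posting describes (preprocess, train, evaluate) can be sketched end to end with a tiny nearest-centroid classifier in NumPy. The data and features are invented, and real roles would use the scikit-learn or deep learning equivalents; this only shows the shape of the lifecycle:

```python
import numpy as np

# Toy features: two well-separated clusters (class 0 and class 1).
X = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
              [1.0, 1.1], [0.9, 1.0], [1.1, 0.9]])
y = np.array([0, 0, 0, 1, 1, 1])

# "Train": compute one centroid per class.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(points):
    # Assign each point to the class of its nearest centroid.
    d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# "Evaluate": predictions on held-out points near each cluster.
X_test = np.array([[0.05, 0.05], [1.0, 1.0]])
print(predict(X_test))  # [0 1]
```

Swapping this classifier for a scikit-learn estimator changes only the train and predict lines; the surrounding data-preparation and evaluation structure stays the same.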
Posted Date not available
7.0 - 12.0 years
20 - 35 Lacs
Pune
Remote
Skills Required:
- Machine learning, deep learning, artificial intelligence
- Statistical analysis
- Data mining, data wrangling
- Python (NumPy, Pandas, scikit-learn, TensorFlow, computer vision)
- Jupyter / Spyder
- Matplotlib, Seaborn
- Azure Cloud, MLOps
- SQL Server, Oracle, PostgreSQL
- ETL, Erwin / Power BI
- Hadoop, Spark, Hive, GitHub
- Agile, SDLC
Responsibilities:
- We require a well-qualified Data Scientist experienced with vast datasets to break down information; skilled in predictive modelling, data mining, and hypothesis testing.
- Technical leadership and consultancy, hands-on development and delivery, client engagement, and people management in the field of data science.
- Build robust and scalable data infrastructure software.
- Design and create services and system architecture for your projects.
- Deploy ML models; perform post-deployment validation and metrics, and verify data drift.
- Deliver effective presentations using PPTs, Excel, and Power BI reports.
- Build out the production and test infrastructure.
- Develop automation frameworks to reproduce issues and prevent regressions.
- Work closely with other developers providing services to our system.
- Help analyze and understand how customers use the product, and improve it where necessary.
- Good knowledge of SQL Server; experienced in machine learning and AI.
Posted Date not available
3.0 - 8.0 years
5 - 10 Lacs
Bengaluru
Work from Office
Imagine what you could do here. At Apple, we believe new ideas have a way of becoming phenomenal products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish. Do you want to impact the future of manufacturing at Apple through cutting-edge ML techniques? This position involves a wide variety of skills and innovation, and is a rare opportunity to work on groundbreaking new applications of machine learning, from research to implementation. Ultimately, your work would have a huge impact on billions of users across the globe. You can help inspire change by using your skills to influence globally recognized products' supply chains. Apple's Manufacturing & Operations team is looking for an extraordinary Machine Learning Engineer with expertise in Computer Vision (CV) and Large Language Models (LLMs) to join our team. You will craft, design, and implement our machine learning strategy for the massive iPhone, Mac, and iPad supply chains, and help build the future of our manufacturing systems. In this role, you will develop and deploy machine learning models to optimize processes, automate quality control, and improve operational efficiency. The role requires a candidate with practical experience fine-tuning LLMs and applying them in combination with CV techniques to tackle challenges in manufacturing environments.
Description:
As a key member of our team, you'll collaborate with different engineering and operations teams, leading development of ML solutions for a variety of vision- and language-based tasks and projects. You will be responsible for delivering ML technologies aligned with fast-paced new technology and short product lifecycles, while ensuring the highest standards of product quality and reliability. In this role, you'll be embedded in a vibrant team of machine learning engineers and data scientists. You'll be expected to help conceive, code, and deploy machine learning models at scale using the latest industry tools. Important skills include data wrangling, feature engineering, developing models, and testing metrics. You will have the opportunity to work both independently and collaboratively to help partner teams meet predefined objectives. If you are passionate about influencing the quality, speed, and efficiency of our ML algorithms, come and help enable our vision to create the most refined products in the world.
Join our team as a key contributor, leading the development of machine learning solutions for diverse tasks and projects. Collaborate with engineering and operations teams to deliver cutting-edge ML technologies, ensuring top-tier product quality and reliability in a fast-paced environment.
- Develop and deploy scalable Computer Vision and Machine Learning algorithms on local and cloud-based inferencing platforms.
- Perform rapid prototyping to design algorithms for challenging real-world manufacturing problems in the domain of intelligent visual inspection.
- Leverage Large Language Models to automate document analysis, knowledge extraction, and process optimization within manufacturing workflows.
- Fine-tune LLMs for domain-specific applications such as improving operational documentation and production reports, and automating technical analysis.
- Work with resources in cross-functional teams and the factory to integrate ML applications.
- Ability to independently learn new technologies, prioritize tasks and take ownership, and meaningfully present results of analyses in a clear and impactful manner.
- Experience developing deep learning models such as CNNs, Vision Transformers, or YOLO for image-based tasks in production systems.
- Proven research and practical experience in developing algorithms for image processing, content-based video/image analysis, object detection, segmentation, and tracking.
- Proven experience in fine-tuning LLMs for domain-specific use cases such as document analysis and operational efficiency.
- Master's in Computer Science, Machine Learning, or a higher-level degree is preferred, with 3+ years of related industry experience in machine learning, computer science, data science, or related fields.
- iOS CoreImage/CoreML and native app development experience is a big plus.
- Experience deploying ML models in cloud environments (AWS, GCP, or Azure) for scalable production use.
Preferred Qualifications:
- Strong grasp of deep learning principles in both Computer Vision and Natural Language Processing (NLP).
- Familiarity with LLM architectures like BERT and GPT, and experience fine-tuning these models for improved performance in manufacturing settings.
- Knowledge of machine learning and deep learning libraries such as PyTorch, OpenCV, and Hugging Face is essential.
- Proven ability to implement, improve, debug, and maintain machine learning models.
- Familiarity with version control systems such as Git.
- Strong optimization and debugging skills.
- Self-motivated, responsible, with excellent written and verbal interpersonal skills.
- Experience with handling and visualizing very large datasets and creating performance reports.
Posted Date not available