Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
3 - 5 years
20 - 35 Lacs
Pune
Work from Office
Key Responsibilities:
- Leverage traditional machine learning techniques (decision trees, random forests, SVM, logistic regression) to build predictive models.
- Analyze complex datasets to identify key trends and provide data-driven insights for business decisions.
- Partner with cross-functional teams to understand business needs and translate them into effective ML solutions.
- Rigorously evaluate and optimize models for accuracy and real-world performance.
- Apply statistical and mathematical principles to enhance analytical outcomes.
- Communicate findings and model insights to both technical and non-technical stakeholders.
Desired Skills and Qualifications:
- Deep understanding of traditional machine learning algorithms (decision trees, clustering, Naive Bayes, SVM, etc.).
- Strong proficiency in mathematics and statistics (linear algebra, probability, statistical inference).
- Proven ability to translate business problems into analytical frameworks and actionable insights.
- Expertise in data processing, cleaning, and feature engineering using Python (pandas, NumPy).
- Hands-on experience building and deploying ML models using Python and libraries like scikit-learn.
- Excellent analytical, problem-solving, and critical thinking skills.
- Strong communication and collaboration skills to effectively convey technical results.
- Familiarity with ML tools and frameworks (scikit-learn, XGBoost, LightGBM).
- Experience with model evaluation, validation, and optimization techniques.
- Business acumen and the ability to apply ML to solve real-world business challenges.
Preferred Qualifications:
- Bachelor's or Master's degree in Data Science, Statistics, Mathematics, Computer Science, or a related quantitative field.
- 3+ years of experience as a Data Scientist with a focus on machine learning.
- Demonstrated experience in driving business decisions through data analysis and modeling.
- Familiarity with cloud platforms (AWS, Azure, GCP) for data science workflows.
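The build-and-evaluate workflow this role describes can be sketched with scikit-learn; the synthetic dataset, random-forest choice, and ROC-AUC metric below are illustrative stand-ins, not the employer's actual setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a business dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Fit a random forest and estimate real-world performance with 5-fold CV,
# mirroring the "rigorously evaluate and optimize" responsibility above.
model = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"mean ROC-AUC: {scores.mean():.2f}")
```

Cross-validated scores, rather than a single train/test split, are the usual way to back the "real-world performance" claim the listing asks candidates to make.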
Posted 1 month ago
3 - 5 years
5 - 9 Lacs
Mumbai
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Google BigQuery
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Key Responsibilities:
1. Assist with the data platform blueprint and design, encompassing the relevant data platform components.
2. Collaborate with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
3. Perform data engineering tasks such as data modeling, data pipeline builds, data lake builds, and scalable programming frameworks.
Technical Experience:
1. Expert in Python (non-negotiable). Strong hands-on knowledge of SQL (non-negotiable) and Python programming using Pandas and NumPy; deep understanding of data structures (dictionary, array, list, tree, etc.); experience with pytest and code-coverage tooling.
2. Experience building solutions using cloud-native GCP services: bucket storage, BigQuery, Cloud Functions, Pub/Sub, Composer, and Kubernetes (non-negotiable).
3. Proficient with tools to automate Azure DevOps CI/CD pipelines, such as Control-M, GitHub, JIRA, and Confluence.
Professional Attributes:
1. Good communication.
2. Good leadership and team-handling skills.
3. Analytical skills, presentation skills, and the ability to work under pressure.
4. Should be able to work in shifts whenever required.
Additional Info: Qualification - 15 years full time education
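The pandas-plus-pytest expectation above boils down to writing small, testable transforms. A minimal sketch, with invented column names (`id`, `updated_at`) and a pytest-style test function:

```python
import pandas as pd

def dedupe_latest(df: pd.DataFrame) -> pd.DataFrame:
    """Keep the most recent row per id; assumes 'id' and 'updated_at' columns."""
    return (
        df.sort_values(["id", "updated_at"])
          .drop_duplicates("id", keep="last")
          .reset_index(drop=True)
    )

def test_dedupe_latest():
    # A pytest-discoverable test: older row for id 1 is dropped.
    df = pd.DataFrame(
        {"id": [1, 1, 2], "updated_at": [1, 2, 1], "v": ["old", "new", "only"]}
    )
    out = dedupe_latest(df)
    assert list(out["v"]) == ["new", "only"]

test_dedupe_latest()
```

Keeping each pipeline step a pure function like this is what makes the code-coverage requirement practical to meet.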
Posted 1 month ago
1 - 5 years
8 - 12 Lacs
Bengaluru
Work from Office
An AI Engineer at IBM is not just a job title - it's a mindset. You'll leverage the watsonx platform to co-create AI value with clients, focusing on technology patterns to enhance repeatability and delight clients. Success is our passion, and your accomplishments will reflect this, driving your career forward, propelling your team to success, and helping our clients to thrive.
Day-to-Day Duties
Proof of Concept (POC) Development: Develop POCs to validate and showcase the feasibility and effectiveness of the proposed AI solutions. Collaborate with development teams to implement and iterate on POCs, ensuring alignment with customer requirements and expectations.
Collaboration and Project Management: Collaborate with cross-functional teams, including data scientists, software engineers, and project managers, to ensure smooth execution and successful delivery of AI solutions. Effectively communicate project progress, risks, and dependencies to stakeholders.
Customer Engagement and Support: Act as a technical point of contact for customers, addressing their questions, concerns, and feedback. Provide technical support during the solution deployment phase and offer guidance on AI-related best practices and use cases.
Documentation and Knowledge Sharing: Document solution architectures, design decisions, implementation details, and lessons learned. Create technical documentation, white papers, and best practice guides. Contribute to internal knowledge-sharing initiatives and mentor new team members.
Industry Trends and Innovation: Stay up to date with the latest trends and advancements in AI, foundation models, and large language models. Evaluate emerging technologies, tools, and frameworks to assess their potential impact on solution design and implementation.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required Technical and Professional Expertise
Education: Bachelor's, Master's, or Ph.D. degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
Technical Skills: Strong programming skills, with proficiency in Python and experience with AI frameworks such as TensorFlow, PyTorch, Keras, or Hugging Face. Familiarity with libraries such as scikit-learn, Pandas, Matplotlib, etc. Familiarity with cloud platforms (e.g., Kubernetes, AWS, Azure, GCP) and related services is a plus.
Soft Skills: Excellent interpersonal and communication skills. Engage with stakeholders for analysis and implementation. Commitment to continuous learning and staying updated with advancements in the field of AI.
Growth Mindset: Demonstrate a growth mindset to understand clients' business processes and challenges.
Preferred Technical and Professional Experience
Experience: Proven experience in designing and delivering AI solutions, with a focus on foundation models, large language models, exposure to open source, or similar technologies. Experience in natural language processing (NLP) and text analytics is highly desirable. Understanding of machine learning and deep learning algorithms.
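The NLP and text-analytics work the role calls "highly desirable" often starts with a simple baseline before any LLM is involved. A tiny, self-contained sketch on toy reviews (labels and texts invented here):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled reviews (1 = positive, 0 = negative) for illustration only.
texts = ["great product", "terrible service", "love it", "awful experience",
         "great support", "terrible quality"]
labels = [1, 0, 1, 0, 1, 0]

# TF-IDF features feeding a logistic-regression classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
pred = clf.predict(["great service"])[0]
```

A baseline like this gives the POC work above a yardstick: an LLM-powered approach should demonstrably beat it before going into a client deliverable.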
Posted 1 month ago
8 - 13 years
0 Lacs
Bengaluru
Hybrid
The role of this Data/AWS Engineer will be to develop data pipelines for specialty instrument data and Gen AI processes, support the development of classification and prediction models, create and maintain dashboards to monitor data health, and set up and maintain services in AWS to deploy models and collect results. These pipelines will be part of the company's foundational emerging data infrastructure. We seek someone with a growth mindset who is self-motivated, a problem solver, and energized by working at the nexus of leading-edge software and hardware development. This position follows a hybrid work model (3 days a week - Tuesday, Wednesday, and Thursday - working from the GCC office, RMZ Ecoworld, Bellandur, Bangalore).
• Build data pipelines in AWS using S3, Lambda, IoT Core, EC2, and other services.
• Create and maintain dashboards to monitor data health.
• Containerize models and deploy them to AWS.
• Build Python data pipelines that can handle data frames and matrices, and ingest, transform, and store data using Pythonic code practices.
• Maintain the codebase: use OOP and/or FP best practices, write unit tests, etc.
• Work with machine learning engineers to evaluate data and models, and present results to stakeholders in a manner understandable to non-data scientists.
• Mentor other members of the team and review their code.
Required Qualifications:
• Bachelor's in computer science or a related field and at least 8 years of relevant work experience, or equivalent.
• Solid experience in AWS services such as S3, EC2, Lambda, and IAM.
• Experience containerizing and deploying code in AWS.
• Proficient writing OOP and/or functional programming code in Python (e.g., NumPy, pandas, SciPy, scikit-learn).
• Comfortable with Git version control, as well as Bash or the command prompt.
• Comfortable discovering and driving new capabilities, solutions, and data best practices from blogs, white papers, and other technical documentation.
• Able to communicate results using meaningful metrics and visualizations to managers and stakeholders, and receive feedback.
Desired (considered a plus) Qualifications:
• Experience with C#, C++, and .NET.
What We Offer:
• Hybrid role with competitive compensation, great benefits, and continuous professional development.
• An inclusive environment where everyone can contribute their best work and develop to their full potential.
• Reasonable adjustments to the interview process according to your needs.
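The S3/Lambda pipeline work above typically centers on event-driven handlers. A minimal sketch of an S3-triggered Lambda entry point; the bucket and key names are fabricated, and the event dict follows the standard S3 notification shape:

```python
import json
import urllib.parse

def handler(event, context):
    """Pull bucket/key from an S3 put event and hand them to downstream steps."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    # S3 keys arrive URL-encoded in the event payload.
    key = urllib.parse.unquote_plus(record["object"]["key"])
    # Downstream: read the object, transform, write results (omitted here).
    return {"statusCode": 200, "body": json.dumps({"bucket": bucket, "key": key})}

# Local smoke test with a fabricated event of the documented S3 shape.
fake_event = {"Records": [{"s3": {"bucket": {"name": "instrument-data"},
                                  "object": {"key": "raw/run+01.csv"}}}]}
result = handler(fake_event, None)
```

Exercising the handler locally with a fabricated event, as shown, is the unit-testing habit the "write unit tests" bullet implies.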
Posted 1 month ago
15 - 20 years
13 - 18 Lacs
Bengaluru
Work from Office
Project Role: Data Architect
Project Role Description: Define the data requirements and structure for the application. Model and design the application data structure, storage, and integration.
Must-have skills: Google Cloud Platform Architecture
Good-to-have skills: Google Cloud Machine Learning Services
Minimum 15 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: We are looking for a seasoned Senior Manager - CCAI Architect with deep expertise in designing and delivering enterprise-grade Conversational AI solutions using Google Cloud's Contact Center AI (CCAI) suite. This role demands a visionary leader who can bridge the gap between business objectives and AI-driven customer experience innovations. As a senior leader, you will drive architectural decisions, guide cross-functional teams, and define the roadmap for scalable and intelligent virtual agent platforms.
Roles & Responsibilities:
- Own the end-to-end architecture and solution design for large-scale CCAI implementations across industries.
- Define best practices, reusable frameworks, and architectural patterns using Google Dialogflow CX, Agent Assist, Knowledge Bases, and Gen AI capabilities.
- Act as a strategic advisor to stakeholders on how to modernize and transform customer experience through Conversational AI.
- Lead technical teams and partner with product, operations, and engineering leaders to deliver high-impact AI-first customer service platforms.
- Oversee delivery governance, performance optimization, and scalability of deployed CCAI solutions.
- Evaluate and integrate cutting-edge Gen AI models (LLMs, PaLM, Gemini) to enhance virtual agent performance and personalization.
- Enable and mentor architects, developers, and consultants on Google Cloud AI/ML tools and CCAI strategies.
Experience Requirements:
- 10+ years of experience in enterprise solution architecture, with 4+ years in Google Cloud AI/ML and Conversational AI platforms.
- Deep expertise in Dialogflow CX, CCAI Insights, Agent Assist, Gen AI APIs, and GCP architecture.
- Strong leadership in managing large transformation programs involving AI chatbots, voice bots, and omnichannel virtual agents.
- Proven ability to engage with senior stakeholders, define AI strategies, and align technical delivery with business goals.
- Experience integrating AI solutions with CRMs, contact center platforms (e.g., Genesys, Five9), and backend systems.
- Certifications in Google Cloud (e.g., Professional Cloud Architect, Cloud AI Engineer) are a strong plus.
- Exceptional communication, thought leadership, and stakeholder management skills.
Professional & Technical Skills:
- Must Have Skills: Hands-on CCAI/Dialogflow CX experience and an understanding of generative AI.
- Good To Have Skills: Cloud data architecture; Cloud ML/PCA/PDE certification; strong understanding of AI/ML algorithms, NLP, and related techniques; experience with chatbots, generative AI models, and prompt engineering; experience with cloud or on-prem application pipelines of production-ready quality.
Additional Information: The candidate should have a minimum of 18+ years of experience in Google Cloud Machine Learning Services/Gen AI/Vertex AI/CCAI. The ideal candidate will possess a strong educational background in computer science, mathematics, or a related field, along with a proven track record of delivering impactful data-driven solutions. A 15-year full-time education is required.
Qualification: 15 years full time education
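Much of the hands-on Dialogflow CX work above involves webhook fulfillment, where a service returns a JSON payload in the CX WebhookResponse shape. A minimal sketch as a plain dict builder (the reply text is, of course, just an example):

```python
def cx_text_response(reply: str) -> dict:
    """Build a Dialogflow CX WebhookResponse payload with a single text message."""
    return {
        "fulfillmentResponse": {
            "messages": [{"text": {"text": [reply]}}]
        }
    }

payload = cx_text_response("Your order has shipped.")
```

In a real deployment this dict would be serialized and returned from the webhook endpoint that a CX flow calls during fulfillment; richer responses add session parameters and page transitions alongside `messages`.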
Posted 1 month ago
9 - 11 years
11 - 13 Lacs
Hyderabad
Work from Office
About The Role
As a developer / senior developer in the data technologies team, you will be responsible for building and supporting applications / modules / packages (primarily Python-focused, but we also use other languages and technologies as appropriate for the problem at hand) independently, as well as mentoring junior members of the team (doing code and design reviews).
Education: B.Tech / M.Tech in Computer Science / ECE
Required skills:
- Expert in Python and SQL; experience with web scraping, Flask, data structures, algorithms, NumPy, and Pandas
- Able to integrate multiple data sources and databases into one system
- Familiar with JSON and REST APIs
- Strong knowledge of object-oriented and parallel programming techniques in Python
- Experience with test-driven development (TDD)
- Understanding of the threading limitations of Python, and multi-process architecture
- Understanding of the differences between multiple delivery platforms, such as mobile vs desktop, and optimizing output to match the specific platform
- Able to create database schemas that represent and support business processes
- Strong unit test and debugging skills
- Security Master work and Investment Banking or Asset Management experience are most important
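The "create database schemas that represent business processes" requirement can be illustrated with a stdlib-only sketch; the `securities` table and its columns are invented here as a nod to the Security Master domain the listing mentions:

```python
import sqlite3

# In-memory database for illustration; a real system would use a server RDBMS.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE securities ("
    " id INTEGER PRIMARY KEY,"
    " ticker TEXT UNIQUE NOT NULL,"   # business rule: one row per ticker
    " name TEXT NOT NULL)"
)
conn.execute("INSERT INTO securities (ticker, name) VALUES (?, ?)",
             ("AAPL", "Apple Inc."))
row = conn.execute("SELECT name FROM securities WHERE ticker = ?",
                   ("AAPL",)).fetchone()
```

Encoding constraints such as `UNIQUE NOT NULL` directly in the schema is what lets the database, rather than application code, enforce the business process.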
Posted 1 month ago
3 - 6 years
9 - 13 Lacs
Mohali, Gurugram, Bengaluru
Work from Office
Job Title: Data Engineer - Snowflake & Python
About the Role: We are seeking a skilled and proactive Data Developer with 3-5 years of hands-on experience in Snowflake, Python, Streamlit, and SQL, along with expertise in consuming REST APIs and working with modern ETL tools like Matillion and Fivetran. The ideal candidate will have a strong foundation in data modeling, data warehousing, and data profiling, and will play a key role in designing and implementing robust data solutions that drive business insights and innovation.
Key Responsibilities:
- Design, develop, and maintain data pipelines and workflows using Snowflake and an ETL tool (e.g., Matillion, dbt, Fivetran, or similar).
- Develop data applications and dashboards using Python and Streamlit.
- Create and optimize complex SQL queries for data extraction, transformation, and loading.
- Integrate REST APIs for data access and process automation.
- Perform data profiling, quality checks, and troubleshooting to ensure data accuracy and integrity.
- Design and implement scalable and efficient data models aligned with business requirements.
- Collaborate with data analysts, data scientists, and business stakeholders to understand data needs and deliver actionable solutions.
- Implement best practices in data governance, security, and compliance.
Required Skills and Qualifications:
- Experience in HR data and databases is a must.
- 3-5 years of professional experience in a data engineering or development role.
- Strong expertise in Snowflake, including performance tuning and warehouse optimization.
- Proficiency in Python, including data manipulation with libraries like Pandas.
- Experience building web-based data tools using Streamlit.
- Solid understanding of and experience with RESTful APIs and JSON data structures.
- Strong SQL skills and experience with advanced data transformation logic.
- Experience with an ETL tool commonly used with Snowflake (e.g., dbt, Matillion, Fivetran, Airflow).
- Hands-on experience in data modeling (dimensional and normalized), data warehousing concepts, and data profiling techniques.
- Familiarity with version control (e.g., Git) and CI/CD processes is a plus.
Preferred Qualifications:
- Experience working in cloud environments (AWS, Azure, or GCP).
- Knowledge of data governance and cataloging tools.
- Experience with agile methodologies and working in cross-functional teams.
- Experience with Azure Data Factory.
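The data-profiling and quality-check duties above reduce to a few standard measurements per table. A small pandas sketch; the `emp_id`/`dept` columns are invented HR-flavored examples:

```python
import pandas as pd

def profile(df: pd.DataFrame) -> dict:
    """Row count, per-column null fraction, and duplicate-row count."""
    return {
        "rows": len(df),
        "null_frac": df.isna().mean().round(2).to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
    }

# Toy HR table with one null and one exact duplicate row.
hr = pd.DataFrame({"emp_id": [1, 2, 2, 2], "dept": ["HR", None, "IT", "IT"]})
report = profile(hr)
```

In practice the same checks would run against Snowflake query results on a schedule, with thresholds on the report values feeding alerts or dashboard tiles.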
Posted 1 month ago
3 - 5 years
0 - 0 Lacs
Kochi
Work from Office
Location: Trivandrum / Kochi / Bangalore / Chennai
Experience: 4 to 6 years
Job Description: We are looking for an experienced Python Developer with a strong background in software development, machine learning, and data science to join our team. The ideal candidate will have 4+ years of experience in Python development, a solid understanding of data science, and a strong portfolio of projects. You will collaborate with cross-functional teams to design, build, and maintain applications and systems that leverage machine learning and data science techniques.
Key Responsibilities:
- Design, develop, test, and maintain Python applications and systems.
- Leverage libraries and frameworks like NumPy, Pandas, scikit-learn, TensorFlow, and PyTorch to build scalable and efficient solutions.
- Collaborate with data scientists and engineers to implement machine learning models and algorithms.
- Develop and maintain database systems, ensuring proper data flow and storage using SQL or NoSQL databases.
- Work with cloud platforms such as AWS, Google Cloud, or Azure to build and deploy applications.
- Solve complex problems by applying your knowledge of Python, data science, and machine learning.
- Participate in code reviews and maintain high standards for code quality.
- Work closely with cross-functional teams, ensuring seamless integration of systems and applications.
- Provide technical insights and suggestions to improve existing solutions and workflows.
Must-Have Skills:
- 4+ years of experience as a Python Developer with a strong portfolio of projects.
- Bachelor's degree in Computer Science, Software Engineering, or a related field.
- Strong expertise in Python software development, including experience with libraries like NumPy, Pandas, scikit-learn, TensorFlow, and PyTorch.
- Proficiency in database technologies such as SQL and NoSQL.
- Knowledge of data science and machine learning concepts and tools.
- Excellent problem-solving ability with solid communication and collaboration skills.
- A working understanding of cloud platforms such as AWS, Google Cloud, or Azure.
- Ability to write efficient, maintainable, and testable code.
Good-to-Have Skills:
- Experience with version control systems like Git.
- Knowledge of Docker, Kubernetes, or other containerization technologies.
- Experience in developing RESTful APIs and integrating them with Python-based applications.
- Familiarity with DevOps practices and tools for continuous integration and deployment (CI/CD).
- Experience in big data technologies such as Hadoop, Spark, or Kafka.
- Familiarity with front-end technologies like JavaScript, HTML, or CSS.
- Understanding of Agile methodologies and experience working in Agile environments.
- Exposure to advanced machine learning techniques such as deep learning or natural language processing (NLP).
Required Skills: Python, Backend Development, AWS
Posted 1 month ago
4 - 5 years
0 - 0 Lacs
Thiruvananthapuram
Work from Office
Location: Trivandrum, Kerala, India
Experience: 4-5 years
Key Responsibilities:
- Develop, test, and maintain scalable Python-based applications.
- Work with data science tools and libraries to implement ML models.
- Collaborate with cross-functional teams to design, build, and roll out solutions.
- Optimize performance and troubleshoot issues across the software stack.
Required Skills & Qualifications:
- Strong expertise in Python with a solid portfolio of projects.
- Proficiency in libraries and frameworks such as NumPy, Pandas, scikit-learn, TensorFlow, and PyTorch.
- Good understanding of SQL and NoSQL databases.
- Strong problem-solving abilities and excellent communication skills.
- Familiarity with cloud platforms such as AWS, Google Cloud, or Microsoft Azure.
- Sound knowledge of data science and machine learning concepts.
Key Skills: Python, NumPy, Pandas, scikit-learn, TensorFlow, PyTorch, SQL, NoSQL, AWS, Google Cloud, Azure
Required Skills: Python, Flask, MySQL
Posted 1 month ago
7 - 10 years
20 - 35 Lacs
Hyderabad, Bengaluru
Hybrid
Role & Responsibilities
We are seeking an experienced and technically strong Machine Learning Engineer to design, implement, and operationalize ML models across Google Cloud Platform (GCP) and Microsoft Azure. The ideal candidate will have a robust foundation in machine learning algorithms and MLOps practices, and experience deploying models into scalable cloud environments.
Responsibilities:
- Design, develop, and deploy machine learning solutions for use cases in prediction, classification, recommendation, NLP, and time series forecasting.
- Translate data science prototypes into production-grade, scalable models and pipelines.
- Implement and manage end-to-end ML pipelines using Azure ML (Designer, SDK, Pipelines), Data Factory, and Azure Databricks; Vertex AI (Pipelines, Workbench), BigQuery ML, and Dataflow.
- Build and maintain robust MLOps workflows for versioning, retraining, monitoring, and CI/CD using tools like MLflow, Azure DevOps, and GCP Cloud Build.
- Optimize model performance and inference using techniques like hyperparameter tuning, feature selection, model ensembling, and model distillation.
- Use and maintain model registries and feature stores, and ensure reproducibility and governance.
- Collaborate with cloud architects and software engineers to deliver ML-based solutions.
- Maintain and monitor model performance in production using Azure Monitor, Prometheus, Vertex AI Model Monitoring, etc.
- Document ML workflows, APIs, and system design for reusability and scalability.
Primary Skills Required (Must-Have Experience):
- 5-7 years of experience in machine learning engineering or applied ML roles.
- Advanced proficiency in Python, with strong knowledge of libraries such as scikit-learn, Pandas, NumPy, XGBoost, LightGBM, TensorFlow, and PyTorch.
- Solid understanding of core ML concepts: supervised/unsupervised learning, cross-validation, bias-variance tradeoff, and evaluation metrics (ROC-AUC, F1, MSE, etc.).
- Hands-on experience deploying ML models using Azure ML (Endpoints, SDK), AKS, and ACI; Vertex AI (Endpoints, Workbench), Cloud Run, and GKE.
- Familiarity with cloud-native tools for storage, compute, and orchestration: Azure Blob Storage, ADLS Gen2, and Azure Functions; GCP Storage, BigQuery, and Cloud Functions.
- Experience with containerization and orchestration (Docker, Kubernetes, Helm).
- Strong understanding of CI/CD for ML, model testing, reproducibility, and rollback strategies.
- Experience implementing drift detection, model explainability (SHAP, LIME), and responsible AI practices.
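The evaluation metrics the role names (ROC-AUC, F1) can be computed directly with scikit-learn; the labels and scores below are a toy set for illustration:

```python
from sklearn.metrics import f1_score, roc_auc_score

y_true = [0, 0, 1, 1]
y_prob = [0.1, 0.4, 0.35, 0.8]               # model scores, one per sample
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]  # hard labels at a 0.5 cut-off

auc = roc_auc_score(y_true, y_prob)  # ranking quality, threshold-free
f1 = f1_score(y_true, y_pred)        # precision/recall balance at the cut-off
```

Note the asymmetry that matters in MLOps monitoring: ROC-AUC consumes raw scores and ignores the threshold, while F1 depends on the chosen cut-off, so the two can drift independently in production.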
Posted 1 month ago
2 - 5 years
3 - 8 Lacs
Chennai
Work from Office
JOB DESCRIPTION
Title: Software Engineer - Data Science
Role Type: Full time
Location: Chennai
Reports to: Data Scientist
Summary of the profile: We are seeking a highly motivated and innovative Data Scientist with 3-5 years of experience to join our team, with a focus on leveraging Generative AI and Large Language Models (LLMs). The ideal candidate will be passionate about exploring the potential of these cutting-edge technologies to solve complex problems and create impactful solutions. You will collaborate with cross-functional teams to research, develop, and deploy generative AI models and LLM-powered applications.
What you'll do here:
- Explore and experiment with various generative AI models (e.g., GANs, VAEs, diffusion models) and LLMs (e.g., GPT, BERT, Llama).
- Fine-tune pre-trained LLMs for specific tasks and domains.
- Develop and implement generative models for tasks like data augmentation, content generation, and synthetic data creation.
- Perform prompt engineering and prompt iteration.
- Collect, clean, and preprocess large text datasets for LLM training and evaluation.
- Perform natural language processing (NLP) tasks such as text classification, sentiment analysis, and entity recognition.
- Use LLMs to assist in data cleaning and data understanding.
- Develop and evaluate machine learning models, including those leveraging generative AI and LLMs.
- Evaluate model performance using relevant metrics and techniques, including those specific to LLMs (e.g., perplexity, BLEU score).
- Tune hyperparameters to optimize model performance.
- Create compelling visualizations and reports to communicate insights from generative AI and LLM experiments.
- Present findings to both technical and non-technical audiences.
- Collaborate with engineers, product managers, and researchers to define project requirements and deliver solutions.
- Communicate effectively with team members and stakeholders, both verbally and in writing.
- Stay up-to-date with the latest advancements in generative AI and LLMs.
- Assist in deploying generative AI models and LLM-powered applications to production environments.
- Monitor model performance and retrain models as needed.
What you will need to thrive:
- 3-5 years of professional experience in data science, with a growing interest in Generative AI and LLMs.
- Proficiency in Python and relevant libraries (e.g., NumPy, Pandas, scikit-learn, TensorFlow, PyTorch, Hugging Face Transformers).
- Experience with NLP techniques and libraries.
- Understanding of machine learning algorithms and statistical concepts.
- Experience with data visualization tools (e.g., Matplotlib, Seaborn, Tableau, Power BI).
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
- Experience with prompt engineering, version control systems like Git, and cloud platforms like GCP, AWS, or Azure is a plus.
Preferred Skills:
- Experience with fine-tuning LLMs for specific tasks.
- Experience with deploying LLMs in production environments.
- Knowledge of deep learning architectures relevant to generative AI and LLMs.
- Familiarity with vector databases.
- Experience with RAG (Retrieval Augmented Generation) architectures.
- Experience with the LangChain library.
Education:
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or a related field.
Core competencies:
- Communication, analytical, and problem-solving skills.
About RESULTICKS
RESULTICKS is a fully integrated connected experience platform for real-time audience engagement. RESULTICKS empowers brands to create data-driven communications to their audiences and drives business decisions through its award-winning, AI-powered big data cloud solution. Supported by the world's first customer data blockchain, RESULTICKS unlocks incredible digital transformation possibilities by delivering a full 360-degree view of the customer through data consolidation and omnichannel orchestration. Built from the ground up by experts in marketing, technology, and business management, RESULTICKS enables brands to make a transformational leap to conversion-driven, growth-focused personalized engagement. With an advanced CDP at its core, RESULTICKS offers AI-powered omnichannel orchestration, complete analytics, next-best engagement, attribution at the segment-of-one level, and the world's first marketing blockchain.
- RESULTICKS has been named to the Gartner Magic Quadrant 2021 for Multichannel Marketing Hubs for five years in a row and has been awarded the Microsoft "Elevating Customer Experience with AI" award.
- Headquartered in Singapore and New York City, RESULTICKS is one of the top global martech solutions servicing both B2B/B2B2C and B2C segments. RESULTICKS' global presence includes the United States, India, Singapore, Indonesia, Malaysia, Vietnam, Thailand, and the Philippines.
Watch video on RESULTICKS - https://www.youtube.com/watch?v=G_OwGy6unP8
Copyright © RESULTICKS Solution Inc
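The prompt engineering and prompt iteration work this posting describes usually starts with versioned, parameterized templates rather than ad-hoc strings. A minimal sketch; the template wording, version name, and classification task are all hypothetical:

```python
# A versioned prompt template: iterating means editing this constant and
# re-running the same evaluation set against the model.
PROMPT_V2 = (
    "You are a sentiment classifier.\n"
    "Answer with exactly one word: positive or negative.\n"
    "Review: {review}\n"
    "Label:"
)

def build_prompt(review: str) -> str:
    """Fill the template; strips stray whitespace from the user-supplied text."""
    return PROMPT_V2.format(review=review.strip())

prompt = build_prompt("  Loved the onboarding flow.  ")
```

Keeping templates as named constants makes prompt iteration reviewable in version control, the same way the listing expects Git discipline for model code.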
Posted 1 month ago
6 - 11 years
25 - 40 Lacs
Pune
Hybrid
Role Definition: Data Scientists focus on researching and developing AI algorithms and models. They analyse data, build predictive models, and apply machine learning techniques to solve complex problems.
Skills:
Proficient:
- Languages/Framework: FastAPI, Azure UI Search API (React)
- Databases and ETL: Cosmos DB (API for MongoDB), Data Factory, Databricks
- Proficiency in Python and R
- Cloud: Azure Cloud basics (Azure DevOps)
- GitLab: GitLab pipelines
- Ansible and REX: REX deployment
- Data Science: prompt engineering + modern testing
- Data mining and cleaning
- ML (supervised/unsupervised learning)
- NLP techniques; knowledge of deep learning techniques including RNNs and transformers
- End-to-end AI solution delivery
- AI integration and deployment
- AI frameworks (PyTorch)
- MLOps frameworks
- Model deployment processes
- Data pipeline monitoring
Expert (in addition to Proficient skills):
- Languages/Framework: Azure OpenAI
- Data Science: OpenAI GPT family of models (4o/4/3), embeddings + vector search
- Databases and ETL: Azure Storage Account
- Expertise in machine learning algorithms (supervised, unsupervised, reinforcement learning)
- Proficiency in deep learning frameworks (TensorFlow, PyTorch)
- Strong mathematical foundation (linear algebra, calculus, probability, statistics)
- Research methodology and experimental design
- Proficiency in data analysis tools (Pandas, NumPy, SQL)
- Strong statistical and probabilistic modelling skills
- Data visualization skills (Matplotlib, Seaborn, Tableau)
- Knowledge of big data technologies (Spark, Hive)
- Experience with AI-driven analytics and decision-making systems
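The "embeddings + vector search" item above comes down to nearest-neighbor lookup over embedding vectors. A brute-force cosine-similarity sketch with NumPy on toy two-dimensional "embeddings" (real embeddings have hundreds of dimensions and come from a model):

```python
import numpy as np

# Toy "embeddings" for three documents and one query.
doc_vecs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
query = np.array([1.0, 0.1])

# Cosine similarity of the query against every document, then take the best.
sims = doc_vecs @ query / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query))
best = int(np.argmax(sims))
```

Production vector search (e.g., the Azure services this listing names) replaces the brute-force scan with an approximate index, but the scoring idea is the same.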
Posted 1 month ago
6 - 9 years
20 - 25 Lacs
Bengaluru
Work from Office
GEN AI Developer
Job Title: Technical Lead
Experience: 6-9 years
Location: Bangalore
Work from Office
Required Qualifications:
- Bachelor's or Master's degree in Computer Science, AI, Data Science, or a related field.
- 6+ years of experience in Python development, including AI/ML applications.
- Strong expertise in Generative AI, NLP, and machine learning frameworks (e.g., PyTorch, TensorFlow, Hugging Face).
- Deep knowledge of cloud platforms (AWS, GCP, Azure) and serverless architectures (e.g., AWS Lambda, GCP Cloud Functions, Azure Functions).
- Hands-on experience with CI/CD pipelines, Docker, Kubernetes, and infrastructure automation tools (e.g., Terraform, CloudFormation).
- Must be interested in building test / QE solutions.
Posted 1 month ago
5 - 10 years
10 - 20 Lacs
Chennai
Work from Office
Note: We are looking for candidates to work from the office and who are currently serving their notice period.
Tech Stack:
- Strong understanding of Python frameworks and libraries
- Proficiency in SQL and experience with databases like PostgreSQL / Oracle
- Experience with cloud services (AWS, Azure, or GCP) and containerization (Docker, Kubernetes)
- Demonstrated problem solving (algorithms, hackathons, etc.)
Good to have: machine learning, AI, and data science exposure.
Product development experience is preferred.
Posted 1 month ago
6 - 10 years
12 - 18 Lacs
Hyderabad
Work from Office
Role & responsibilities
- 5+ years of hands-on experience in Python and data science tools like pandas, NumPy, SciPy, and Matplotlib, with strong exposure to regular expressions.
- 5+ years of hands-on experience with machine learning algorithms such as SVM, CART, bagging and boosting algorithms, NLP-based ML algorithms, and text mining.
- Hands-on expertise in integrating multiple data sources into a streamlined pipeline.
- Strong knowledge of OOP and of creating custom Python packages for serverless applications.
- Takes responsibility and ownership for work assignments, tasks, and timelines.
- Functions as a hands-on manager, engaging in the work with the team to stay relevant and current, act as back-up, and remain aware of the team's needs.
- Provides knowledge transfer to team members and supports their learning.
- Demonstrates excellent written and verbal communication skills, strong analytical skills, and attention to detail.
- Strong knowledge of SQL querying.
- Hands-on experience with AWS services like Lambda, EC2, EMR, S3, Athena, Batch, Textract, and Comprehend.
- Strong expertise in extracting text, tables, and logos from low-quality scanned multipage PDFs (80-150 dpi) and images.
- Good understanding of probability and statistics concepts and the ability to find hidden patterns and relevant insights in data.
- Expertise in image processing tools like OpenCV and scikit-image.
- Knowledge of training NLP-based machine learning and deep learning models to enhance and validate text extraction from images and PDFs.
- Inquisitive thinking, a passion for consistently learning and implementing the latest technologies, and the ability to manage multiple projects.

Preferred candidate profile
- Balance team and individual responsibilities.
- Build analytic systems and predictive models.
- Build customer-focused Next Best Action, attribution, and segmentation models.
- Build, lead, and orchestrate the global data science community.
- Coach, guide, and direct teams of internal and vendor resources.
- Co-create with business/market/functions or IT platforms on requirements.
- Communicate complex analytical findings and implications to business leaders.
- Conduct and lead meetings and workshops to explain findings.
- Contribute to building a positive team spirit.
- Create alternative model approaches to assess complex model designs and advance future capabilities.
- Design and build dataset processes for modeling, data mining, and production purposes.
- Design and develop customized interactive reports and dashboards in Tableau.
- Design and develop machine learning / artificial intelligence (ML/AI) algorithms.
- Design and evaluate novel, scalable approaches to experimentation.
- Design and implement prototypes using the Python and Java programming languages.
- Determine the best format to execute ideas.
- Determine new ways to improve data and search quality and predictive capabilities.

Good to have
- Knowledge of applying state-of-the-art NLP models like BERT, GPT-x, sciSpacy, bidirectional LSTM-CNNs, RNNs, and AWS Comprehend Medical for clinical named entity recognition (NER).
- Strong leadership skills.
- Deployment of custom-trained and prebuilt NER models using AWS SageMaker.
- Knowledge of setting up an AWS Textract pipeline for large-scale text processing using AWS SNS, AWS SQS, Lambda, and EC2.
- Intellectual curiosity to learn new things.
- ISMS responsibilities should be followed as per company policy.
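Several items above combine regular expressions with text extracted from low-quality scans. A minimal sketch of that kind of post-OCR field extraction using Python's standard `re` module (the invoice text and patterns are invented for illustration; real OCR output would need looser, more defensive patterns):

```python
import re

# Hypothetical OCR output from a low-quality scanned page: spacing and
# punctuation are unreliable, so the patterns tolerate stray whitespace.
ocr_text = """
Invoice No : INV-2024-0117
Date: 03/15/2024   Amount Due :  $1,245.50
"""

# Invoice numbers follow a fixed shape; amounts carry thousands separators.
invoice = re.search(r"INV-\d{4}-\d{4}", ocr_text)
amount = re.search(r"\$\s*([\d,]+\.\d{2})", ocr_text)

print(invoice.group(0))                        # INV-2024-0117
print(float(amount.group(1).replace(",", "")))  # 1245.5
```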
Posted 1 month ago
2 - 3 years
4 - 5 Lacs
Bengaluru
Work from Office
As a skilled developer, you will be responsible for building tools and applications that use the data held within company databases. The primary responsibility will be to design and develop these layers of our applications and to coordinate with the rest of the team working on different layers of the IT infrastructure. A commitment to collaborative problem solving, sophisticated design, and quality products is essential.

Python Developer
Necessary Skills:
- Experience in data wrangling and manipulation with Python/Pandas.
- Experience with Docker containers.
- Knowledge of data structures, algorithms, and data modeling.
- Experience with versioning (Git, Azure DevOps).
- Design and implementation of ETL/ELT pipelines.
- Good knowledge of and experience with web scraping (Scrapy, BeautifulSoup, Selenium).
- Expertise in at least one popular Python framework (like Django, Flask, or Pyramid).
- Ability to design, build, and maintain efficient, reusable, and reliable Python code (SOLID design principles).
- Experience with SQL databases (views, stored procedures, etc.).

Responsibilities and Activities
Aside from the core development role, this position includes auxiliary roles that are not related to development. The role includes but is not limited to:
- Support and maintenance of custom and previously developed tools, as well as ensuring the performance and responsiveness of new applications.
- Delivering high-quality and reliable applications, including development and front-end work, while maintaining code quality, prioritizing organization, and driving automation.
- Participating in the peer review of plans, technical solutions, and related documentation (mapping/documenting technical procedures).
- Identifying security issues, bottlenecks, and bugs, and implementing solutions to mitigate and address issues of service data security and data breaches.
- Working with SQL/Postgres databases: installing and maintaining database systems and supporting server management, including backups, in addition to troubleshooting issues raised by the Data Processing team.
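The listing asks for experience with SQL views. A small illustration using Python's built-in `sqlite3` as a stand-in for the PostgreSQL instance the role describes (table, view, and data are hypothetical):

```python
import sqlite3

# In-memory SQLite stands in for PostgreSQL; names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "south", 120.0), (2, "north", 80.0), (3, "south", 40.0)])

# A view encapsulates the aggregation so application code can query it
# like an ordinary table.
conn.execute("""CREATE VIEW region_totals AS
                SELECT region, SUM(total) AS revenue
                FROM orders GROUP BY region""")

rows = conn.execute(
    "SELECT region, revenue FROM region_totals ORDER BY region").fetchall()
print(rows)  # [('north', 80.0), ('south', 160.0)]
```

Stored procedures have no SQLite equivalent; in PostgreSQL they would be written with `CREATE PROCEDURE`/`CREATE FUNCTION` instead.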
Posted 1 month ago
4 - 9 years
5 - 10 Lacs
Bengaluru
Work from Office
Hello, we are hiring a "Sr Python Developer + ML" for our Bangalore location.
Experience: 4+ years
Location: Bangalore
Notice period: immediate joiners (candidates who have served their notice period)
JD:
1. Good hands-on experience with the Python programming language.
2. Knowledge of data preparation and data analysis using Python pandas or equivalent.
3. Building machine learning models, covering end-to-end steps from data understanding to inference generation.
4. Producing reports using Python visualization libraries like Matplotlib, Seaborn, etc.
5. Experience with ML use cases, supervised and unsupervised learning.
6. Understanding of recommendation systems.
7. Awareness of AWS cloud components like S3, Lambda, and SageMaker.
8. Awareness of GCP cloud components like GCS, Vertex AI, and notebook scheduling.
9. Database: BigQuery, to perform CRUD operations.
10. Some familiarity with Linux systems, to interact with servers and set up/run cron jobs.
11. Knowledge of implementing deep learning / new AI models.
12. Soft skills: team communication and analytical ability.
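Item 2 above, data preparation with pandas, typically means imputing missing values and aggregating before modeling. A toy sketch with invented column names and data:

```python
import pandas as pd

# Toy dataset standing in for real input; column names are illustrative.
raw = pd.DataFrame({
    "user":  ["a", "b", "a", "c", "b"],
    "spend": [10.0, None, 30.0, 5.0, 15.0],
})

# Typical preparation steps: impute missing values with the column median,
# then aggregate per user for downstream modeling.
clean = raw.assign(spend=raw["spend"].fillna(raw["spend"].median()))
per_user = clean.groupby("user", as_index=False)["spend"].sum()
print(per_user)
```

The median of the observed values is 12.5, so user "b" ends up with 12.5 + 15 = 27.5 after imputation and aggregation.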
Posted 1 month ago
3 - 5 years
6 - 9 Lacs
Chennai
Hybrid
We have an urgent opening for the role of Python Developer with an MNC in the IT sector.
1. Strong skills in core Python
2. Experience with libraries like NumPy, Pandas, or other data processing libraries
3. Autonomous, individual contribution in Python development
4. Able to communicate with business stakeholders and gather requirements clearly
5. A good eye for clean design principles
Experience: 3-5 yrs
Location: Chennai (Chengalpattu)
Work mode: Hybrid (3 days' work from office)
Notice period: 0-30 days
Posted 1 month ago
- 5 years
5 - 10 Lacs
Bengaluru
Work from Office
KEY ACCOUNTABILITIES
- Collaborate with cross-functional teams (e.g., data scientists, software engineers, product managers) to define ML problems and objectives.
- Research, design, and implement machine learning algorithms and models (e.g., supervised, unsupervised, deep learning, reinforcement learning).
- Analyse and preprocess large-scale datasets for training and evaluation.
- Train, test, and optimize ML models for accuracy, scalability, and performance.
- Deploy ML models in production using cloud platforms and/or MLOps best practices.
- Monitor and evaluate model performance over time, ensuring reliability and robustness.
- Document findings, methodologies, and results to share insights with stakeholders.
QUALIFICATIONS, EXPERIENCE AND SKILLS
- Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, or a related field (graduation within the last 12 months or upcoming).
- Proficiency in Python or a similar language, with experience in frameworks like TensorFlow, PyTorch, or Scikit-learn.
- Strong foundation in linear algebra, probability, statistics, and optimization techniques.
- Familiarity with machine learning algorithms (e.g., decision trees, SVMs, neural networks) and concepts like feature engineering, overfitting, and regularization.
- Hands-on experience working with structured and unstructured data using tools like Pandas, SQL, or Spark.
- Ability to think critically and apply your knowledge to solve complex ML problems.
- Strong communication and collaboration skills to work effectively in diverse teams.
Additional Skills (Good to have)
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and MLOps tools (e.g., MLflow, Kubeflow).
- Knowledge of distributed computing or big data technologies (e.g., Hadoop, Apache Spark).
- Previous internships, academic research, or projects showcasing your ML skills.
- Familiarity with deployment frameworks like Docker and Kubernetes.
#LI-AA6
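The regularization concept named above can be illustrated with closed-form ridge regression in NumPy: the penalty term shrinks the fitted weights relative to ordinary least squares, which is the basic mechanism for trading a little bias against the variance that drives overfitting. The data below is synthetic:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam*I)^-1 X^T y.
    lam=0 recovers ordinary least squares."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
w_true = np.array([1.0, -2.0, 0.0, 0.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=20)

w_ols = ridge_fit(X, y, lam=0.0)
w_ridge = ridge_fit(X, y, lam=10.0)

# The penalty shrinks the weight vector toward zero.
print(np.linalg.norm(w_ridge) < np.linalg.norm(w_ols))  # True
```

In the SVD basis each coefficient is scaled by sigma^2/(sigma^2 + lambda) < 1, so the shrinkage holds for any lambda > 0.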
Posted 1 month ago
- 5 years
15 - 20 Lacs
Bengaluru
Work from Office
KEY ACCOUNTABILITIES
- Collaborate with cross-functional teams (e.g., data scientists, software engineers, product managers) to define ML problems and objectives.
- Research, design, and implement machine learning algorithms and models (e.g., supervised, unsupervised, deep learning, reinforcement learning).
- Analyse and preprocess large-scale datasets for training and evaluation.
- Train, test, and optimize ML models for accuracy, scalability, and performance.
- Deploy ML models in production using cloud platforms and/or MLOps best practices.
- Monitor and evaluate model performance over time, ensuring reliability and robustness.
- Document findings, methodologies, and results to share insights with stakeholders.
QUALIFICATIONS, EXPERIENCE AND SKILLS
- Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, or a related field (graduation within the last 12 months or upcoming).
- Proficiency in Python or a similar language, with experience in frameworks like TensorFlow, PyTorch, or Scikit-learn.
- Strong foundation in linear algebra, probability, statistics, and optimization techniques.
- Familiarity with machine learning algorithms (e.g., decision trees, SVMs, neural networks) and concepts like feature engineering, overfitting, and regularization.
- Hands-on experience working with structured and unstructured data using tools like Pandas, SQL, or Spark.
- Ability to think critically and apply your knowledge to solve complex ML problems.
- Strong communication and collaboration skills to work effectively in diverse teams.
Additional Skills (Good to have)
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and MLOps tools (e.g., MLflow, Kubeflow).
- Knowledge of distributed computing or big data technologies (e.g., Hadoop, Apache Spark).
- Previous internships, academic research, or projects showcasing your ML skills.
- Familiarity with deployment frameworks like Docker and Kubernetes.
#LI-MP1
Posted 1 month ago
2 - 6 years
8 - 13 Lacs
Mumbai
Work from Office
About The Role: In Scope of Position-based Promotions (INTERNAL only)
Job Title: Capital & Liquidity Management Analyst
Location: Mumbai, India
Corporate Title: Analyst

Role Description
Group Capital Management plays a central role in the execution of DB's strategy. While Group Capital Management manages DB Group's solvency ratios (CET1, T1, total capital ratio, leverage ratio, MREL/TLAC ratios, ECA ratio) together with business divisions and other infrastructure functions, EMEA Treasury manages, in addition to the solvency ratios of DB's EMEA entities, also the liquidity ratios and Treasury Pool activities. Thereby, EMEA Treasury links into DB Group's strategy and manages execution at a local level.

Treasury
Treasury at Deutsche Bank is responsible for the sourcing, management, and optimization of liquidity and capital to deliver high-value risk management decisions. This is underpinned by a best-in-class, integrated, and consistent Treasury risk framework, which enables Treasury to clearly identify the Bank's resource demands, transparently set incentives by allocating resource costs to businesses, and manage to evolving regulation. Treasury's fiduciary mandate, which encompasses the Bank's funding pools, asset and liability management (ALM), and fiduciary buffer management, supports businesses in delivering their strategic targets at the global and local level. Further, Treasury manages the optimization of all financial resources through all lenses to implement the group's strategic objective and maximize long-term return on average tangible shareholders' equity (RoTE). The current role is part of the Treasury Office in DBC Mumbai. The role requires interactions with all key hubs, i.e. London, New York, Frankfurt, and Singapore.

What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for those 35 yrs. and above

Your key responsibilities
The core deliverables for this role are:
- Write code and implement solutions based on specifications.
- Update, design, and implement changes to the existing software architecture.
- Build complex enhancements and resolve bugs.
- Build and execute unit tests and unit plans.
- Handle implementation tasks that are varied and complex, needing independent judgment.
- Build a technology solution which is sustainable, repeatable, and agile.
- Align with the business and gain an understanding of different treasury functions.

Your skills and experience
- Must have core capabilities of strong development experience in Python- and Oracle-based applications
- Strong in algorithms, data structures, and SQL
- Some experience with integration/build/testing tools
- Good to have: working knowledge of visualization libraries like Plotly, Matplotlib, Seaborn, etc.
- Exposure to web service and web server / application server based development would be an added advantage but is not mandatory
- A basic understanding of balance sheet and treasury concepts is desirable but not mandatory
- Effective organizational and interpersonal skills
- A self-starting willingness to get things done
- A highly motivated team player with a strong technical background and good communication skills
- Urgency: prioritize based on the need of the hour
- An aptitude for learning new tools and technologies
- Engineering graduate / BS or MS degree or equivalent experience relevant to the functional area
- 3+ years of software engineering or related experience is a must

How we'll support you
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative, and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
Posted 1 month ago
5 - 9 years
12 - 36 Lacs
Hyderabad
Work from Office
Sr. AI/ML Python Developer (5-8 yrs)
- Collaborate with cross-functional teams on data analysis and statistics.
- Develop ML models using Python, NumPy, Pandas, deep learning, and NLP.
- Implement data pipelines and deploy models on TensorFlow Serving and GCP.
Drop your resume to rajkalyan@garudaven.com
Benefits: food allowance, annual bonus, provident fund, health insurance, office cab/shuttle
Posted 1 month ago
5 - 8 years
13 - 18 Lacs
Mohali, Gurugram, Bengaluru
Work from Office
Job Title: Sr Data Engineer – Snowflake & Python

About the Role: We are seeking a skilled and proactive Sr Data Developer with 5-8 years of hands-on experience in Snowflake, Python, Streamlit, and SQL, along with expertise in consuming REST APIs and working with modern ETL tools like Matillion, Fivetran, etc. The ideal candidate will have a strong foundation in data modeling, data warehousing, and data profiling, and will play a key role in designing and implementing robust data solutions that drive business insights and innovation.

Key Responsibilities:
- Design, develop, and maintain data pipelines and workflows using Snowflake and an ETL tool (e.g., Matillion, dbt, Fivetran, or similar).
- Develop data applications and dashboards using Python and Streamlit.
- Create and optimize complex SQL queries for data extraction, transformation, and loading.
- Integrate REST APIs for data access and process automation.
- Perform data profiling, quality checks, and troubleshooting to ensure data accuracy and integrity.
- Design and implement scalable and efficient data models aligned with business requirements.
- Collaborate with data analysts, data scientists, and business stakeholders to understand data needs and deliver actionable solutions.
- Implement best practices in data governance, security, and compliance.

Required Skills and Qualifications:
- Experience in HR data and databases is a must.
- 5-8 years of professional experience in a data engineering or development role.
- Strong expertise in Snowflake, including performance tuning and warehouse optimization.
- Proficiency in Python, including data manipulation with libraries like Pandas.
- Experience building web-based data tools using Streamlit.
- Solid understanding of and experience with RESTful APIs and JSON data structures.
- Strong SQL skills and experience with advanced data transformation logic.
- Experience with an ETL tool commonly used with Snowflake (e.g., dbt, Matillion, Fivetran, Airflow).
- Hands-on experience in data modeling (dimensional and normalized), data warehousing concepts, and data profiling techniques.
- Familiarity with version control (e.g., Git) and CI/CD processes is a plus.

Preferred Qualifications:
- Experience working in cloud environments (AWS, Azure, or GCP).
- Knowledge of data governance and cataloging tools.
- Experience with agile methodologies and working in cross-functional teams.
- Experience with Azure Data Factory.
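The data-profiling responsibility above usually starts with simple checks: null counts, duplicate rows, and value ranges. A hedged sketch in pandas, using an invented HR-style extract in place of a real Snowflake result set:

```python
import pandas as pd

# Illustrative extract standing in for a Snowflake query result.
df = pd.DataFrame({
    "employee_id": [101, 102, 102, 104],
    "department":  ["HR", "IT", "IT", None],
    "salary":      [50000, 72000, 72000, 65000],
})

# Basic profiling checks before loading downstream: nulls, duplicates, ranges.
profile = {
    "rows": len(df),
    "null_counts": df.isna().sum().to_dict(),
    "duplicate_rows": int(df.duplicated().sum()),
    "salary_range": (int(df["salary"].min()), int(df["salary"].max())),
}
print(profile)
```

In a Streamlit app, a dictionary like this would feed directly into `st.metric` or `st.table` widgets for a data-quality dashboard.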
Posted 1 month ago
8 - 11 years
20 - 32 Lacs
Bengaluru
Work from Office
8+ yrs of experience required.
- Python 3.x: design patterns; strong problem-solving and analytical skills
- Frameworks: Django, Streamlit, Dash, or similar
- APIs: REST, GraphQL
- Data libraries: pandas, NumPy, SciPy
- Databases: SQL and NoSQL (PostgreSQL)
- DevOps: CI/CD, Docker, Kubernetes
- Cloud: AWS, Azure, or GCP; Terraform
Posted 1 month ago
3 - 7 years
20 - 25 Lacs
Pune
Remote
1. Extract and transform data from Google BigQuery and other relevant data sources.
2. Utilize Python and libraries such as Pandas and NumPy to manipulate, clean, and analyze large datasets.
3. Develop and implement Python scripts to automate data extraction, processing, and analysis for comparison reports.
4. Design and execute queries in BigQuery to retrieve specific data sets required for comparison analysis.
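Steps 2-3 above culminate in comparison reports. One common pandas pattern for comparing two extracts is an outer merge with `indicator=True`; the frames below are toy stand-ins for BigQuery query results:

```python
import pandas as pd

# Two illustrative extracts standing in for BigQuery query results.
source = pd.DataFrame({"id": [1, 2, 3], "amount": [100, 200, 300]})
target = pd.DataFrame({"id": [2, 3, 4], "amount": [200, 350, 400]})

# An outer merge with indicator=True yields a comparison report: rows only
# in one side, plus rows present in both whose values disagree.
report = source.merge(target, on="id", how="outer",
                      suffixes=("_src", "_tgt"), indicator=True)
mismatches = report[(report["_merge"] == "both") &
                    (report["amount_src"] != report["amount_tgt"])]
print(report["_merge"].value_counts().to_dict())
print(mismatches["id"].tolist())  # ids present in both sides but with different amounts
```

The `_merge` column classifies each row as `left_only`, `right_only`, or `both`, which maps directly onto the "missing in target / missing in source / value drift" sections of a typical comparison report.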
Posted 1 month ago