8.0 - 10.0 years
11 - 18 Lacs
Noida
Work from Office
Role Summary:
We are seeking a highly skilled Senior Data Science Consultant with 8+ years of experience to lead an internal optimization initiative. The ideal candidate has a strong background in data science, operations research, and mathematical optimization, with a proven track record of applying these skills to solve complex business problems. This role requires a blend of technical depth, business acumen, and collaborative communication. A background in internal efficiency/operations improvement or cost/resource optimization projects is highly desirable.

Key Responsibilities:
- Lead and contribute to internal optimization-focused data science projects from design to deployment.
- Develop and implement mathematical models to optimize resource allocation, process performance, and decision-making.
- Apply techniques such as linear programming, mixed-integer programming, and heuristic and metaheuristic algorithms.
- Collaborate with business stakeholders to gather requirements and translate them into data science use cases.
- Build robust data pipelines and use statistical and machine learning methods to drive insights.
- Communicate complex technical findings clearly and concisely to both technical and non-technical audiences.
- Mentor junior team members and contribute to knowledge sharing and best practices within the team.

Required Skills and Qualifications:
- Master's or PhD in Data Science, Computer Science, Operations Research, Applied Mathematics, or a related field.
- Minimum 8 years of relevant experience in data science, with a strong focus on optimization.
- Expertise in Python (NumPy, Pandas, SciPy, scikit-learn), SQL, and optimization libraries such as PuLP, Pyomo, Gurobi, or CPLEX.
- Experience with the end-to-end lifecycle of internal optimization projects.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.
Preferred Qualifications:
- Experience on internal company projects focused on logistics, resource planning, workforce optimization, or cost reduction.
- Exposure to tools/platforms such as Databricks, Azure ML, or AWS SageMaker.
- Familiarity with dashboards and visualization tools such as Power BI or Tableau.
- Prior experience in consulting or internal centers of excellence (CoE) is a plus.
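The linear-programming work this role describes can be illustrated with a minimal resource-allocation sketch using SciPy (one of the libraries the posting lists). The two products, profit coefficients, and resource limits below are invented purely for illustration:

```python
from scipy.optimize import linprog

# Hypothetical problem: choose production quantities of two products to
# maximize profit subject to labor and material limits.
# linprog minimizes, so profits are negated.
profits = [-40.0, -30.0]                      # profit per unit (negated)
A_ub = [[1.0, 1.0],                           # labor hours per unit
        [2.0, 1.0]]                           # material units per unit
b_ub = [100.0, 150.0]                         # available labor, material

res = linprog(profits, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")

plan = dict(zip(["product_a", "product_b"], res.x))
profit = -res.fun                             # undo the negation
```

Mixed-integer variants of the same model (e.g. whole units only) would move to `scipy.optimize.milp`, PuLP, or a commercial solver such as Gurobi or CPLEX.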
Posted 1 month ago
3.0 - 8.0 years
9 - 19 Lacs
Ahmedabad
Hybrid
We're seeking an experienced AI/ML Engineer who thrives on curiosity and innovation to join our dynamic engineering team. This role offers an exciting chance to tackle impactful real-world machine learning challenges and play a key part in developing advanced, state-of-the-art AI solutions.

Role & responsibilities
- Architect, train, fine-tune, and distill LLMs and deep-learning models for NLP, generative AI, and agentic AI applications.
- Assist in designing, developing, and training machine learning models using structured and unstructured data.
- Conduct thorough testing, evaluation, benchmarking, and validation to ensure performance and robustness.
- Deploy models into production using MLOps pipelines (CI/CD, versioning, monitoring).
- Optimize models and inference pipelines for latency, throughput, and compute efficiency.
- Mentor junior engineers and collaborate cross-functionally with data engineers, DevOps, and product managers.
- Curate, cleanse, and transform extensive datasets to prepare them for robust model development.
- Conduct insightful exploratory data analysis and apply advanced statistical modeling techniques.
- Execute systematic experiments, optimize hyperparameters, and rigorously assess models using established performance metrics.
- Maintain comprehensive documentation detailing model architectures, data pipelines, and experimental outcomes.
- Develop and integrate AI-enhanced tools including web-search functionalities, intelligent image editing, and related applications.
- Support the integration of AI/ML models into production environments.

Skill Sets We Require
- Frameworks: PyTorch (latest, including TorchDynamo), TensorFlow 2.x.
- Fine-tuning & distillation: teacher-student training, supervised fine-tuning, response-based distillation, and Transformers.
- Model lifecycle: hands-on experience with Python and ML libraries such as scikit-learn, pandas, and NumPy.
- Understanding of probability, statistics, and linear algebra.
- APIs & storage: RESTful APIs, ONNX, SQL/NoSQL databases, S3 buckets.
- CI/CD: Git, GitHub Actions, Docker Registry, automated testing & deployment (good to have).

Required Qualifications
- B.Tech/BE in Computer Science or an equivalent field.
- 4+ years of hands-on experience in training, tuning, and deploying LLMs and deep-learning models.
- Proficient with PyTorch and Transformer frameworks.
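The response-based distillation the posting names can be sketched in a few lines of NumPy: the student is trained to match the teacher's temperature-softened output distribution via a KL-divergence loss, with the usual T² scaling. The logits below are invented for illustration; in practice both would come from model forward passes:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax (numerically stabilized)."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on T-softened distributions, scaled by T^2
    so gradients keep comparable magnitude across temperatures."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    return float(T * T * np.mean(kl))
```

A perfectly matched student incurs zero loss; any divergence from the teacher's softened distribution is penalized, which is what response-based distillation optimizes.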
Posted 1 month ago
4.0 - 7.0 years
20 - 25 Lacs
Pune
Hybrid
So, what’s the role all about?
We are seeking a highly skilled and experienced Senior Data Scientist to join our dynamic team. The ideal candidate has a minimum of 4 years of experience in data science, with hands-on experience developing and implementing Generative AI solutions. The Senior Data Scientist will be responsible for developing machine learning models and collaborating with cross-functional teams to solve complex business problems.

How will you make an impact?
- Develop and execute advanced analytics projects end to end, including data collection, preprocessing, model development, evaluation, and deployment.
- Develop predictive models and machine learning algorithms to extract actionable insights from large and complex datasets.
- Utilize statistical techniques and quantitative analysis to identify trends, patterns, and correlations within the data.
- Collaborate with stakeholders to understand business requirements and translate them into analytical solutions that drive value and impact.
- Stay abreast of the latest advancements in data science, machine learning, and Generative AI, and recommend innovative approaches to solve business challenges.

Have you got what it takes?
- Bachelor's degree in Computer Science, Statistics, Mathematics, or a related field; advanced degree (Master's or Ph.D.) preferred.
- Minimum of 4 years of hands-on experience in data science and machine learning, with at least 6 months of experience in Generative AI development.
- Proficiency in programming languages such as Python or R, as well as experience with data manipulation and analysis libraries (e.g., pandas, NumPy, scikit-learn, Hugging Face Transformers, LangChain).
- Strong understanding of machine learning techniques and algorithms, including supervised and unsupervised learning, regression, classification, clustering, and deep learning.
- Strong understanding of LLMs, NLP techniques, and evaluation methods for generative outputs.
- Solid foundation in prompt engineering for optimizing AI-generated outputs across different tasks and domains.
- Excellent problem-solving skills and the ability to work independently as well as collaboratively in a fast-paced environment.
- Strong communication and interpersonal skills, with the ability to effectively communicate complex technical concepts to diverse audiences.
- Proven leadership abilities, with experience in mentoring junior team members and leading cross-functional projects.
- Publications or contributions to the data science community, such as conference presentations, research papers, or open-source projects.

Preferred Qualifications:
- Experience working in industries such as finance and banking.
- Familiarity with cloud computing platforms (e.g., AWS, Azure, Google Cloud) and related services for building and deploying machine learning models.
- Knowledge of data visualization tools (e.g., Tableau, Power BI) for creating interactive dashboards and reports.
- Hands-on experience with vector databases (e.g., Pinecone) and embedding techniques.

What’s in it for you?
Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!

Enjoy NiCE-FLEX!
At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week.
Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7409
Reporting into: Tech Manager
Role Type: Individual Contributor
Posted 1 month ago
12.0 - 15.0 years
36 - 41 Lacs
Mumbai, Pune
Work from Office
Artificial Intelligence and Machine Learning Engineering – Associate Director, Corporate Technology
Posted 1 month ago
8.0 - 10.0 years
11 - 18 Lacs
Kanpur
Work from Office
Role summary, key responsibilities, required skills, and preferred qualifications: identical to the Senior Data Science Consultant posting above (Noida).
Posted 1 month ago
8.0 - 10.0 years
11 - 18 Lacs
Hyderabad
Work from Office
Role summary, key responsibilities, required skills, and preferred qualifications: identical to the Senior Data Science Consultant posting above (Noida).
Posted 1 month ago
8.0 - 10.0 years
11 - 18 Lacs
Bengaluru
Work from Office
Role summary, key responsibilities, required skills, and preferred qualifications: identical to the Senior Data Science Consultant posting above (Noida).
Posted 1 month ago
8.0 - 10.0 years
11 - 18 Lacs
Jaipur
Work from Office
Role summary, key responsibilities, required skills, and preferred qualifications: identical to the Senior Data Science Consultant posting above (Noida).
Posted 1 month ago
8.0 - 10.0 years
11 - 18 Lacs
Lucknow
Work from Office
Role summary, key responsibilities, required skills, and preferred qualifications: identical to the Senior Data Science Consultant posting above (Noida).
Posted 1 month ago
8.0 - 10.0 years
11 - 18 Lacs
Chennai
Work from Office
Role summary, key responsibilities, required skills, and preferred qualifications: identical to the Senior Data Science Consultant posting above (Noida).
Posted 1 month ago
10.0 - 20.0 years
30 - 45 Lacs
Pune, Bengaluru, Delhi / NCR
Hybrid
Architect scalable, high-performance Python applications and solutions. Ensure adherence to design principles, patterns, and coding standards in Python development. Proficiency in Python libraries (NumPy, Pandas, scikit-learn, TensorFlow, PyTorch) is required.
Posted 1 month ago
7.0 - 12.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Okta is seeking a highly skilled Full-Stack Engineer with deep expertise in AWS Bedrock, generative AI, and modern software development to join our fast-moving team at the intersection of developer experience, machine learning, and enterprise software. As part of the Okta Business Technology (BT) team, you will build cutting-edge tools that make AI development intuitive, collaborative, and scalable. If you're passionate about building next-generation AI applications and empowering developers through innovative platforms, this is the role for you.

Job Duties and Responsibilities
- Design and develop full-stack applications that integrate seamlessly with Amazon Bedrock AI agents.
- Build scalable, production-grade AI/ML solutions using AWS Bedrock and the AWS Agent Development Kit.
- Implement back-end services and APIs that interact with foundation models for tasks such as automated sourcing, content generation, and prompt orchestration.
- Create intuitive and performant front-end interfaces using Angular that connect with GenAI capabilities.
- Ensure seamless integration of LLMs and foundation models into the broader application architecture.
- Explore and rapidly prototype with the latest LLMs and GenAI tools to iterate on new capabilities and features.
- Build sophisticated AI workflows using knowledge bases, guardrails, and prompt chaining/flows.
- Deploy and maintain enterprise-ready GenAI applications at scale.
- Collaborate with business analysts to understand customer needs and use cases, and work with the team to design and develop POCs to test, implement, and support solutions.
- Foster strong relationships with teammates, customers, and vendors to facilitate effective communication and collaboration throughout the project lifecycle.
- Perform in-depth analysis of requirements, ensuring compatibility and adherence to established standards and best practices.

Required Skills:
- 7+ years of robust hands-on development and design experience.
- 3+ years of experience in one or more of the following areas: deep learning, LLMs, NLP, speech, conversational AI, AI infrastructure, and fine-tuning and optimization of PyTorch models.
- Software development experience in Python (must have) and one of Go, Rust, or C/C++.
- Experience with at least one LLM such as Llama, GPT, Claude, Falcon, Gemini, etc.
- Expertise in AWS Bedrock and the AWS Agent Development Kit is mandatory.
- Hands-on experience with Python libraries including boto3, NumPy, Pandas, TensorFlow or PyTorch, and Hugging Face Transformers.
- Solid understanding of the AWS ecosystem (e.g., CloudWatch, Step Functions, Kinesis, Lambda).
- Familiarity with the full software development lifecycle, including version control, CI/CD, code reviews, automated testing, and production monitoring.
- Knowledge of ERP, HR technology, and legal business processes is an advantage.

Education and Certifications
- Bachelor's degree in Computer Science or a related field, or equivalent practical experience.
- AWS certifications (e.g., AWS Certified Solutions Architect, Machine Learning Specialty) are a plus.
- Experience working with large-scale generative AI or LLM-based applications.
- Knowledge of secure application development and data privacy practices in AI/ML workloads.

"This role requires in-person onboarding and travel to our Bengaluru, IN office during the first week of employment."
Posted 1 month ago
0.0 - 2.0 years
2 - 4 Lacs
Bengaluru
Work from Office
Job posting may be removed earlier if the position is filled or if a sufficient number of applications are received.

Meet the Team
We are a dynamic and innovative team of Data Engineers, Data Architects, and Data Scientists based in Bangalore, India. Our mission is to harness the power of data to provide actionable insights that empower executives to make informed, data-driven decisions. By analyzing and interpreting complex datasets, we enable the organization to understand the health of the business and identify opportunities for growth and improvement.

Your Impact
We are seeking a highly experienced and skilled Senior Data Scientist to join our dynamic team. The ideal candidate will possess deep expertise in machine learning models, artificial intelligence (AI), generative AI, and data visualization. Proficiency in Tableau and other visualization tools is essential. This role requires hands-on experience with databases such as Snowflake and Teradata, as well as advanced knowledge of various data science and AI techniques. The successful candidate will play a pivotal role in driving data-driven decision-making and innovation within our organization.

Key Responsibilities
- Design, develop, and implement advanced machine learning models to solve complex business problems.
- Apply AI techniques and generative AI models to enhance data analysis and predictive capabilities.
- Utilize Tableau and other visualization tools to create insightful and actionable dashboards for stakeholders.
- Manage and optimize large datasets using Snowflake and Teradata databases.
- Collaborate with cross-functional teams to understand business needs and translate them into analytical solutions.
- Stay updated with the latest advancements in data science, machine learning, and AI technologies.
- Mentor and guide junior data scientists, fostering a culture of continuous learning and development.
- Communicate complex analytical concepts and results to non-technical stakeholders effectively.
Key Technologies & Skills:
- Machine Learning Models: supervised learning, unsupervised learning, reinforcement learning, deep learning, neural networks, decision trees, random forests, support vector machines (SVM), clustering algorithms, etc.
- AI Techniques: natural language processing (NLP), computer vision, generative adversarial networks (GANs), transfer learning, etc.
- Visualization Tools: Tableau, Power BI, Matplotlib, Seaborn, Plotly, etc.
- Databases: Snowflake, Teradata, SQL, NoSQL databases.
- Programming Languages: Python (essential), R, SQL.
- Python Libraries: TensorFlow, PyTorch, scikit-learn, pandas, NumPy, Keras, SciPy, etc.
- Data Processing: ETL processes, data warehousing, data lakes.
- Cloud Platforms: AWS, Azure, Google Cloud Platform.

Minimum Qualifications
- Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, Data Science, or a related field.
- Minimum of [X] years of experience as a Data Scientist or in a similar role.
- Proven track record in developing and deploying machine learning models and AI solutions.
- Strong expertise in data visualization tools, particularly Tableau.
- Extensive experience with Snowflake and Teradata databases.
- Excellent problem-solving skills and the ability to work independently and collaboratively.
- Exceptional communication skills with the ability to convey complex information clearly.

Preferred Qualifications
- Excellent communication and collaboration skills to work effectively in cross-functional teams.
- Ability to translate business requirements into technical solutions.
- Strong problem-solving skills and the ability to work with complex datasets.
- Experience in statistical analysis and machine learning techniques.
- Understanding of business domains such as sales, financials, marketing, and telemetry.
Posted 1 month ago
5.0 - 10.0 years
20 - 25 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Designation: Python + AWS
Experience: 5+ years
Work Location: Bangalore / Mumbai
Notice Period: immediate joiners / serving notice period

Job Description
Mandatory Skills:
- Python data structures: pandas, NumPy
- Data operations: DataFrames, dict, JSON, lists, tuples, strings
- OOPs & APIs (Flask/FastAPI)
- AWS services (IAM, EC2, Lambda, S3, DynamoDB, etc.)

Sincerely,
Sonia TS
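The listed data operations (dict, JSON, lists, tuples, strings) amount to routine Python round-trips. A minimal standard-library sketch, with all sample values invented for illustration:

```python
import json

# A dict of mixed built-in types, serialized to and from JSON.
record = {"name": "Sonia", "skills": ["Python", "AWS"], "years": 5}

payload = json.dumps(record)                # dict -> JSON string
restored = json.loads(payload)              # JSON string -> dict

skills_csv = ", ".join(restored["skills"])  # list -> comma-joined string
skills_frozen = tuple(restored["skills"])   # list -> immutable tuple
```

The same dict-of-lists shape feeds directly into `pandas.DataFrame(record_list)` when tabular operations are needed.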
Posted 1 month ago
4.0 - 9.0 years
11 - 15 Lacs
Pune
Work from Office
Project description
We are looking for an experienced technical developer to work for one of our clients in the banking industry. The project goal is to maintain and develop solutions focused on a digital onboarding platform.

Responsibilities
As a Python Developer, you will be responsible for designing, developing, and maintaining Python-based applications and services. You will collaborate with cross-functional teams to deliver high-quality software solutions that meet the needs of our business divisions.

Skills

Must have
- Proven experience of more than 4 years as a Python Developer or in a similar role.
- Proficiency in Python and its frameworks, such as Django or Flask.
- Strong knowledge of back-end technologies and RESTful APIs.
- Experience with Pandas, and knowledge of AI/ML libraries like TensorFlow, PyTorch, etc.
- Experience working with Kubernetes/OpenShift, Docker, and cloud-native frameworks.
- Experience with database technologies such as SQL, NoSQL, and ORM frameworks.
- Familiarity with version control systems like Git.
- Familiarity with Linux and Windows operating systems.
- Knowledge of shell scripting (Bash for Linux, PowerShell for Windows) is advantageous.
- Understanding of Agile methodologies and DevOps practices.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
- Ability to work independently and as part of a team.
- A degree in Computer Science, Engineering, or a related field is preferred.

Nice to have
- Experience in an Agile framework.

Other Languages
- English: C1 Advanced

Seniority
Senior
Posted 1 month ago
5.0 - 9.0 years
9 - 13 Lacs
Gurugram
Work from Office
At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our clients' challenges of today and tomorrow. Informed and validated by science and data. Superpowered by creativity and design. All underpinned by technology created with purpose.
Your role
As a Senior Data Scientist, you are expected to develop and implement Artificial Intelligence based solutions across various disciplines for the Intelligent Industry vertical of Capgemini Invent. You are expected to work as an individual contributor or along with a team to help design and develop ML/NLP models as per the requirement. You will work closely with the Product Owner, Systems Architect and other key stakeholders right from conceptualization till the implementation of the project. You should take ownership while understanding the client requirement, the data to be used, security & privacy needs and the infrastructure to be used for development and implementation.
The candidate will be responsible for executing data science projects independently to deliver business outcomes, and is expected to demonstrate domain expertise, develop and execute program plans, and proactively solicit feedback from stakeholders to identify improvement actions. This role requires a strong technical background, excellent problem-solving skills, and the ability to work collaboratively with stakeholders from different functional and business teams. The role also requires the candidate to collaborate on ML asset creation and to be eager to learn and impart trainings to fellow data science professionals. We expect thought leadership from the candidate, especially on proposing to build an ML/NLP asset based on expected industry requirements. Experience in building industry-specific (e.g. Manufacturing, R&D, Supply Chain, Life Sciences, etc.), production-ready AI models using microservices and web services is a plus.
Technology stack:
- Programming languages: Python (NumPy, SciPy, Pandas, Matplotlib, Seaborn)
- Databases: RDBMS (MySQL, Oracle, etc.), NoSQL stores (HBase, Cassandra, etc.)
- ML/DL frameworks: Scikit-learn, TensorFlow (Keras), PyTorch; big data ML frameworks: Spark (Spark ML, GraphX), H2O
- Cloud: Azure/AWS/GCP
Your Profile
- Predictive and prescriptive modelling using statistical and machine learning algorithms including but not limited to Time Series, Regression, Trees, Ensembles, and Neural Nets (deep & shallow: CNN, LSTM, Transformers, etc.).
- Experience with open-source OCR engines like Tesseract, speech recognition, computer vision, face recognition, emotion detection, etc. is a plus.
- Unsupervised learning: Market Basket Analysis, Collaborative Filtering, Dimensionality Reduction; good understanding of common matrix decomposition approaches like SVD.
- Various clustering approaches: hierarchical, centroid-based, density-based, distribution-based, and graph-based clustering like Spectral.
- NLP: Information Extraction, Similarity Matching, Sentiment Analysis, Text Clustering, Semantic Analysis, Document Summarization, Context Mapping/Understanding, Intent Classification, Word Embeddings, Vector Space Models; experience with libraries like NLTK, spaCy, and Stanford CoreNLP is a plus.
- Usage of Transformers for NLP, experience with LLMs like ChatGPT and Llama, usage of RAG with frameworks like LangChain and LangGraph (and associated vector stores), and building agentic AI applications.
- Model deployment: ML pipeline formation, data security and scrutiny checks, and MLOps for productionizing a built model on-premises and on cloud.
Required Qualifications
Master's degree in a quantitative field such as Mathematics, Statistics, Machine Learning, Computer Science or Engineering, or a bachelor's degree with relevant experience. Good experience in programming with languages such as Python/Java/Scala and SQL, and experience with data visualization tools like Tableau or Power BI.
Preferred Experience
- Experienced in Agile ways of working; managing team effort and tracking through JIRA.
- Experience in proposal, RFP, RFQ and pitch creation and delivery to large forums.
- Experience in POC, MVP, PoV and asset creation with innovative use cases.
- Experience working in a consulting environment is highly desirable.
- High-impact client communication.
The job may also entail sitting as well as working at a computer for extended periods of time. Candidates should be able to effectively communicate by telephone, email, and face to face.
What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.
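The profile above names matrix decomposition approaches such as SVD for dimensionality reduction. A compact NumPy sketch of truncated SVD on synthetic data (the matrix sizes and rank are arbitrary choices for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))  # synthetic data: 100 samples, 20 features

# Thin SVD: X = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Truncated SVD: project onto the top-k right singular vectors
k = 5
X_reduced = X @ Vt[:k].T        # shape (100, 5)

# Rank-k approximation of X; the error shrinks as k grows
X_approx = (U[:, :k] * s[:k]) @ Vt[:k]
```

This is the same decomposition that underlies PCA and many collaborative-filtering approaches mentioned above.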
Posted 1 month ago
3.0 - 7.0 years
14 - 18 Lacs
Bengaluru
Work from Office
As an Associate Data Scientist at IBM, you will work to solve business problems using leading-edge and open-source tools such as Python, R, and TensorFlow, combined with IBM tools and our AI application suites. You will prepare, analyze, and understand data to deliver insight, predict emerging trends, and provide recommendations to stakeholders.
In your role, you may be responsible for:
- Implementing and validating predictive and prescriptive models, and creating and maintaining statistical models with a focus on big data, incorporating machine learning techniques in your projects
- Writing programs to cleanse and integrate data in an efficient and reusable manner
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors
- Communicating with internal and external clients to understand and define business needs and appropriate modelling techniques to provide analytical solutions
- Evaluating modelling results and communicating the results to technical and non-technical audiences
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
- Proof of Concept (POC) Development: Develop POCs to validate and showcase the feasibility and effectiveness of the proposed AI solutions. Collaborate with development teams to implement and iterate on POCs, ensuring alignment with customer requirements and expectations.
- Help in showcasing the ability of a Gen AI code assistant to refactor/rewrite and document code from one language to another, particularly COBOL to Java, through rapid prototypes/POCs.
- Document solution architectures, design decisions, implementation details, and lessons learned. Create technical documentation, white papers, and best practice guides.
Preferred technical and professional experience
- Strong programming skills, with proficiency in Python and experience with AI frameworks such as TensorFlow, PyTorch, Keras or Hugging Face.
- Understanding of the usage of libraries such as Scikit-learn, Pandas, Matplotlib, etc.
- Familiarity with cloud platforms.
- Experience and working knowledge of COBOL & Java would be preferred.
Posted 1 month ago
3.0 - 7.0 years
14 - 18 Lacs
Bengaluru
Work from Office
An AI Data Scientist at IBM is not just a job title - it's a mindset. You'll leverage the watsonx, AWS SageMaker, and Azure OpenAI platforms to co-create AI value with clients, focusing on technology patterns to enhance repeatability and delight clients.
We are seeking an experienced and innovative AI Data Scientist specialized in foundation models and large language models. In this role, you will be responsible for architecting and delivering AI solutions using cutting-edge technologies, with a strong focus on foundation models and large language models. You will work closely with customers, product managers, and development teams to understand business requirements and design custom AI solutions that address complex challenges. Experience with tools like GitHub Copilot, Amazon CodeWhisperer, etc. is desirable. Success is our passion, and your accomplishments will reflect this, driving your career forward, propelling your team to success, and helping our clients to thrive.
Day-to-Day Duties:
- Proof of Concept (POC) Development: Develop POCs to validate and showcase the feasibility and effectiveness of the proposed AI solutions. Collaborate with development teams to implement and iterate on POCs, ensuring alignment with customer requirements and expectations. Help in showcasing the ability of a Gen AI code assistant to refactor/rewrite and document code from one language to another, particularly COBOL to Java, through rapid prototypes/POCs.
- Documentation and Knowledge Sharing: Document solution architectures, design decisions, implementation details, and lessons learned. Create technical documentation, white papers, and best practice guides. Contribute to internal knowledge-sharing initiatives and mentor new team members.
- Industry Trends and Innovation: Stay up to date with the latest trends and advancements in AI, foundation models, and large language models. Evaluate emerging technologies, tools, and frameworks to assess their potential impact on solution design and implementation.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
- Strong programming skills, with proficiency in Python and experience with AI frameworks such as TensorFlow, PyTorch, Keras or Hugging Face.
- Understanding of the usage of libraries such as Scikit-learn, Pandas, Matplotlib, etc.
- Familiarity with cloud platforms (e.g. Kubernetes, AWS, Azure, GCP) and related services is a plus.
- Experience and working knowledge of COBOL & Java would be preferred.
- Experience in code generation, code matching & code translation leveraging LLM capabilities would be a big plus (e.g. Amazon CodeWhisperer, GitHub Copilot, etc.).
- Soft skills: Excellent interpersonal and communication skills. Engage with stakeholders for analysis and implementation. Commitment to continuous learning and staying updated with advancements in the field of AI.
- Growth mindset: Demonstrate a growth mindset to understand clients' business processes and challenges.
- Experience in Python and PySpark will be an added advantage.
Preferred technical and professional experience
- Experience: Proven experience in designing and delivering AI solutions, with a focus on foundation models, large language models, exposure to open source, or similar technologies.
- Experience in natural language processing (NLP) and text analytics is highly desirable.
- Understanding of machine learning and deep learning algorithms.
- Strong track record in scientific publications or open-source communities.
- Experience in the full AI project lifecycle, from research and prototyping to deployment in production environments.
Posted 1 month ago
2.0 - 5.0 years
14 - 17 Lacs
Mumbai
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, and provide regular support/guidance to project teams on complex coding, issue resolution and execution.
Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to maximize value and build creative solutions.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
- Experience with Apache Spark (PySpark): In-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
- Big Data Technologies: Familiarity with Hadoop, HDFS, Kafka, and other big data tools.
- Data Engineering Skills: Strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
- Strong proficiency in Python: Expertise in Python programming with a focus on data processing and manipulation.
- Data Processing Frameworks: Knowledge of data processing libraries such as Pandas and NumPy.
- SQL Proficiency: Experience writing optimized SQL queries for large-scale data analysis and transformation.
- Cloud Platforms: Experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.
Preferred technical and professional experience
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products and platform- and customer-facing systems.
Posted 1 month ago
5.0 - 7.0 years
14 - 18 Lacs
Bengaluru
Work from Office
Work with the broader team to build, analyze and improve AI solutions. You will also work with our software developers in consuming different enterprise applications.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
- 5-7 years of experience.
- Sound knowledge of Python and of how to use ML-related services.
- Proficient in Python with a focus on data analytics packages.
- Strategy: Analyse large, complex data sets and provide actionable insights to inform business decisions.
- Strategy: Design and implement data models that help in identifying patterns and trends.
- Collaboration: Work with data engineers to optimize and maintain data pipelines.
- Perform quantitative analyses that translate data into actionable insights and provide analytical, data-driven decision-making.
- Identify and recommend process improvements to enhance the efficiency of the data platform.
- Develop and maintain data models, algorithms, and statistical models.
Preferred technical and professional experience
- Experience with conversation analytics.
- Experience with cloud technologies.
- Experience with data exploration tools such as Tableau.
Posted 1 month ago
2.0 - 5.0 years
14 - 17 Lacs
Pune
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, and provide regular support/guidance to project teams on complex coding, issue resolution and execution.
Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to maximize value and build creative solutions.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
- Experience with Apache Spark (PySpark): In-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
- Big Data Technologies: Familiarity with Hadoop, HDFS, Kafka, and other big data tools.
- Data Engineering Skills: Strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
- Strong proficiency in Python: Expertise in Python programming with a focus on data processing and manipulation.
- Data Processing Frameworks: Knowledge of data processing libraries such as Pandas and NumPy.
- SQL Proficiency: Experience writing optimized SQL queries for large-scale data analysis and transformation.
- Cloud Platforms: Experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.
Preferred technical and professional experience
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products and platform- and customer-facing systems.
Posted 1 month ago
2.0 - 5.0 years
14 - 17 Lacs
Navi Mumbai
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, and provide regular support/guidance to project teams on complex coding, issue resolution and execution.
Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to maximize value and build creative solutions.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
- Experience with Apache Spark (PySpark): In-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
- Big Data Technologies: Familiarity with Hadoop, HDFS, Kafka, and other big data tools.
- Data Engineering Skills: Strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
- Strong proficiency in Python: Expertise in Python programming with a focus on data processing and manipulation.
- Data Processing Frameworks: Knowledge of data processing libraries such as Pandas and NumPy.
- SQL Proficiency: Experience writing optimized SQL queries for large-scale data analysis and transformation.
- Cloud Platforms: Experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.
Preferred technical and professional experience
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products and platform- and customer-facing systems.
Posted 1 month ago
2.0 - 5.0 years
14 - 17 Lacs
Bengaluru
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, and provide regular support/guidance to project teams on complex coding, issue resolution and execution.
Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to maximize value and build creative solutions.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
- Experience with Apache Spark (PySpark): In-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
- Big Data Technologies: Familiarity with Hadoop, HDFS, Kafka, and other big data tools.
- Data Engineering Skills: Strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
- Strong proficiency in Python: Expertise in Python programming with a focus on data processing and manipulation.
- Data Processing Frameworks: Knowledge of data processing libraries such as Pandas and NumPy.
- SQL Proficiency: Experience writing optimized SQL queries for large-scale data analysis and transformation.
- Cloud Platforms: Experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.
Preferred technical and professional experience
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products and platform- and customer-facing systems.
Posted 1 month ago
4.0 - 9.0 years
14 - 18 Lacs
Bengaluru
Work from Office
Job Title: Retail Specialized Data Scientist - Level 9, S&C GN Data & AI
Management Level: 09 - Consultant
Location: Bangalore / Gurgaon / Mumbai / Chennai / Pune / Hyderabad / Kolkata
Must have skills:
- A solid understanding of retail industry dynamics, including key performance indicators (KPIs) such as sales trends, customer segmentation, inventory turnover, and promotions.
- Strong ability to communicate complex data insights to non-technical stakeholders, including senior management, marketing, and operational teams.
- Meticulous in ensuring data quality, accuracy, and consistency when handling large, complex datasets.
- Gather and clean data from various retail sources, such as sales transactions, customer interactions, inventory management, website traffic, and marketing campaigns.
- Strong proficiency in Python for data manipulation, statistical analysis, and machine learning (libraries like Pandas, NumPy, Scikit-learn).
- Expertise in supervised and unsupervised learning algorithms.
- Use advanced analytics to optimize pricing strategies based on market demand, competitor pricing, and customer price sensitivity.
Good to have skills:
- Familiarity with big data processing platforms like Apache Spark, Hadoop, or cloud-based platforms such as AWS or Google Cloud for large-scale data processing.
- Experience with ETL (Extract, Transform, Load) processes and tools like Apache Airflow to automate data workflows.
- Familiarity with designing scalable and efficient data pipelines and architecture.
- Experience with tools like Tableau, Power BI, Matplotlib, and Seaborn to create meaningful visualizations that present data insights clearly.
Job Summary:
The Retail Specialized Data Scientist will play a pivotal role in utilizing advanced analytics, machine learning, and statistical modeling techniques to help our retail business make data-driven decisions. This individual will work closely with teams across marketing, product management, supply chain, and customer insights to drive business strategies and innovations. The ideal candidate should have experience in retail analytics and the ability to translate data into actionable insights.
Roles & Responsibilities:
- Leverage retail knowledge: Utilize your deep understanding of the retail industry (merchandising, customer behavior, product lifecycle) to design AI solutions that address critical retail business needs.
- Gather and clean data from various retail sources, such as sales transactions, customer interactions, inventory management, website traffic, and marketing campaigns.
- Apply machine learning algorithms, such as classification, clustering, regression, and deep learning, to enhance predictive models.
- Use AI-driven techniques for personalization, demand forecasting, and fraud detection.
- Use advanced statistical methods to help optimize existing use cases and build new products to serve new challenges and use cases.
- Stay updated on the latest trends in data science and retail technology.
- Collaborate with executives, product managers, and marketing teams to translate insights into business actions.
Professional & Technical Skills:
- Strong analytical and statistical skills.
- Expertise in machine learning and AI.
- Experience with retail-specific datasets and KPIs.
- Proficiency in data visualization and reporting tools.
- Ability to work with large datasets and complex data structures.
- Strong communication skills to interact with both technical and non-technical stakeholders.
- A solid understanding of the retail business and consumer behavior.
- Programming languages: Python, R, SQL, Scala
- Data analysis tools: Pandas, NumPy, Scikit-learn, TensorFlow, Keras
- Visualization tools: Tableau, Power BI, Matplotlib, Seaborn
- Big data technologies: Hadoop, Spark, AWS, Google Cloud
- Databases: SQL, NoSQL (MongoDB, Cassandra)
Additional Information:
Experience: Minimum 3 year(s) of experience is required
Educational Qualification: Bachelor's or Master's degree in Data Science, Statistics, Computer Science, Mathematics, or a related field.
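Customer segmentation, listed among the must-have skills, is often prototyped with Scikit-learn's KMeans. A small sketch on invented spend/visit-frequency features (the two synthetic customer groups are fabricated for the example):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Synthetic customers: columns are [annual spend, visits per month]
low_value = rng.normal([200, 1], [50, 0.5], size=(50, 2))
high_value = rng.normal([2000, 8], [300, 2], size=(50, 2))
X = np.vstack([low_value, high_value])

# Scale features so spend does not dominate the distance metric
X_scaled = StandardScaler().fit_transform(X)

# Two segments, matching the two synthetic groups
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_scaled)
labels = kmeans.labels_
```

In practice the number of clusters would be chosen from business context or diagnostics such as silhouette scores, and the features would come from real transaction data.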
Posted 1 month ago
2.0 - 3.0 years
5 - 9 Lacs
Kochi
Work from Office
Job Title -
Management Level:
Location: Kochi, Coimbatore, Trivandrum
Must have skills: Python/Scala, PySpark/PyTorch
Good to have skills: Redshift
Experience: 3.5 - 5 years of experience is required
Educational Qualification: Graduation
Job Summary
You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries.
Roles and Responsibilities
- Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals.
- Solving complex data problems to deliver insights that help our business achieve its goals.
- Sourcing data (structured and unstructured) from various touchpoints, and formatting and organizing it into an analyzable format.
- Creating data products for analytics team members to improve productivity.
- Calling AI services like vision, translation, etc. to generate an outcome that can be used in further steps along the pipeline.
- Fostering a culture of sharing, re-use, design and operational efficiency of data and analytical solutions.
- Preparing data to create a unified database and building tracking solutions ensuring data quality.
- Creating production-grade analytical assets deployed using the guiding principles of CI/CD.
Professional and Technical Skills
- Expert in at least two of Python, Scala, PySpark, PyTorch, and JavaScript.
- Extensive experience in data analysis (big data - Apache Spark environments), data libraries (e.g. Pandas, SciPy, TensorFlow, Keras, etc.), and SQL, with 2-3 years of hands-on experience working on these technologies.
- Experience in one of the many BI tools such as Tableau, Power BI, or Looker.
- Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs.
- Worked extensively in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and Snowflake Cloud Data Warehouse.
Additional Information
- Experience working in cloud data warehouses like Redshift or Synapse.
- Certification in any one of the following or equivalent:
  - AWS: AWS Certified Data Analytics - Specialty
  - Azure: Microsoft Certified Azure Data Scientist Associate
  - Snowflake: SnowPro Core - Data Engineer
  - Databricks Data Engineering
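The pipeline responsibilities above follow the classic extract-transform-load shape. A minimal pandas sketch of one ETL step; the CSV sample, column names, and the in-memory "load" target are all invented for illustration:

```python
import io

import pandas as pd

# Extract: raw CSV as it might arrive from a source system (invented sample)
raw = io.StringIO("order_id,amount,country\n1,100,in\n2,250,IN\n3,,us\n")
df = pd.read_csv(raw)

# Transform: normalize categories and handle missing values
df["country"] = df["country"].str.upper()
df["amount"] = df["amount"].fillna(0)

# Load: here the target is an in-memory aggregate; in production it would
# be a warehouse table or cloud storage (ADLS, S3, Snowflake, etc.)
summary = df.groupby("country")["amount"].sum()
```

The same three stages map directly onto the orchestrated pipelines (ADF, Glue, Databricks) named in the skills list, just at distributed scale.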
Posted 1 month ago