0 years
0 Lacs
India
Remote
Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Computer Vision Engineers for a unique opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for top-tier Computer Vision (CV) Engineers with expertise in image/video processing, object detection, and generative AI. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
- Pay above market standards.
- Contract-based roles with project timelines from 2-12 months, or freelancing.
- Membership in an elite community of professionals who solve complex AI challenges.
- Work location: remote (highly likely), onsite at a client location, or Deccan AI's office in Hyderabad or Bangalore.

Responsibilities:
- Develop and optimize computer vision models for tasks like object detection, image segmentation, and multi-object tracking.
- Lead research on novel techniques using deep learning frameworks (TensorFlow, PyTorch, JAX).
- Build efficient computer vision pipelines and optimize models for real-time performance.
- Deploy models using microservices (Docker, Kubernetes) and cloud platforms (AWS, GCP, Azure).
- Lead MLOps practices, including CI/CD pipelines, model versioning, and training optimizations.

Required Skills:
- Expert in Python, OpenCV, NumPy, and deep learning architectures (e.g., ViTs, YOLO, Mask R-CNN).
- Strong grounding in computer vision fundamentals, including feature extraction and multi-view geometry, with experience deploying and optimizing models with TensorRT, OpenVINO, and cloud/edge solutions.
- Proficient with MLOps tools (MLflow, DVC), CI/CD, and distributed training frameworks.
- Experience in 3D vision, AR/VR, or LiDAR processing is a plus.

Nice to Have:
- Experience with multi-camera vision systems, LiDAR, sensor fusion, and reinforcement learning for vision tasks.
- Exposure to generative AI models (e.g., Stable Diffusion, GANs) and large-scale image processing (Apache Spark, Dask).
- Research publications or patents in computer vision and deep learning.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted. As soon as you qualify all the screening rounds (assessments, interviews), you will be added to our Expert Community!
4. Profile matching and project allocation: be patient while we align your skills and preferences with the available projects.

Skip the noise. Focus on opportunities built for you!
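As a refresher on the object-detection fundamentals this role calls for, here is a minimal intersection-over-union (IoU) computation for axis-aligned boxes, the metric used to score detections from models like YOLO or Mask R-CNN. This is an illustrative sketch in plain Python, not tied to any specific framework:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Detection benchmarks typically count a prediction as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.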
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Data Scientists for a unique opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for top-tier Data Scientists with expertise in predictive modeling, statistical analysis, and A/B testing. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
- Pay above market standards.
- Contract-based roles with project timelines from 2-12 months, or freelancing.
- Membership in an elite community of professionals who solve complex AI challenges.
- Work location: remote (highly likely), onsite at a client location, or Deccan AI's office in Hyderabad or Bangalore.

Responsibilities:
- Lead the design, development, and deployment of scalable data science solutions, optimizing large-scale data pipelines in collaboration with engineering teams.
- Architect advanced machine learning models (deep learning, RL, ensembles) and apply statistical analysis, predictive modeling, and optimization techniques to derive actionable business insights.
- Own the full lifecycle of data science projects, from data acquisition, preprocessing, and exploratory data analysis (EDA) to model development, deployment, and monitoring.
- Implement MLOps workflows (model training, deployment, versioning, monitoring) and conduct A/B testing to validate models.

Required Skills:
- Expert in Python, data science libraries (Pandas, NumPy, Scikit-learn), and R, with extensive experience in machine learning (XGBoost, PyTorch, TensorFlow) and statistical modeling.
- Proficient in building scalable data pipelines (Apache Spark, Dask) and cloud platforms (AWS, GCP, Azure).
- Expertise in MLOps (Docker, Kubernetes, MLflow, CI/CD), along with strong data visualization skills (Tableau, Plotly Dash) and business acumen.

Nice to Have:
- Experience with NLP, computer vision, recommendation systems, or real-time data processing (Kafka, Flink).
- Knowledge of data privacy regulations (GDPR, CCPA) and ethical AI practices.
- Contributions to open-source projects or published research.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted. As soon as you qualify all the screening rounds (assessments, interviews), you will be added to our Expert Community!
4. Profile matching and project allocation: be patient while we align your skills and preferences with the available projects.

Skip the noise. Focus on opportunities built for you!
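Since A/B testing is a core requirement for this role, here is a minimal two-proportion z-test sketch using only the standard library. In practice you would reach for scipy or statsmodels; this is purely illustrative:

```python
from math import sqrt, erf

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for an A/B experiment.

    conv_*: conversion counts, n_*: sample sizes.
    Returns (z, p_value) under the pooled-variance normal approximation.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value
```

For example, 100/1000 conversions in control versus 150/1000 in treatment yields a p-value well below 0.01, so the lift would be judged significant at conventional thresholds.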
Posted 2 weeks ago
5.0 - 10.0 years
15 - 20 Lacs
Bengaluru
Work from Office
Required skills and qualifications
- 5+ years of experience as a Python Developer with a strong portfolio of projects.
- Bachelor's degree in Computer Science, Software Engineering, or a related field.
- In-depth understanding of Python software development stacks, ecosystems, frameworks, and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn, and PyTorch.
- MATLAB experience is a must.
- Experience with front-end development using HTML, CSS, and JavaScript.
- Familiarity with database technologies such as SQL and NoSQL.
- Excellent problem-solving ability with solid communication and collaboration skills.

Preferred skills and qualifications
- Experience with popular Python frameworks such as Django, Flask, or Pyramid.
- Knowledge of data science and machine learning concepts and tools.
- A working understanding of cloud platforms such as AWS, Google Cloud, or Azure.
- Contributions to open-source Python projects or active involvement in the Python community.
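For a sense of the day-to-day Pandas work a role like this involves, here is a minimal aggregation sketch over hypothetical sales data (the column names and figures are invented for illustration):

```python
import pandas as pd

# Hypothetical sales records
df = pd.DataFrame({
    "region": ["north", "south", "north", "south"],
    "units": [10, 7, 3, 5],
    "price": [2.0, 3.0, 2.0, 3.0],
})

# Derive a revenue column, then aggregate per region
df["revenue"] = df["units"] * df["price"]
summary = df.groupby("region", as_index=False)["revenue"].sum()
```

`groupby(..., as_index=False)` keeps `region` as an ordinary column in the result rather than an index, which is often convenient for downstream joins or display.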
Posted 2 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Role: Full-time, permanent
Location: Chennai (hybrid: 2 days work from office, 3 days work from home)
Preferably immediate joiners.

Key Skills:
- Strong GCP Cloud experience
- Proficiency in AI tools used to prepare and automate data pipelines and ingestion
- Apache Spark, especially MLlib
- PySpark and Dask for distributed data processing
- Pandas and NumPy for local data wrangling
- Apache Airflow to schedule and orchestrate ETL/ELT jobs
- Google Cloud (BigQuery, Vertex AI)
- Python (the most popular language for AI and data tasks)

About us
OneMagnify is a global performance marketing company that blends brand strategy, data analytics, and cutting-edge technology to drive measurable business results. With a strong focus on innovation and collaboration, OneMagnify partners with clients to create personalized marketing solutions that enhance customer engagement and deliver real-time impact. The company is also known for its award-winning workplace culture, emphasizing employee growth and inclusion.

🌟 Why Join OneMagnify?
- Top Workplace: Consistently recognized in the U.S. and India for a great work culture.
- Cutting-Edge Tech: Work with modern tools like Databricks, Snowflake, Azure, and MLflow.
- Growth-Focused: Strong career paths, upskilling, and learning opportunities.
- Global Impact: Collaborate across teams on high-impact, data-driven projects.
- Great Benefits: Competitive salary, insurance, paid holidays, and more.
- Meaningful Work: Solve real-world business challenges with innovative solutions.
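The pipeline skills listed above boil down to composing independent, idempotent steps that an orchestrator like Airflow can schedule and retry. A minimal, framework-free sketch of that extract-transform-load shape (all names and data are hypothetical):

```python
def extract():
    # Stand-in for reading raw rows from BigQuery or a GCS bucket
    return [{"user": "a", "spend": "12.5"}, {"user": "b", "spend": "7.5"}]

def transform(rows):
    # Cast types; real jobs would also validate and derive fields
    return [{"user": r["user"], "spend": float(r["spend"])} for r in rows]

def load(rows, sink):
    # Stand-in for writing to a warehouse table; returns the row count
    sink.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
```

Because each step takes explicit inputs and returns explicit outputs, a scheduler can run them as separate tasks, retry a failed step in isolation, and reason about dependencies between them.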
Posted 2 weeks ago
0 years
2 - 9 Lacs
Chennai
On-site
- Comfort following Python project management best practices (use of setup.py, logging, pytest, relative module imports, Sphinx docs, etc.)
- Familiarity with GitHub (clone, fetch, pull/push, raising issues and PRs, etc.)
- High familiarity with DL theory and practice in NLP applications
- Comfort coding with Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, scikit-learn, NumPy, and Pandas
- Comfort using two or more open-source NLP modules such as spaCy, TorchText, fastai.text, farm-haystack, and others
- Knowledge of fundamental text data processing (use of regex, token/word analysis, spelling correction and noise reduction in text, segmenting noisy unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.)
- Has implemented real-world BERT or other transformer fine-tuned models (sequence classification, NER, or QA) from data preparation and model creation through inference and deployment
- Use of GCP services like BigQuery, Cloud Functions, Cloud Run, Cloud Build, and Vertex AI
- Good working knowledge of other open-source packages to benchmark and derive summaries
- Experience using GPU/CPU on cloud and on-prem infrastructure
- Skills to leverage cloud platforms for Data Engineering, Big Data, and ML needs
- Use of Docker (experience with experimental Docker features, docker-compose, etc.)
- Familiarity with orchestration tools such as Airflow and Kubeflow
- Experience in CI/CD and infrastructure-as-code tools like Terraform
- Kubernetes or any other container orchestration tool, with experience in Helm, Argo Workflows, etc.
- Ability to develop APIs with compliant, ethical, secure, and safe AI tooling
- Good UI skills to visualize and build better applications using Gradio, Dash, Streamlit, React, Django, etc.
- Deeper understanding of JavaScript, CSS, Angular, HTML, etc. is a plus

Education: Bachelor's or Master's Degree in Computer Science, Engineering, Maths, or Science. Completion of modern NLP/LLM courses or participation in open competitions is also welcomed.

Responsibilities:
- Design NLP/LLM/GenAI applications and products following robust coding practices
- Explore SoTA models and techniques so they can be applied to automotive industry use cases
- Conduct ML experiments to train and infer models; if needed, build models that abide by memory and latency restrictions
- Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes
- Showcase NLP/LLM/GenAI applications to users in the best way possible through web frameworks (Dash, Plotly, Streamlit, etc.)
- Converge multiple bots into super apps using LLMs with multimodal capabilities
- Develop agentic workflows using AutoGen, Agent Builder, and LangGraph
- Build modular AI/ML products that can be consumed at scale

Data Engineering:
- Skills to perform distributed computing (specifically parallelism and scalability in data processing, modeling, and inference through Spark, Dask, or RAPIDS cuDF)
- Ability to build Python-based APIs (e.g., using FastAPI, Flask, or Django)
- Experience with Elasticsearch, Apache Solr, and vector databases is a plus
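The text-processing fundamentals listed above (regex, noise reduction, tokenization) can be illustrated with a tiny cleaning function; the rules below are purely illustrative, not a production pipeline:

```python
import re

def clean_text(text):
    """Basic noise reduction: strip URLs, collapse repeated characters,
    and normalize whitespace and case (illustrative rules only)."""
    text = re.sub(r"https?://\S+", " ", text)         # drop URLs
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)        # "sooooo" -> "soo"
    text = re.sub(r"\s+", " ", text).strip().lower()  # normalize spacing/case
    return text

tokens = clean_text("Sooooo  GOOD!!! see https://example.com now").split()
```

Real systems layer many more rules (or learned models) on top, but the pattern of composing small, testable regex transforms is the same.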
Posted 2 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
- Comfort following Python project management best practices (use of setup.py, logging, pytest, relative module imports, Sphinx docs, etc.)
- Familiarity with GitHub (clone, fetch, pull/push, raising issues and PRs, etc.)
- High familiarity with DL theory and practice in NLP applications
- Comfort coding with Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, scikit-learn, NumPy, and Pandas
- Comfort using two or more open-source NLP modules such as spaCy, TorchText, fastai.text, farm-haystack, and others
- Knowledge of fundamental text data processing (use of regex, token/word analysis, spelling correction and noise reduction in text, segmenting noisy unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.)
- Has implemented real-world BERT or other transformer fine-tuned models (sequence classification, NER, or QA) from data preparation and model creation through inference and deployment
- Use of GCP services like BigQuery, Cloud Functions, Cloud Run, Cloud Build, and Vertex AI
- Good working knowledge of other open-source packages to benchmark and derive summaries
- Experience using GPU/CPU on cloud and on-prem infrastructure
- Skills to leverage cloud platforms for Data Engineering, Big Data, and ML needs
- Use of Docker (experience with experimental Docker features, docker-compose, etc.)
- Familiarity with orchestration tools such as Airflow and Kubeflow
- Experience in CI/CD and infrastructure-as-code tools like Terraform
- Kubernetes or any other container orchestration tool, with experience in Helm, Argo Workflows, etc.
- Ability to develop APIs with compliant, ethical, secure, and safe AI tooling
- Good UI skills to visualize and build better applications using Gradio, Dash, Streamlit, React, Django, etc.
- Deeper understanding of JavaScript, CSS, Angular, HTML, etc. is a plus

Responsibilities
- Design NLP/LLM/GenAI applications and products following robust coding practices
- Explore SoTA models and techniques so they can be applied to automotive industry use cases
- Conduct ML experiments to train and infer models; if needed, build models that abide by memory and latency restrictions
- Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes
- Showcase NLP/LLM/GenAI applications to users in the best way possible through web frameworks (Dash, Plotly, Streamlit, etc.)
- Converge multiple bots into super apps using LLMs with multimodal capabilities
- Develop agentic workflows using AutoGen, Agent Builder, and LangGraph
- Build modular AI/ML products that can be consumed at scale

Data Engineering:
- Skills to perform distributed computing (specifically parallelism and scalability in data processing, modeling, and inference through Spark, Dask, or RAPIDS cuDF)
- Ability to build Python-based APIs (e.g., using FastAPI, Flask, or Django)
- Experience with Elasticsearch, Apache Solr, and vector databases is a plus

Qualifications
Education: Bachelor's or Master's Degree in Computer Science, Engineering, Maths, or Science. Completion of modern NLP/LLM courses or participation in open competitions is also welcomed.
Posted 2 weeks ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
As a Senior Data Science Engineer in IOL's Data Team, you will lead the development of advanced predictive models to power a smart caching layer for our B2B hospitality marketplace. Handling an unprecedented scale of data (2 billion searches, 1 billion price verifications, and 100 million bookings daily), you will design machine learning solutions to predict search patterns and prefetch data from 3P suppliers, reducing their infrastructure load and improving system reliability. This role demands deep expertise in big data, machine learning, and distributed systems, as well as the ability to architect scalable, data-driven solutions in a fast-paced environment.

The Challenge
IOL operates a high-traffic B2B marketplace that matches hotel room supply with demand. Our platform processes:
- Searches: 2 billion daily queries for hotel prices based on hotel ID, room type, check-in date, length of stay, and party size.
- Price Verifications: 1 billion daily checks to confirm pricing.
- Bookings: 100 million daily bookings.

Key Responsibilities
- Predictive Modeling: Design and implement machine learning models to predict high-demand search patterns based on historical data (e.g., hotel IDs, room types, dates, and party sizes).
- Big Data Processing: Develop scalable data pipelines to process and analyze massive datasets (2 billion searches daily) using distributed computing frameworks.
- Smart Caching Layer: Architect and optimize a predictive cache prefetcher that proactively populates the cache cluster (Redis) with high-value data during 3P off-peak hours.
- Data Analysis: Leverage Elasticsearch and the ES Searches Log to extract insights from search patterns, seasonal trends, and user behavior.
- Model Optimization: Continuously refine predictive models to handle the massive permutations of search parameters, ensuring high accuracy and low latency.
- Collaboration: Work with the Data Team, platform engineers, and 3P proxy teams to integrate models into the existing architecture (Load Balancer, API Gateway, Service Router, Cache Cluster).
- Performance Monitoring: Monitor cache hit/miss ratios, model accuracy, and system performance, using tools like Cache Stats Collector to drive optimization.
- Scalability: Ensure models and pipelines scale horizontally to handle increasing data volumes and traffic spikes.
- Innovation: Stay updated on advancements in machine learning, big data, and distributed systems, proposing novel approaches to enhance predictive capabilities.

Required Skills & Qualifications
Education: Master's or Ph.D. in Data Science, Computer Science, Statistics, or a related field.
Experience:
- 7+ years of experience in data science, with a focus on machine learning and predictive modeling.
- 5+ years of hands-on experience processing and analyzing big data sets (terabyte-scale or larger) in distributed environments.
- Proven track record of building and deploying machine learning models in production for high-traffic systems.
Technical Skills:
- Deep expertise in machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) and algorithms (e.g., regression, clustering, time-series forecasting, neural networks).
- Extensive experience with big data technologies (e.g., Apache Spark, Hadoop, Kafka) for distributed data processing.
- Proficiency in Elasticsearch for search and analytics, including querying and indexing large datasets.
- Strong programming skills in Python, with experience in data science libraries (e.g., Pandas, NumPy, Dask).
- Familiarity with Redis or similar in-memory data stores for caching.
- Knowledge of cloud platforms (e.g., AWS, Azure, GCP) for deploying and scaling data pipelines.
- Experience with SQL and NoSQL databases (e.g., PostgreSQL, MongoDB) for data extraction and transformation.
- Proficiency in designing and optimizing data pipelines for high-throughput, low-latency systems.

Problem-Solving: Exceptional ability to tackle complex problems, such as handling massive permutations of search parameters and predicting trends in dynamic datasets.
Communication: Strong written and verbal communication skills to collaborate with cross-functional teams and present insights to stakeholders.
Work Style: Self-motivated, proactive, and able to thrive in a fast-paced, innovative environment.

Preferred Skills
- Experience in the hospitality or travel industry, particularly with search or booking systems.
- Familiarity with real-time data streaming and event-driven architectures (e.g., Apache Kafka, Flink).
- Knowledge of advanced time-series forecasting techniques for seasonal and cyclical data.
- Exposure to reinforcement learning or online learning for dynamic model adaptation.
- Experience optimizing machine learning models for resource-constrained environments (e.g., edge devices or low-latency systems).
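The prefetching idea described in this posting can be sketched in a few lines: rank historical search keys by frequency and warm the cache with the top ones before peak hours. In the sketch below a plain dict stands in for Redis, and all names and data are hypothetical:

```python
from collections import Counter

def top_keys(search_log, k):
    """Rank historical search keys by frequency; these are the prefetch candidates."""
    return [key for key, _ in Counter(search_log).most_common(k)]

def prefetch(cache, supplier_lookup, keys):
    """Warm the cache with supplier prices for the predicted-hot keys."""
    for key in keys:
        if key not in cache:
            cache[key] = supplier_lookup(key)

# Hypothetical search log: (hotel_id, check_in, party_size) tuples
log = ([("h1", "2024-07-01", 2)] * 5
       + [("h2", "2024-07-02", 4)] * 3
       + [("h3", "2024-07-03", 1)])
cache = {}
prefetch(cache, lambda key: {"price": 100}, top_keys(log, 2))
```

A real prefetcher would replace the frequency count with a learned demand model and the lambda with rate-limited supplier calls scheduled for off-peak windows, but the warm-before-demand shape is the same.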
Posted 2 weeks ago
5.0 - 10.0 years
11 - 21 Lacs
Bhubaneswar, Pune, Bengaluru
Work from Office
About Client: Hiring for one of our multinational corporations!

Job Title: Python Developer
Qualification: Any graduate or above
Relevant Experience: 5 to 12 years

Must-Have Skills:
1. Python
2. Pandas
3. NumPy
4. Flask/Django
5. Docker
6. SQL

Roles and Responsibilities:
1. Design, develop, test, and deploy Python-based applications.
2. Work with libraries such as Pandas and NumPy for data manipulation and analysis.
3. Develop RESTful APIs using Flask or Django.
4. Containerize applications using Docker for deployment in cloud or on-premise environments.
5. Interact with SQL databases for data storage, retrieval, and optimization.
6. Write clean, maintainable, and efficient code following best practices.
7. Collaborate with front-end developers, data engineers, and DevOps teams to deliver complete solutions.
8. Troubleshoot and resolve technical issues as they arise.

Location: Bangalore, Pune, Hyderabad
CTC Range: Up to 30 LPA (Lakhs Per Annum)
Notice Period: 90 days
Mode of Interview: Virtual
Mode of Work: Work from Office

Madhushree C, HR Recruiter
Black and White Business Solutions Pvt Ltd, Bangalore, Karnataka, India
Direct: 08067432410 | WhatsApp: 8431051997 | madhushree.c@blackwhite.in | www.blackwhite.in
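Since this role pairs Flask/Django services with Docker-based deployment, here is a minimal illustrative Dockerfile for a Flask app served by gunicorn. The file layout and the `app:app` entrypoint are assumptions for the sketch, not a prescribed setup:

```dockerfile
# Small base image; pin exact versions in a real build
FROM python:3.11-slim
WORKDIR /srv

# Install dependencies first so this layer caches across code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Serve the Flask app (module "app", variable "app") on the container port
EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
```

Copying `requirements.txt` before the rest of the source is a common layer-caching trick: dependency installation is re-run only when the requirements change, not on every code edit.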
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% NLP Engineers for a unique opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for top-tier Natural Language Processing Engineers with experience in text analytics, LLMs, and speech processing. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
- Pay above market standards.
- Contract-based roles with project timelines from 2-12 months, or freelancing.
- Membership in an elite community of professionals who solve complex AI challenges.
- Work location: remote (highly likely), onsite at a client location, or Deccan AI's office in Hyderabad or Bangalore.

Responsibilities:
- Develop and optimize NLP models (NER, summarization, sentiment analysis) using transformer architectures (BERT, GPT, T5, LLaMA).
- Build scalable NLP pipelines for real-time and batch processing of large text data; optimize models for performance and deploy on cloud platforms (AWS, GCP, Azure).
- Implement CI/CD pipelines for automated training, deployment, and monitoring, and integrate NLP models with search engines, recommendation systems, and RAG techniques.
- Ensure ethical AI practices and mentor junior engineers.

Required Skills:
- Expert Python skills with NLP libraries (Hugging Face, SpaCy, NLTK).
- Experience with transformer-based models (BERT, GPT, T5) and deploying them at scale (Flask, Kubernetes, cloud services).
- Strong knowledge of model optimization, data pipelines (Spark, Dask), and vector databases.
- Familiarity with MLOps, CI/CD (MLflow, DVC), cloud platforms, and data privacy regulations.

Nice to Have:
- Experience with multimodal AI, conversational AI (Rasa, OpenAI API), graph-based NLP, knowledge graphs, and A/B testing for model improvement.
- Contributions to open-source NLP projects or a strong publication record.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted. As soon as you qualify all the screening rounds (assessments, interviews), you will be added to our Expert Community!
4. Profile matching and project allocation: be patient while we align your skills and preferences with the available projects.

Skip the noise. Focus on opportunities built for you!
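The RAG integration mentioned in this posting rests on vector similarity search. Here is a minimal bag-of-words cosine-similarity retriever, purely illustrative: production systems use learned embeddings and a vector database rather than term counts:

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy embedding: bag-of-words term counts (real systems use learned vectors)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs):
    """Return the document most similar to the query."""
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

best = retrieve("book a hotel room",
                ["cancel my flight", "reserve a hotel room tonight"])
```

In a RAG pipeline the retrieved document would then be fed into the LLM prompt as grounding context; the retrieval step itself has exactly this query-against-corpus shape.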
Posted 2 weeks ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Overview: As a Sr. Machine Learning Engineer, you will oversee the entire lifecycle of machine learning models. Your role involves collaborating with cross-functional teams, including data scientists, data engineers, software engineers, and DevOps specialists, to bridge the gap between experimental model development and reliable production systems. You will be responsible for automating ML pipelines, optimizing model training and serving, ensuring model governance, and maintaining the stability of deployed systems. This position requires a blend of experience in software engineering, data engineering, and machine learning systems, along with a strong understanding of DevOps practices to enable faster experimentation, consistent performance, and scalable ML operations.

What You Will Do:
- Work with Data Science leadership and stakeholders to understand business objectives, map the scope of work, and support colleagues in achieving technical deliverables.
- Invest in strong relationships with colleagues and build a successful followership around a common goal.
- Build and optimize ML pipelines for feature engineering, model training, and inference.
- Develop low-latency, high-throughput model endpoints for distributed environments.
- Maintain cloud infrastructure for ML workloads, including GPUs/TPUs, across platforms like GCP, AWS, or Azure.
- Troubleshoot, debug, and validate ML systems for performance and reliability.
- Write and maintain automated tests (unit and integration).
- Support discussions with Data Engineers on data collection, storage, and retrieval processes.
- Collaborate with Data Governance to identify data issues and propose data cleansing or enhancement solutions.
- Drive continuous improvement efforts in enhancing performance and providing increased functionality, including developing processes for automation.

Skills:
- Group Work Lead: Ability to lead portions of pod iteratives; can clearly communicate priorities and play an effective technical support role for colleagues.
- Communication: Maintaining timely communication with management and stakeholders on project progress, issues, and concerns.
- Consultative Mindset: Go beyond just providing analytics and actively engage stakeholders to understand their challenges and goals.
- Cloud & MLOps: Expertise in managing cloud-based ML infrastructure (GCP, AWS, or Azure), coupled with DevOps practices, to ensure seamless model deployment, scalability, and system reliability. This includes containerization, CI/CD pipelines, and infrastructure-as-code tools.
- Proficiency in programming languages such as Python, SQL, and Java.

Who You Are:
- 4+ years of industry experience working with machine learning tools and technologies.
- Experience using TensorFlow, PyTorch, scikit-learn, Kubeflow, Pandas, and NumPy; experience with frameworks like Ray and Dask preferred.
- Expertise in data engineering and object-oriented programming, and familiarity with microservices and cloud technologies.
- An ongoing learner who seeks out emerging technology and can influence others to think innovatively.
- Regularly required to sit, talk, hear; use hands/fingers to touch, handle, and feel. Occasionally required to move about the workplace and reach with hands and arms. Requires close vision.
- Able to work a flexible schedule based on department and company needs.
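Feature pipelines like those this role describes typically fit preprocessing statistics on training data and reuse the same statistics at inference time, so training and serving stay consistent. A minimal standardization sketch with NumPy (illustrative only; scikit-learn's StandardScaler plays this role in practice):

```python
import numpy as np

class Standardizer:
    """Fit mean/std on training data, reuse at inference (train/serve consistency)."""

    def fit(self, X):
        self.mean_ = X.mean(axis=0)
        self.std_ = X.std(axis=0)
        return self

    def transform(self, X):
        return (X - self.mean_) / self.std_

X_train = np.array([[1.0, 10.0], [3.0, 30.0]])
scaler = Standardizer().fit(X_train)
Z = scaler.transform(X_train)
```

The key MLOps point is that `mean_` and `std_` are artifacts: they must be versioned and shipped alongside the model so the serving endpoint applies exactly the transform the model was trained on.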
Posted 2 weeks ago
6.0 - 9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
The Risk division is responsible for credit, market and operational risk, model risk, independent liquidity risk, and insurance throughout the firm.

RISK BUSINESS
The Risk Business identifies, monitors, evaluates, and manages the firm's financial and non-financial risks in support of the firm's Risk Appetite Statement and the firm's strategic plan. Operating in a fast-paced and dynamic environment and utilizing best-in-class risk tools and frameworks, Risk teams are analytically curious, have an aptitude to challenge, and an unwavering commitment to excellence.

Overview
To ensure uncompromising accuracy and timeliness in the delivery of risk metrics, our platform is continuously growing and evolving. Risk Engineering combines the principles of Computer Science, Mathematics, and Finance to produce large-scale, computationally intensive calculations of the risk Goldman Sachs faces with each transaction we engage in. As an Engineer in the Risk Engineering organization, you will have the opportunity to impact one or more aspects of risk management. You will work with a team of talented engineers to drive the build and adoption of common tools, platforms, and applications. The team builds solutions that are offered as a software product or as a hosted service. We are a dynamic team of talented developers and architects who partner with business areas and other technology teams to deliver high-profile projects using a raft of technologies that are fit for purpose (Java, cloud computing, HDFS, Spark, S3, ReactJS, and Sybase IQ, among many others). A glimpse of the interesting problems we engineer solutions for includes acquiring high-quality data, storing it, performing risk computations in a limited amount of time using distributed computing, and making data available to enable actionable risk insights through analytical and response user interfaces.

What We Look For
- Senior Developer on large projects across a global team of developers and risk managers.
- Performance tune applications to improve memory and CPU utilization.
- Perform statistical analyses to identify trends and exceptions related to Market Risk metrics.
- Build internal and external reporting for the output of risk metric calculations using data extraction tools, such as SQL, and data visualization tools, such as Tableau.
- Utilize web development technologies to facilitate application development for front-end UIs used for risk management actions.
- Develop software for calculations using databases like Snowflake, Sybase IQ, and distributed HDFS systems.
- Interact with business users to resolve issues with applications.
- Design and support batch processes using scheduling infrastructure for calculating and distributing data to other systems.
- Oversee junior technical team members in all aspects of the Software Development Life Cycle (SDLC), including design, code review, and production migrations.

Skills And Experience
- Bachelor's degree in Computer Science, Mathematics, Electrical Engineering, or a related technical discipline.
- 6-9 years of experience working in a risk technology team at another bank or financial institution; experience in market risk technology is a plus.
- Experience with one or more major relational/object databases.
- Experience in software development, including a clear understanding of data structures, algorithms, software design, and core programming concepts.
- Comfortable multi-tasking, managing multiple stakeholders, and working as part of a team.
- Comfortable working with multiple languages.
- Technologies: Scala, Java, Python, Spark, Linux and shell scripting, TDD (JUnit), build tools (Maven/Gradle/Ant).
- Experience working with process scheduling platforms like Apache Airflow.
- Should be ready to work with GS proprietary technology like Slang/SECDB.
- An understanding of compute resources and the ability to interpret performance metrics (e.g., CPU, memory, threads, file handles).
- Knowledge and experience in distributed computing: parallel computation on a single machine (e.g., with Dask) and distributed processing on public cloud.
- Knowledge of the SDLC and experience working through the entire life cycle of a project from start to end.

About Goldman Sachs
At Goldman Sachs, we commit our people, capital and ideas to help our clients, shareholders and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities and investment management firm. Headquartered in New York, we maintain offices around the world. We believe who you are makes you better at what you do. We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers. We're committed to finding reasonable accommodations for candidates with special needs or disabilities during our recruiting process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html

© The Goldman Sachs Group, Inc., 2023. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Veteran/Sexual Orientation/Gender Identity
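The distributed risk-calculation pattern this posting describes (partition a large computation, evaluate partitions in parallel, aggregate the results) can be sketched with the standard library; frameworks like Dask and Spark generalize the same map/reduce shape across a cluster. The "risk" measure here is a toy stand-in:

```python
from concurrent.futures import ThreadPoolExecutor

def partition_risk(trades):
    """Toy per-partition measure: sum of absolute exposures."""
    return sum(abs(t) for t in trades)

def total_risk(partitions):
    # Evaluate partitions in parallel, then aggregate: the map/reduce
    # shape that Dask or Spark apply across many machines.
    with ThreadPoolExecutor() as pool:
        return sum(pool.map(partition_risk, partitions))

risk = total_risk([[1.0, -2.0], [3.0], [-4.0, 0.5]])
```

The essential property is that each partition is computed independently, so the work parallelizes cleanly and the aggregation step is a simple associative reduction.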
Posted 3 weeks ago
5.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Introduction: A Career at HARMAN Lifestyle
We're a global, multi-disciplinary team that's putting the innovative power of technology to work and transforming tomorrow. As a member of HARMAN Lifestyle, you connect consumers with the power of superior sound.
- Contribute your talents to high-end, esteemed brands like JBL, Mark Levinson and Revel
- Unite your passion for audio innovation with high-tech product development
- Create pitch-perfect, cutting-edge technology that elevates the listening experience

What You Will Do
- Perform in-depth analysis of data and machine learning models to identify insights and areas of improvement.
- Develop and implement models using both classical machine learning techniques and modern deep learning approaches.
- Deploy machine learning models into production, ensuring robust MLOps practices including CI/CD pipelines, model monitoring, and drift detection.
- Conduct fine-tuning and integrate Large Language Models (LLMs) to meet specific business or product requirements.
- Optimize models for performance and latency, including the implementation of caching strategies where appropriate.
- Collaborate cross-functionally with data scientists, engineers, and product teams to deliver end-to-end ML solutions.

What You Need To Be Successful
- Experience applying various statistical techniques to derive important insights and trends.
- Proven experience in machine learning model development and analysis using classical and neural-network-based approaches.
- Strong understanding of LLM architecture, usage, and fine-tuning techniques.
- Solid understanding of statistics, data preprocessing, and feature engineering.
- Proficiency in Python and popular ML libraries (scikit-learn, PyTorch, TensorFlow, etc.).
- Strong debugging and optimization skills for both training and inference pipelines.
- Familiarity with data formats and processing tools (Pandas, Spark, Dask).
- Experience working with transformer-based models (e.g., BERT, GPT) and the Hugging Face ecosystem.
Bonus Points if You Have
- Experience with MLOps tools (e.g., MLflow, Kubeflow, SageMaker, or similar).
- Experience with monitoring tools (Prometheus, Grafana, or custom solutions for ML metrics).
- Familiarity with cloud platforms (AWS, GCP, Azure; e.g., SageMaker) and containerization (Docker, Kubernetes).
- Hands-on experience with MLOps practices and tools for deployment, monitoring, and drift detection.
- Exposure to distributed training and model-parallelism techniques.
- Prior experience A/B testing ML models in production.

What Makes You Eligible
- Bachelor's or master's degree in Computer Science, Artificial Intelligence, or a related field.
- 5-10 years of relevant, proven experience in developing and deploying generative AI models and agents in a professional setting.

What We Offer
- Flexible work environment, allowing for full-time remote work globally for positions that can be performed outside a HARMAN or customer location
- Access to employee discounts on world-class HARMAN and Samsung products (JBL, Harman Kardon, AKG, etc.)
- Extensive training opportunities through our own HARMAN University
- Competitive wellness benefits
- Tuition reimbursement
- "Be Brilliant" employee recognition and rewards program
- An inclusive and diverse work environment that fosters and encourages professional and personal development

You Belong Here
HARMAN is committed to making every employee feel welcomed, valued, and empowered. No matter what role you play, we encourage you to share your ideas, voice your distinct perspective, and bring your whole self with you – all within a support-minded culture that celebrates what makes each of us unique. We also recognize that learning is a lifelong pursuit and want you to flourish. We proudly offer added opportunities for training, development, and continuing education, further empowering you to live the career you want.

About HARMAN: Where Innovation Unleashes Next-Level Technology
Ever since the 1920s, we've been amplifying the sense of sound.
Today, that legacy endures, with integrated technology platforms that make the world smarter, safer, and more connected. Across automotive, lifestyle, and digital transformation solutions, we create innovative technologies that turn ordinary moments into extraordinary experiences. Our renowned automotive and lifestyle solutions can be found everywhere, from the music we play in our cars and homes to venues that feature today’s most sought-after performers, while our digital transformation solutions serve humanity by addressing the world’s ever-evolving needs and demands. Marketing our award-winning portfolio under 16 iconic brands, such as JBL, Mark Levinson, and Revel, we set ourselves apart by exceeding the highest engineering and design standards for our customers, our partners and each other. If you’re ready to innovate and do work that makes a lasting impact, join our talent community today! HARMAN is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or Protected Veterans status. HARMAN offers a great work environment, challenging career opportunities, professional training, and competitive compensation. (www.harman.com)
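The drift detection named among the MLOps responsibilities above is often approximated with the Population Stability Index (PSI). A minimal, stdlib-only sketch under that assumption (the bin edges, feature values, and 0.2 threshold are illustrative conventions, not from the posting):

```python
# Population Stability Index (PSI) between a reference (training-time)
# feature distribution and a live one, over fixed bin edges.
# All values below are invented for illustration.
import math

def histogram(values, edges):
    """Fraction of values falling into each [edges[i], edges[i+1]) bin."""
    counts = [0] * (len(edges) - 1)
    for v in values:
        for i in range(len(edges) - 1):
            last = i == len(edges) - 2
            if edges[i] <= v < edges[i + 1] or (last and v == edges[-1]):
                counts[i] += 1
                break
    total = max(sum(counts), 1)
    return [c / total for c in counts]

def psi(expected, actual, edges, eps=1e-6):
    """PSI > 0.2 is a common rule-of-thumb signal of significant drift."""
    e = histogram(expected, edges)
    a = histogram(actual, edges)
    return sum((ai - ei) * math.log((ai + eps) / (ei + eps))
               for ei, ai in zip(e, a))

edges = [0.0, 0.25, 0.5, 0.75, 1.0]
train = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
```

In a real deployment the same comparison would run on a schedule against fresh inference logs, with an alert (and possibly a retraining trigger) when the index crosses the chosen threshold.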
Posted 3 weeks ago
3.5 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We're Hiring! Machine Learning Engineer for a B2B SaaS startup in Noida.
🔹 Position: Machine Learning Engineer
🔹 Experience: 3.5 to 5 Years
🔹 Location: Noida, Sector 90
🔹 Work Mode: 5 Days | Work From Office
🔹 Notice Period: Immediate to 30 Days

Key Responsibilities
- Design, develop, and optimize machine learning models for various business applications.
- Build and maintain scalable AI feature pipelines for efficient data processing and model training.
- Develop robust data ingestion, transformation, and storage solutions for big data.
- Implement and optimize ML workflows, ensuring scalability and efficiency.
- Monitor and maintain deployed models, ensuring performance and reliability, and retrain when necessary.

Qualifications and Experience
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- 3.5+ years of experience in machine learning, deep learning, or data science roles.
- Proficiency in Python and ML frameworks/tools such as PyTorch and LangChain.
- Experience with data processing frameworks like Spark, Dask, Airflow, and Dagster.
- Hands-on experience with cloud platforms (AWS, GCP, Azure) and ML services.
- Experience with MLOps tools like MLflow and Kubeflow.
- Familiarity with containerisation and orchestration tools like Docker and Kubernetes.
- Excellent problem-solving skills and ability to work in a fast-paced environment.
- Strong communication and collaboration skills.

If you're ready to take on a leadership role and thrive in a dynamic startup environment, share your profile at gautam@mounttalent.com.
Posted 3 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
As a Senior Data Scientist, you will play a crucial role in developing and deploying advanced machine learning models and data science solutions. You will collaborate closely with cross-functional teams to analyze data, build predictive models, and deploy solutions at scale. The ideal candidate will have a strong foundation in machine learning, statistics, and data modeling, coupled with expertise in Python and experience with distributed computing frameworks such as PySpark and Dask, and web development frameworks like FastAPI and Streamlit.

Responsibilities
- Lead the development and implementation of machine learning models to address complex business challenges and opportunities.
- Conduct comprehensive data analysis to extract insights and inform decision-making processes.
- Collaborate with stakeholders to define project objectives, requirements, and success criteria.
- Develop and optimize machine learning algorithms to improve performance, efficiency, and scalability.
- Deploy machine learning models into production environments and ensure reliability and scalability.
- Monitor model performance and conduct regular evaluations to identify areas for optimization and improvement.
- Utilize explainable AI techniques to interpret model predictions and provide actionable insights to stakeholders.
- Conduct A/B testing to evaluate model performance and validate hypotheses.
- Leverage distributed computing frameworks such as PySpark and Dask to handle large-scale data processing tasks.
- Develop interactive web applications for data visualization and model deployment using FastAPI and Streamlit.
- Stay current with advancements in machine learning, statistics, and data science techniques and tools.

Qualifications
- Bachelor's or master's degree in Computer Science, Statistics, Mathematics, or a related field.
- 4+ years of experience in data science, machine learning, or related fields.
- Proficiency in the Python programming language for data analysis, machine learning, and web development.
- Strong understanding of machine learning algorithms, statistical concepts, and data modeling techniques.
- Hands-on experience with machine learning libraries such as NumPy, SciPy, and scikit-learn.
- Experience with distributed computing frameworks such as PySpark and Dask.
- Familiarity with model monitoring, deployment, and lifecycle management processes.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Experience with LLMs is a plus.
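The A/B testing responsibility above usually reduces to a significance test on two conversion rates. A hedged, stdlib-only sketch using a two-proportion z-test (the traffic and conversion counts are invented for the example):

```python
# Two-proportion z-test for H0: p_a == p_b, using the pooled variance.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic comparing conversion rates of variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: B converts 12% vs. 10% for A, 5,000 users each.
z = two_proportion_z(500, 5000, 600, 5000)
```

With |z| above roughly 1.96 the difference is significant at the 5% level; in practice a library such as SciPy or statsmodels would supply the p-value directly.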
Posted 4 weeks ago
0 years
0 Lacs
India
On-site
About Demandbase:
Demandbase helps B2B companies hit their revenue goals using fewer resources. How? By using the power of AI to identify and engage the accounts and buying groups most likely to purchase. Our account-based technology unites sales and marketing teams around insights that you can understand and facilitates quick actions across systems and channels to deliver big wins. As a company, we're as committed to growing careers as we are to building world-class technology. We invest heavily in people, our culture, and the community around us. We have offices in the San Francisco Bay Area, New York, Seattle, and teams in the UK and India. We are Great Place to Work Certified.

We're committed to attracting, developing, retaining, and promoting a diverse workforce. By ensuring that every Demandbase employee is able to bring a diversity of talents to work, we're increasingly capable of achieving our mission to transform the way B2B companies go to market. We encourage people from historically underrepresented backgrounds and all walks of life to apply. Come grow with us at Demandbase!

About the Role:
As a Senior ML Engineer, you'll have a strategic role in driving data-driven insights and developing production-level machine learning models to solve high-impact, complex business problems. This role is suited for an individual with a strong foundation in both deep learning and traditional machine learning techniques, capable of handling challenges at scale. You will work across teams to create, optimize, and deploy advanced ML models, combining both modern approaches (like deep neural networks and large language models) and proven algorithms to deliver transformative solutions.

Responsibilities:
1. Machine Learning Model Development and Productionization
- Develop, implement, and productionize scalable ML models to address complex business issues, optimizing for both performance and efficiency.
- Create and refine models using deep learning architectures as well as traditional ML techniques.
- Collaborate with ML engineers and data engineers to deploy models at scale in production environments, ensuring model performance remains robust over time.

2. End-to-End Solution Ownership
- Translate high-level business challenges into data science problems, developing solutions that are both technically sound and aligned with strategic goals.
- Own the full model lifecycle, from data exploration and feature engineering through to model deployment, monitoring, and continuous improvement.
- Collaborate with cross-functional teams (product, engineering, analytics & research) to embed data-driven insights into business decisions and product development.
- Take end-to-end ownership of solutions and ensure their resilience in production environments.

3. Experimentation, Testing, and Performance Optimization
- Conduct rigorous A/B tests, evaluate model performance, and iterate on solutions based on feedback and performance metrics.
- Employ best practices in machine learning experimentation, validation, and hyperparameter tuning to ensure models achieve optimal accuracy and efficiency.

4. Data Management and Quality Assurance
- Work closely with data engineering teams to ensure high-quality data pipeline design, data integrity, and data processing standards.
- Actively contribute to data governance initiatives to maintain robust data standards and ensure compliance with best practices in data privacy and ethics.

5. Innovation and Research
- Stay at the forefront of machine learning research and innovations, particularly in neural networks, generative AI, and LLMs, bringing insights to the team for potential integration.
- Prototype and experiment with new ML techniques and architectures to improve the capabilities of our data science/ML solutions.
- Support the AI strategy for Demandbase and align business metrics with data science goals.
6. Mentorship and Team Leadership
- Mentor junior data scientists/ML engineers and collaborate with peers, fostering a culture of continuous learning, innovation, and excellence.
- Lead technical discussions, provide guidance on best practices, and contribute to a collaborative and high-performing team environment.

Required Qualifications:
Education: B.Tech/M.Tech in Computer Science or Data Science; a bachelor's degree in computer science, statistics, maths, or science; or a master's degree in data science, computer science, or a related field.
Experience: 8+ years of experience in data science/ML, with a strong emphasis on production-level ML models, including both deep learning and traditional approaches.
Technical Skills:
- Expertise in deep learning frameworks such as TensorFlow, PyTorch, or Keras.
- Proficiency in Python and experience with data science libraries (e.g., scikit-learn, Pandas, NumPy).
- Strong grasp of algorithms for both deep neural networks and classical ML (e.g., regression, clustering, SVMs, ensemble models).
- Experience deploying models in production, using tools like Docker, Kubernetes, and cloud platforms (AWS, GCP).
- Knowledge of A/B testing, model evaluation metrics, and experimentation best practices.
- Proficiency in SQL and experience with data warehousing solutions.
- Familiarity with distributed computing frameworks (Spark, Dask) for large-scale data processing.
Soft Skills:
- Exceptional problem-solving skills with a business-driven approach.
- Strong communication skills to articulate complex ideas and solutions to non-technical stakeholders.
- Ability to lead projects and mentor team members.
Good-to-have skills:
- Experience with LLMs, transfer learning, or multimodal models for solving advanced business cases.
- Experience with tools and models such as LLaMA, high-volume recommendation systems, and duplicate detection using ML.
- Understanding of MLOps practices and tools (e.g., MLflow, Airflow) for streamlined model deployment and monitoring.
- Experience in data observability and CI/CD.

What We Offer
- Opportunity to work in a cutting-edge environment, solving real-world business problems at scale.
- Competitive compensation and benefits, including health, wellness, and educational allowances.
- Professional growth opportunities and support for continuous learning.

This role is ideal for a data science/ML engineer who is passionate about applying advanced machine learning and AI to drive business value in a fast-paced, high-impact environment. If you're eager to innovate and push boundaries in a collaborative and forward-thinking team, we'd love to meet you!
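The "model evaluation metrics" skill listed above boils down to a handful of confusion-matrix ratios. A minimal stdlib-only sketch of precision, recall, and F1 (the label vectors are invented for illustration; real work would use scikit-learn's `metrics` module):

```python
# Precision, recall, and F1 computed from paired true/predicted labels.
def precision_recall_f1(y_true, y_pred, positive=1):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical binary predictions from a classifier.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
p, r, f = precision_recall_f1(y_true, y_pred)
```

Which metric to optimize depends on the cost of false positives versus false negatives in the business problem, which is why the posting pairs metric knowledge with experimentation practices.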
Posted 4 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Key Responsibilities:
- Development and training of foundational models across modalities.
- End-to-end lifecycle management of foundational model development, from data curation to model deployment, in collaboration with the core team members.
- Conduct research to advance model accuracy and efficiency.
- Implement state-of-the-art AI techniques in text/speech and language processing.
- Collaborate with cross-functional teams to build robust AI stacks and integrate them seamlessly into production pipelines.
- Develop pipelines for debugging, CI/CD and observability of the development process.
- Demonstrated ability to lead projects and provide innovative solutions.
- Document technical processes, model architectures, and experimental results; maintain clear and organized code repositories.

Education: Bachelor's or Master's in any related field with 2 to 5 years of industry experience in applied AI/ML.

Minimum Requirements: Proficiency in Python programming and familiarity with 3-4 of the tools listed below:
- Foundational model libraries and frameworks (TensorFlow, PyTorch, HF Transformers, NeMo, etc.)
- Distributed training (SLURM, Ray, PyTorch DDP, DeepSpeed, NCCL, etc.)
- Inference servers (vLLM)
- Version control systems and observability (Git, DVC, MLflow, W&B, Kubeflow)
- Data analysis and curation tools (Dask, Milvus, Apache Spark, NumPy)
- Text-to-Speech tools (Whisper, Voicebox, VALL-E (X), HuBERT/UnitSpeech)
- LLMOps tools, Docker, etc.
- Hands-on experience with AI application libraries and frameworks (DSPy, LangGraph, LangChain, LlamaIndex, etc.)
Posted 4 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: SAP
Management Level: Senior Associate

Job Description & Summary
A career within Enterprise Architecture services will provide you with the opportunity to bring our clients a competitive advantage by defining their technology objectives, assessing solution options, and devising architectural solutions that help them achieve both strategic goals and operational requirements. We help build software and design data platforms, manage large volumes of client data, develop compliance procedures for data management, and continually research new technologies to drive innovation and sustainable change.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Job Description & Summary: We are looking for seasoned Python Developers.

Responsibilities
- Create large-scale data processing pipelines to help developers build and train novel machine learning algorithms.
- Participate in code reviews, ensure code quality and identify areas for improvement to implement practical solutions.
- Debug code when required and troubleshoot any Python-related queries.
- Keep up to date with emerging trends and technologies in Python development.

Mandatory Skill Sets
- Strong Python with experience in data structures.
- Strong in OOP.
- Problem-solving skills.

Preferred Skill Sets
- 3+ years of experience as a Python Developer with a strong portfolio of projects.
- Bachelor's degree in Computer Science, Software Engineering or a related field.
- In-depth understanding of the Python software development stack, ecosystem, frameworks and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn and PyTorch.
- Experience with front-end development using HTML, CSS, and JavaScript.
- Familiarity with database technologies such as SQL and NoSQL.
- Excellent problem-solving ability with solid communication and collaboration skills.

Years of Experience Required: 4+ years
Education Qualification: BE/B.Tech/MBA
Education (if blank, degree and/or field of study not specified) — Degrees/Field of Study required: Bachelor Degree, Master of Business Administration, Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Python (Programming Language)
Optional Skills:
Desired Languages (if blank, desired languages not specified)
Travel Requirements:
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date
Posted 4 weeks ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
Job Description
We are looking for a passionate Python developer to join our team at [Company X]. You will be responsible for developing and implementing high-quality software solutions, creating complex applications using cutting-edge programming features and frameworks, and collaborating with other teams in the firm to define, design and ship new features. As an active part of our company, you will brainstorm and chalk out solutions to suit our requirements and meet our business goals. You will also work on data engineering problems and build data pipelines. You will get ample opportunities to work on challenging and innovative projects using the latest technologies and tools. If you enjoy working in a fast-paced and collaborative environment, we encourage you to apply for this exciting role. We offer industry-standard compensation packages, relocation assistance, and professional growth and development opportunities.

Objectives of this role
- Develop, test and maintain high-quality software using the Python programming language.
- Participate in the entire software development lifecycle, building, testing and delivering high-quality solutions.
- Collaborate with cross-functional teams to identify and solve complex problems.
- Write clean and reusable code that can be easily maintained and scaled.

Your tasks
- Create large-scale data processing pipelines to help developers build and train novel machine learning algorithms.
- Participate in code reviews, ensure code quality and identify areas for improvement to implement practical solutions.
- Debug code when required and troubleshoot any Python-related queries.
- Keep up to date with emerging trends and technologies in Python development.

Required skills and qualifications
- 3+ years of experience as a Python Developer with a strong portfolio of projects.
- Bachelor's degree in Computer Science, Software Engineering or a related field.
- In-depth understanding of the Python software development stack, ecosystem, frameworks and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn and PyTorch.
- Experience with front-end development using HTML, CSS, and JavaScript.
- Familiarity with database technologies such as SQL and NoSQL.
- Excellent problem-solving ability with solid communication and collaboration skills.

Preferred skills and qualifications
- Experience with popular Python frameworks such as Django, Flask or Pyramid.
- Knowledge of data science and machine learning concepts and tools.
- A working understanding of cloud platforms such as AWS, Google Cloud or Azure.
- Contributions to open-source Python projects or active involvement in the Python community.

Desired Skills: Leadership, teamwork, communication, Python full stack, web development
Remotely Allowed: In-office
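The data-pipeline work described in this posting often starts as a lazy extract → transform → load chain; a hedged, stdlib-only sketch built from generators, so records stream through without being held in memory at once (the CSV-like record shape is invented for illustration):

```python
# A minimal generator-based ETL chain: each stage consumes the previous
# one lazily, which scales to inputs far larger than memory.
def extract(rows):
    for line in rows:                    # e.g. lines read from a file
        yield line.strip()

def transform(records):
    for rec in records:
        name, score = rec.split(",")
        yield {"name": name, "score": int(score)}

def load(records):
    # Keep only passing rows; in real pipelines this would write to a sink.
    return [r for r in records if r["score"] >= 50]

raw = ["alice,72", "bob,45 ", "carol,88"]
result = load(transform(extract(raw)))
```

At production scale the same shape is what Dask or Spark provide, with partitioning and distributed execution layered on top of the same map/filter structure.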
Posted 1 month ago
5 - 10 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking a highly skilled and motivated Lead DS/ML Engineer to join our team. The role is critical to the development of a cutting-edge reporting platform designed to measure and optimize online marketing campaigns. The ideal candidate is a Data Scientist / ML Engineer with a strong foundation in data engineering (ELT, data pipelines) and advanced machine learning, who will build scalable data pipelines, develop ML models, and deploy solutions in production to support a reporting, insights, and recommendations platform for measuring and optimizing online marketing campaigns. The candidate should be comfortable working across data engineering, the ML model lifecycle, and cloud-native technologies.

Job Description:

Key Responsibilities:

Data Engineering & Pipeline Development
- Design, build, and maintain scalable ELT pipelines for ingesting, transforming, and processing large-scale marketing campaign data.
- Ensure high data quality, integrity, and governance using orchestration tools like Apache Airflow, Google Cloud Composer, or Prefect.
- Optimize data storage, retrieval, and processing using BigQuery, Dataflow, and Spark for both batch and real-time workloads.
- Implement data modeling and feature engineering for ML use cases.

Machine Learning Model Development & Validation
- Develop and validate predictive and prescriptive ML models to enhance marketing campaign measurement and optimization.
- Experiment with different algorithms (regression, classification, clustering, reinforcement learning) to drive insights and recommendations.
- Leverage NLP, time-series forecasting, and causal inference models to improve campaign attribution and performance analysis.
- Optimize models for scalability, efficiency, and interpretability.

MLOps & Model Deployment
- Deploy and monitor ML models in production using tools such as Vertex AI, MLflow, Kubeflow, or TensorFlow Serving.
- Implement CI/CD pipelines for ML models, ensuring seamless updates and retraining.
- Develop real-time inference solutions and integrate ML models into BI dashboards and reporting platforms.

Cloud & Infrastructure Optimization
- Design cloud-native data processing solutions on Google Cloud Platform (GCP), leveraging services such as BigQuery, Cloud Storage, Cloud Functions, Pub/Sub, and Dataflow.
- Work on containerized deployment (Docker, Kubernetes) for scalable model inference.
- Implement cost-efficient, serverless data solutions where applicable.

Business Impact & Cross-functional Collaboration
- Work closely with data analysts, marketing teams, and software engineers to align ML and data solutions with business objectives.
- Translate complex model insights into actionable business recommendations.
- Present findings and performance metrics to both technical and non-technical stakeholders.

Qualifications & Skills:

Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Artificial Intelligence, Statistics, or a related field.
- Certification in Google Cloud (Professional Data Engineer, ML Engineer) is a plus.

Must-Have Skills:
- Experience: 5-10 years of relevant hands-on experience with the skill set below.
- Data Engineering: Experience with ETL/ELT pipelines, data ingestion, transformation, and orchestration (Airflow, Dataflow, Composer).
- ML Model Development: Strong grasp of statistical modeling, supervised/unsupervised learning, time-series forecasting, and NLP.
- Programming: Proficiency in Python (Pandas, NumPy, scikit-learn, TensorFlow/PyTorch) and SQL for large-scale data processing.
- Cloud & Infrastructure: Expertise in GCP (BigQuery, Vertex AI, Dataflow, Pub/Sub, Cloud Storage) or equivalent cloud platforms.
- MLOps & Deployment: Hands-on experience with CI/CD pipelines, model monitoring, and version control (MLflow, Kubeflow, Vertex AI, or similar tools).
- Data Warehousing & Real-time Processing: Strong knowledge of modern data platforms for batch and streaming data processing.

Nice-to-Have Skills:
- Experience with graph ML, reinforcement learning, or causal inference modeling.
- Working knowledge of BI tools (Looker, Tableau, Power BI) for integrating ML insights into dashboards.
- Familiarity with marketing analytics, attribution modeling, and A/B testing methodologies.
- Experience with distributed computing frameworks (Spark, Dask, Ray).

Location: Bengaluru
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
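The time-series forecasting skill this posting asks for has a classical baseline worth knowing before reaching for heavier models: simple exponential smoothing. A stdlib-only sketch (the alpha value and the click series are illustrative, not from the posting):

```python
# Simple exponential smoothing: the final smoothed level doubles as
# the one-step-ahead forecast. alpha controls how fast old data decays.
def exp_smooth(series, alpha=0.5):
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# Hypothetical daily ad-click counts for a campaign.
daily_clicks = [100, 120, 110, 130, 125]
forecast = exp_smooth(daily_clicks, alpha=0.5)
```

Production forecasting would add trend and seasonality terms (Holt-Winters and beyond), but a baseline like this is the yardstick any fancier campaign-metric model must beat.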
Posted 1 month ago
7 - 10 years
19 - 27 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Python Developer (NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn, and PyTorch)

As part of the team, your primary role would be to get to the heart of customer issues, diagnose problem areas, design innovative solutions and facilitate deployment, resulting in client delight. You will develop proposals by owning parts of the proposal document and by giving inputs on solution design in your areas of expertise. You will plan configuration activities, configure the product as per the design, conduct conference room pilots and assist in resolving any queries related to requirements and solution design. You will conduct solution/product demonstrations and POC/Proof of Technology workshops, and prepare effort estimates that suit the customer's budgetary requirements and are in line with the organization's financial guidelines. You will actively lead small projects and contribute to unit-level and organizational initiatives with the objective of providing high-quality, value-adding solutions to customers.
Posted 1 month ago
5 - 8 years
2 - 7 Lacs
Hosur
Work from Office
We are looking for an experienced Python Developer who possesses a deep understanding of the Python programming language and its ecosystem. The ideal candidate will be responsible for designing, implementing, and maintaining high-performance, scalable applications while collaborating with cross-functional teams to deliver exceptional software solutions. The role offers an exciting opportunity to work on diverse projects and leverage emerging technologies to drive innovation.
Qualifications:
1. Bachelor's degree in Computer Science, Engineering, or a related field; Master's degree preferred.
2. Proven experience (5 to 8 years) as a Python Developer or in a similar role, with a strong portfolio of Python-based projects and applications.
3. Proficiency in the Python programming language and its standard libraries, frameworks, and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn, and PyTorch.
4. Experience with REST API libraries and frameworks such as Django, Flask, and SQLAlchemy.
5. Solid understanding of object-oriented programming (OOP) principles, data structures, and algorithms.
6. Experience with database design, SQL, and ORM frameworks (e.g., SQLAlchemy, Django ORM).
7. Familiarity with front-end technologies such as HTML, CSS, JavaScript, and client-side frameworks (e.g., React, Angular, Vue.js).
8. Knowledge of version control systems (e.g., Git) and collaborative development workflows (e.g., GitHub, GitLab).
9. Strong analytical and problem-solving skills, with keen attention to detail and a passion for continuous improvement.
10. Excellent communication and interpersonal skills, with the ability to collaborate effectively in a team environment and communicate technical concepts to non-technical stakeholders.
Preferred Qualifications:
1. Experience with cloud platforms and services (e.g., AWS, Azure, Google Cloud Platform).
2. Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).
3. Understanding of software testing principles and practices, including unit testing, integration testing, and test automation frameworks (e.g., pytest).
4. Familiarity with DevOps practices and CI/CD pipelines for automated software deployment and delivery.
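The OOP and pytest items above can be illustrated with a short, stdlib-only sketch; the `Order` class, item names, and prices here are invented for demonstration and are not part of the role:

```python
from dataclasses import dataclass, field

@dataclass
class Order:
    """A tiny domain object used to demonstrate OOP plus pytest-style testing."""
    items: list = field(default_factory=list)

    def add(self, name: str, price: float, qty: int = 1) -> None:
        # Store each line item as a (name, unit price, quantity) tuple.
        self.items.append((name, price, qty))

    def total(self) -> float:
        # Sum of unit price times quantity across all line items.
        return sum(price * qty for _, price, qty in self.items)

# A pytest-style test: a plain function whose asserts pytest collects and runs.
def test_total():
    order = Order()
    order.add("pen", 10.0, 3)
    order.add("notebook", 45.0)
    assert order.total() == 75.0
```

Running `pytest` against a file containing this code would discover and execute `test_total` automatically.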
Posted 1 month ago
0 years
0 Lacs
Navi Mumbai, Maharashtra
Remote
As an expectation, a fitting candidate must have/be:
Ability to analyze business problems and cut through data challenges.
Ability to churn the raw corpus and develop a data/ML model to provide business analytics (not just EDA), machine-learning-based document processing, and information retrieval.
Quick to develop POCs and transform them into high-scale, production-ready code.
Experience in extracting data from complex unstructured documents using NLP-based technologies.
Good to have: Document analysis using image processing/computer vision and geometric deep learning.
Technology Stack:
Python as the primary programming language.
Conceptual understanding of classic ML/DL algorithms like Regression, Support Vector Machines, Decision Trees, Clustering, Random Forest, CART, Ensembles, Neural Networks, CNN, RNN, LSTM, etc.
Programming: Must Have: Hands-on with data structures using list, tuple, dictionary, collections, iterators, Pandas, NumPy, and object-oriented programming. Good to have: Design patterns/system design, Cython.
ML libraries: Must Have: scikit-learn, XGBoost, imblearn, SciPy, Gensim. Good to have: matplotlib/Plotly, LIME/SHAP.
Data extraction and handling: Must Have: Dask/Modin, BeautifulSoup/Scrapy, multiprocessing. Good to have: Data augmentation, PySpark, Accelerate.
NLP/Text analytics: Must Have: Bag of words, text ranking algorithms, Word2vec, language models, entity recognition, CRF/HMM, topic modelling, sequence-to-sequence. Good to have: Machine comprehension, translation, Elasticsearch.
Deep learning: Must Have: TensorFlow/PyTorch, neural nets, sequential models, CNN, LSTM/GRU/RNN, attention, Transformers, residual networks. Good to have: Knowledge of optimization, distributed training/computing, language models.
Software peripherals: Must Have: REST services, SQL/NoSQL, UNIX, code versioning. Good to have: Docker containers, data versioning.
Research: Must Have: Well versed with the latest trends in ML and DL. Zeal to research and implement cutting-edge areas of AI to solve complex problems. Good to have: Contributions to published research papers/patents in ML and DL.
Morningstar is an equal opportunity employer. Morningstar's hybrid work environment gives you the opportunity to work remotely and collaborate in person each week. We've found that we're at our best when we're purposely together on a regular basis, at least three days each week. A range of other benefits are also available to enhance flexibility as needs change. No matter where you are, you'll have tools and resources to engage meaningfully with your global colleagues. Morningstar India Private Ltd. (Delhi) Legal Entity
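The bag-of-words item in the NLP stack above can be sketched with the standard library alone; the tokenizer and cosine-similarity helper below are a toy illustration, not the team's actual pipeline:

```python
import re
from collections import Counter

def bag_of_words(doc: str) -> Counter:
    # Lowercase, split on non-letter characters, count token frequencies.
    tokens = re.findall(r"[a-z]+", doc.lower())
    return Counter(tokens)

def cosine_similarity(a: Counter, b: Counter) -> float:
    # Dot product over shared terms, normalised by each vector's length.
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = (sum(v * v for v in a.values()) ** 0.5) * (sum(v * v for v in b.values()) ** 0.5)
    return dot / norm if norm else 0.0
```

Real systems would replace this with TF-IDF weighting or learned embeddings (Word2vec), but the vector-space idea is the same.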
Posted 1 month ago
5 - 8 years
15 - 30 Lacs
Gurugram
Remote
Expert in Python, with knowledge of at least one Python web framework such as Flask, FastAPI, Django, etc. Knowledge of and experience with Python libraries like NumPy, Pandas, Polars, and Dask. Familiarity with some ORM (Object Relational Mapper) libraries. Familiarity with relational and/or NoSQL databases. Proficient in using editors like VS Code, Eclipse, PyCharm, PyScripter, etc. Understanding of the fundamental design principles behind a scalable application. Strong unit testing and debugging skills. Proficient understanding of code versioning tools such as Git, Mercurial, or SVN. Basic understanding of front-end technologies such as JavaScript, HTML5, and CSS3. Knowledge of Kafka would be a huge plus.
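As a small illustration of what the ORM libraries mentioned above automate, here is a hand-rolled row-to-object mapping over the stdlib `sqlite3` module; the `users` table and its data are invented for the example:

```python
import sqlite3

# In-memory database standing in for a real relational store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("asha",))

class User:
    """Plain object an ORM would map a row onto."""
    def __init__(self, id: int, name: str):
        self.id = id
        self.name = name

def get_user(user_id: int) -> User:
    # The query-and-construct step an ORM generates for you.
    row = conn.execute(
        "SELECT id, name FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    return User(*row)
```

An ORM such as SQLAlchemy generates this boilerplate from a declarative class definition instead.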
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Scope
We need a data engineer with knowledge of Python, OOP concepts, SQL, SQLAlchemy, Docker, Airflow, GitHub Actions, Pandas, Dask (or a distributed framework like PySpark), and software architecture to manage the data engineering side of things for Clearance Pricing.
What You'll Do
Primary Duties and Responsibilities
Consistently delivers solid quality in both design and implementation and helps the team shape what is built and how, in particular:
Develops quality software (including effective test code) according to clean code principles and Blue Yonder standards.
Provides input for the prioritization of issues in the backlog and autonomously pulls issues or supports other team members as appropriate.
Participates in team activities such as backlog grooming, planning, daily stand-ups, and retrospectives.
Translates business requirements to user stories and actively seeks feedback from the stakeholders.
Takes the lead in designs of individual stories and translates the design into subtasks.
Considers aspects of information security while coding and reviewing others' code.
Keeps up to date with technology and researches the latest trends in the industry.
Is perceived as the expert in a small area and is the go-to person for related implementation and operational issues.
Additions for service-delivering teams
Raises operational concerns during design phases.
Produces actionable user stories to relieve operational pain.
Plans and performs more complex changes and manages stakeholder expectations.
Independently resolves incidents, drives associated post-mortem analyses, and ensures the outcome is actionable for the team.
Strives to replace service requests with self-service functionality and automation.
Understands the cost structure of delivered services and makes cost data transparent to users.
Secondary Duties and Responsibilities
Actively provides feedback during code reviews.
Onboards new members to the team and helps develop junior engineers.
Understands functional and technical requirements of software components.
Participates in team hiring activities.
Feeds larger refactoring opportunities into the team's backlog.
Evolves the team's continuous integration pipeline and fixes broken builds.
Performs benchmark analyses, identifies hot spots, and derives appropriate measures to improve performance.
Demonstrates problem-solving and innovation ability.
Acts according to company and team visions and requires user stories to adhere to those visions.
Has a deep understanding of the team's problem domain.
Clearly understands and communicates the impact of changes in the team's deliverables on other teams and customers.
Timely and proactively communicates impediments to commitments and helps others overcome theirs.
What We Are Looking For
Proficiency in Python with OOP and SOLID principles
Experience with Django/Flask/FastAPI
Knowledge of SQL/Snowflake/Exasol, SQLAlchemy
Familiarity with Pandas, NumPy, Dask (or a distributed framework like PySpark)
Experience with Terraform
Proficiency in Airflow, Docker
Knowledge of Git, GitHub Actions
Our Values
If you want to know the heart of a company, take a look at their values. Ours unite us. They are what drive our success – and the success of our customers. Does your heart beat like ours? Find out here: Core Values
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
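The Dask/PySpark item in the stack above boils down to partitioning work and aggregating partial results; a toy stdlib sketch of that pattern (the chunk count and the summing task are arbitrary choices for illustration) is:

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(chunk: list) -> int:
    # The per-partition task a distributed framework would ship to workers.
    return sum(chunk)

def partitioned_sum(data: list, n_parts: int = 4) -> int:
    # Split the data into roughly equal partitions.
    size = max(1, len(data) // n_parts)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Process partitions concurrently, then reduce the partial results.
    with ThreadPoolExecutor() as pool:
        return sum(pool.map(chunk_sum, chunks))
```

Dask's `dask.bag` or a Spark RDD applies the same map-then-reduce shape across machines rather than threads.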
Posted 1 month ago
5 - 8 years
0 Lacs
Greater Hyderabad Area
On-site
Scope
Lead the creation and deployment of machine learning models for demand forecasting. Discover opportunities to enhance supply chain efficiency by applying various data science techniques, analyzing customer data, and experimenting to solve real-world business challenges in retail planning. Design and develop data-intensive systems using Python and frameworks like Dask, with an emphasis on performance and scalability. Develop forecasting models to provide insights into expected returns, helping retailers adjust stocking levels, labor, and reordering strategies.
What You'll Do
Primary Duties and Responsibilities
Consistently delivers solid quality and helps the team, in particular:
Is responsible for designing, developing, and testing new algorithms, models, or solution approaches based on machine learning, operations research, or other techniques to solve a Blue Yonder business problem under the supervision of a senior team member or manager.
Stays current with scientific libraries and development tools. Continuously improves themselves and the code they produce.
Develops quality software according to clean code principles and Blue Yonder standards.
Has an understanding of the problem domain the team works on.
Is perceived as the expert in a small area within the team and is the go-to person for related implementation and operational issues.
Participates in the operation of machine learning services through rolling out changes, resolving incidents, and fulfilling service requests.
Clearly communicates impediments and actively seeks support from team members to overcome obstacles.
Secondary Duties and Responsibilities
Participates in team activities such as backlog grooming, planning, daily stand-ups, and retrospectives.
Demonstrates problem-solving and innovation ability.
Integrates new models and algorithms into a product solution with limited assistance.
Translates business requirements to user stories and actively seeks feedback from the stakeholders.
Timely and proactively communicates impediments to commitments.
Communicates information in the team and internally, and assists in external communication with customers and partners.
Acts according to company and team visions and requires user stories to adhere to those visions.
Writes effective test cases.
Has mastered some of Blue Yonder's domain expertise with knowledge acquired from past projects.
Actively provides feedback during code reviews.
Understands functional and technical requirements of software components.
Participates in translating scientific contributions into patents, publications, or conference talks.
Autonomously pulls issues from the team backlog or supports other team members with their issues as appropriate.
Participates in team hiring activities.
Understands the infrastructure costs of machine learning algorithms embedded in delivered products and services and makes diligent use of provided resources.
Identifies inefficient use of computational resources in the machine learning pipeline and uses adequate patterns and technologies to resolve this.
Scales the machine learning pipeline to meet throughput targets and time windows.
What We Are Looking For
Machine Learning, Deep Learning, Statistical Analysis
Optimization: Markov Decision Processes, dynamic programming, etc.
Data Visualization: Streamlit, etc.
Python, Object-Oriented Programming (OOP) Concepts
SQL, Snowflake
Pandas, PyTorch, scikit-learn, etc.
Our Values
If you want to know the heart of a company, take a look at their values. Ours unite us. They are what drive our success – and the success of our customers. Does your heart beat like ours? Find out here: Core Values
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
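A demand forecast of the kind this role describes can be as simple as single exponential smoothing; this stdlib sketch (the `alpha` default is an arbitrary choice for illustration, not Blue Yonder's actual method) shows the core idea:

```python
def exponential_smoothing(series: list, alpha: float = 0.5) -> float:
    """Single exponential smoothing: the next forecast blends the latest
    observation with the previous forecast, weighted by alpha."""
    forecast = series[0]
    for obs in series[1:]:
        forecast = alpha * obs + (1 - alpha) * forecast
    return forecast
```

For example, with demand `[0, 10]` and `alpha=0.5` the next-period forecast is `5.0`; production systems would layer seasonality, promotions, and returns on top of this baseline.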