
339 Dask Jobs - Page 4

JobPe aggregates results for easy access, but you apply directly on the employer's job portal.

1.0 - 3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

We are looking for a talented Python AI/ML Developer with a strong foundation in machine learning, natural language processing, and data science. If you have a passion for building smart, conversational AI systems and want to work on cutting-edge technology, we’d love to hire you!

Benefits:
- 5 Days a Week
- Health Insurance
- Flexible Timings
- Open Work Culture
- Workshops & Webinars
- Awards & Recognition
- Festive Celebrations

Key Responsibilities:
- Advanced Model Development: Design and implement cutting-edge deep learning models using frameworks like PyTorch and TensorFlow to address specific business challenges.
- AI Agent and Chatbot Development: Create conversational AI agents capable of delivering seamless, human-like interactions, from foundational models to fine-tuning chatbots tailored to client needs.
- Retrieval-Augmented Generation (RAG): Develop and optimize RAG models, enhancing AI’s ability to retrieve and synthesize relevant information for accurate responses.
- Framework Expertise: Leverage the LlamaIndex and LangChain frameworks for building agent-driven applications that interact with large language models (LLMs).
- Data Infrastructure: Manage and utilize data lakes, data warehouses (including Snowflake), and Databricks for large-scale data storage and processing.
- Machine Learning Operations (MLOps): Manage the full lifecycle of machine learning projects, from data preprocessing and feature engineering through model training, evaluation, and deployment, with a solid understanding of MLOps practices.
- Data Analysis & Insights: Conduct advanced data analysis to uncover actionable insights and support data-driven strategies across the organization.
- Cross-Functional Collaboration: Partner with cross-departmental stakeholders to align AI initiatives with business needs, developing scalable AI-driven solutions.
- Mentorship & Leadership: Guide junior data scientists and engineers, fostering innovation, skill growth, and continuous learning within the team.
- Research & Innovation: Stay at the forefront of AI and deep learning advancements, experimenting with new techniques to improve model performance and enhance business value.
- Reporting & Visualization: Develop and present reports, dashboards, and visualizations to effectively communicate findings to both technical and non-technical audiences.
- Cloud-Based AI Deployment: Use AWS Bedrock, including models such as Mistral and Anthropic Claude, to deploy and manage AI models at scale, ensuring optimal performance and reliability.
- Web Framework Integration: Build and deploy AI-powered applications using web frameworks such as Django and Flask, enabling seamless API integration and scalable backend services.

Technical Skills:
- Deep Learning & Machine Learning: Extensive hands-on experience with PyTorch, TensorFlow, and scikit-learn, along with large-scale data processing.
- Programming & Data Engineering: Strong programming skills in Python or R, with knowledge of big data technologies such as Hadoop, Spark, and advanced SQL.
- Data Infrastructure: Proficiency in managing and utilising data lakes, data warehouses, and Databricks for large-scale data processing and storage.
- MLOps & Data Handling: Familiarity with MLOps and experience with data handling tools like pandas and Dask for efficient data manipulation.
- Cloud Computing: Advanced understanding of cloud platforms, especially AWS, for scalable AI/ML model deployment.
- AWS Bedrock: Expertise in deploying models on AWS Bedrock, with models such as Mistral and Anthropic Claude.
- AI Frameworks: Skilled in LlamaIndex and LangChain, with practical experience in agent-based applications.
- Data Visualization: Proficient in visualization tools like Tableau and Power BI for clear data presentation.
- Analytical & Communication Skills: Strong problem-solving abilities with the capability to convey complex technical concepts to diverse audiences.
- Team Collaboration & Leadership: Proven success in collaborative team environments, with experience in mentorship and leading innovative data science projects.

Qualifications:
- Education: Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, Mathematics, Engineering, or a related field.
- Experience: 1 to 3 years specializing in deep learning, including extensive experience in PyTorch and TensorFlow.
- Industry Expertise: Experience in finance, manufacturing, healthcare, or retail sectors.

Advanced AI Knowledge: Familiarity with reinforcement learning, NLP, and generative models.
Location: Ahmedabad
Reporting to: Project Manager

Posted 4 weeks ago

Apply

8.0 years

0 Lacs

India

Remote

Senior Full Stack Data Scientist

We are seeking a Senior Full Stack Data Scientist who can own the entire lifecycle of data-driven solutions, from data collection and transformation to model deployment and ongoing maintenance. This role requires deep technical expertise, as well as strong communication and leadership skills, to ensure that analytics initiatives align with business goals and consistently deliver measurable impact.

About the Team
Our data function is a multidisciplinary group of data scientists, engineers, and analysts working together to produce scalable, high-impact data products. We foster a culture of innovation, collaboration, and continuous learning, using state-of-the-art technologies to tackle real business challenges.

Key Responsibilities
- End-to-End Data Product Ownership: Design and manage full-stack data solutions from data ingestion (ETL/ELT) to model deployment and performance monitoring. Work with business stakeholders to define project scopes, translate ambiguous requirements into actionable data science tasks, and deliver results.
- Advanced Analytics & Machine Learning: Develop and implement statistical and ML models (e.g., predictive modeling, classification, clustering, time-series forecasting). Employ advanced ML techniques such as Bayesian methods, reinforcement learning, or metaheuristics, as needed. Integrate data science workflows with analytics platforms (e.g., Spark, Dask) for large-scale or real-time processing.
- Software Engineering & DevOps: Follow OOP principles and design patterns in Python for clean, maintainable code. Set up CI/CD pipelines, containerization (Docker), and orchestration (Kubernetes) to enable robust, automated deployments. Optimize performance and manage cost on cloud platforms (Azure, AWS, or GCP) by structuring resources effectively.
- Front-End & Visualization: Build or enhance user-facing dashboards using frameworks like Streamlit, Plotly Dash, or enterprise solutions (e.g., Power BI). Present actionable insights in a clear, interactive format that resonates with non-technical audiences.
- Real-Time & Large-Scale Data Architecture: Design pipelines for real-time data streaming (e.g., Kafka, Spark Streaming) where business needs require continuous data updates. Work with data engineers to maintain data lakes or data warehouses, ensuring efficient storage and retrieval for diverse use cases.
- Security & Compliance: Adhere to data governance and regulatory guidelines (GDPR, HIPAA, etc.) relevant to your industry. Implement secure coding practices and access controls to protect sensitive data assets.
- Performance Tuning & Cost Optimization: Continuously monitor, profile, and refine data pipelines and ML models to ensure minimal latency and reduced computational costs. Utilize cloud-native monitoring tools (Azure Monitor, AWS CloudWatch, GCP Stackdriver) for alerts, logging, and budget management.
- Mentorship & Leadership: Provide technical guidance and coaching to junior data scientists and data engineers, encouraging best practices. Lead architecture and design reviews, fostering a culture of quality and collaboration across the data organization.

What We Look For
- Education & Experience: MS or PhD in Computer Science, Statistics, Mathematics, Artificial Intelligence, or a related field. 8+ years of experience delivering end-to-end data solutions in data science, analytics, or machine learning.
- Technical Expertise: Proficiency in Python, with strong skills in OOP, design patterns, and software engineering best practices. Mastery of ML frameworks (e.g., scikit-learn, TensorFlow, PyTorch) and proficiency in NumFOCUS libraries (pandas, NumPy, SciPy). Familiarity with cloud platforms (Azure, AWS, GCP), containerization (Docker), and orchestration (Kubernetes). Experience with version control (Git) and building CI/CD pipelines for data and ML products. Ability to handle large-scale data (Spark, Dask, or similar) and real-time streaming (Kafka, Flink) when required.
- Analytical & Communication Skills: Deep knowledge of statistics, machine learning, and optimization techniques, paired with the ability to convey technical results to diverse audiences. Experience in data visualization and dashboard creation, conveying complex information in an understandable manner. Proven record of collaborating across business, engineering, and product management teams.
- Soft Skills & Mindset: Commitment to continuous learning and staying current with emerging data science and engineering trends. Strong leadership qualities with a knack for mentoring, problem-solving, and managing stakeholder expectations. Flexible and agile approach to adapting in a fast-paced environment and delivering high-quality outputs.

Why Join Us
- Strategic Impact: Work on mission-critical projects where data science informs key decisions and directly influences the bottom line.
- Cutting-Edge Technology: Leverage a modern tech stack and ample freedom to experiment with new tools and methodologies.
- Leadership & Growth: Shape the technical direction of the data organization, mentoring talent and establishing best practices.
- Collaborative Environment: Join a supportive team culture that values shared learning, innovation, and collective problem-solving.
- Work-Life Balance: Enjoy a flexible schedule with remote-friendly policies and competitive compensation.

If you’re passionate about end-to-end data solutions and have the depth of expertise to drive data projects from conception through deployment, we invite you to become our Senior Full Stack Data Scientist. Bring your blend of data science, software engineering, and strategic thinking to propel our organization toward data-driven excellence. Apply now to embark on this exciting journey!
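The predictive-modeling and classification work described above typically follows scikit-learn's fit/predict pattern, which the posting names as a required framework. A minimal, self-contained sketch on synthetic data (the dataset and hyperparameters are invented for illustration, not taken from the role):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for real business data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a baseline model; the hyperparameters are illustrative, not tuned.
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Evaluate on the held-out split before considering deployment.
acc = accuracy_score(y_test, model.predict(X_test))
print(f"holdout accuracy: {acc:.2f}")
```

The same fit/predict interface applies across scikit-learn estimators, which is why postings treat it as a baseline skill before deep learning frameworks.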

Posted 4 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru

On-site

Work Schedule: Standard (Mon-Fri)
Environmental Conditions: Office

Job Summary:
We need a diligent Data Engineer III to join our dedicated data team, contributing to impactful projects.

Key Responsibilities:
- Design, develop, and maintain scalable and robust data pipelines for both batch and real-time processing.
- Extract, transform, and load (ETL) data from a wide variety of structured and unstructured data sources, including: RESTful and SOAP APIs; databases (SQL, NoSQL); cloud storage (e.g., S3, Google Cloud Storage); file formats (e.g., JSON, CSV, XML, Parquet); and web scraping tools where appropriate.
- Build reusable data connectors and integration solutions to automate data ingestion.
- Collaborate with internal collaborators to understand data needs and guarantee data is accessible and user-friendly.
- Monitor and optimize pipeline performance and troubleshoot data flow issues.
- Ensure data governance, security, and quality standards are strictly applied across all pipelines.
- Maintain and scale data warehouse or data lake environments (e.g., Snowflake, Redshift, BigQuery).
- Apply data manipulation and analysis libraries like pandas, Polars, or Dask for efficient handling of large datasets.
- Build data flow and architecture diagrams to visually represent data pipelines, system integrations, and data models, ensuring clarity and alignment among technical and non-technical collaborators.

Requirements:

Technical Skills:
- Proficiency in SQL and at least one programming language (Python, PySpark, Java, Scala).
- Experience with data pipeline and workflow tools (e.g., Apache Airflow, AWS Data Pipeline).
- Knowledge of relational and non-relational databases (e.g., Oracle, SQL Server, MongoDB).
- Strong data modeling and data warehousing skills.

Education & Experience:
- Bachelor’s degree or equivalent experience in Computer Science, Engineering, Information Systems, or a related field (Master’s a plus).
- 5+ years of proven experience in a data engineering or similar role.

Soft Skills:
- Strong analytical and problem-solving abilities.
- Excellent communication and collaboration skills.
- Diligent and proactive approach.
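The extract-transform-load responsibilities above reduce to a loop that can be sketched in pandas, one of the libraries the posting names. This is a generic illustration, not the employer's pipeline: the columns and quality rules are invented, and in production these steps would usually run under an orchestrator such as Airflow.

```python
import io

import pandas as pd

# Extract: read raw records (an in-memory CSV stands in for an API or S3 source).
raw = io.StringIO("order_id,amount,currency\n1,10.5,usd\n2,,usd\n3,7.0,eur\n")
df = pd.read_csv(raw)

# Transform: enforce a quality rule (drop rows missing amounts) and normalize values.
df = df.dropna(subset=["amount"])
df["currency"] = df["currency"].str.upper()

# Load: write out for the warehouse. A columnar format like Parquet is typical
# (df.to_parquet("orders.parquet")); CSV is used here to keep the sketch dependency-free.
print(df.to_csv(index=False))
```

Each stage stays a pure function of its input, which is what makes such pipelines easy to test, monitor, and rerun.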

Posted 1 month ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Work Schedule: Standard (Mon-Fri)
Environmental Conditions: Office

Job Summary:
We need a diligent Data Engineer III to join our dedicated data team, contributing to impactful projects.

Key Responsibilities:
- Design, develop, and maintain scalable and robust data pipelines for both batch and real-time processing.
- Extract, transform, and load (ETL) data from a wide variety of structured and unstructured data sources, including: RESTful and SOAP APIs; databases (SQL, NoSQL); cloud storage (e.g., S3, Google Cloud Storage); file formats (e.g., JSON, CSV, XML, Parquet); and web scraping tools where appropriate.
- Build reusable data connectors and integration solutions to automate data ingestion.
- Collaborate with internal collaborators to understand data needs and guarantee data is accessible and user-friendly.
- Monitor and optimize pipeline performance and troubleshoot data flow issues.
- Ensure data governance, security, and quality standards are strictly applied across all pipelines.
- Maintain and scale data warehouse or data lake environments (e.g., Snowflake, Redshift, BigQuery).
- Apply data manipulation and analysis libraries like pandas, Polars, or Dask for efficient handling of large datasets.
- Build data flow and architecture diagrams to visually represent data pipelines, system integrations, and data models, ensuring clarity and alignment among technical and non-technical collaborators.

Requirements:

Technical Skills:
- Proficiency in SQL and at least one programming language (Python, PySpark, Java, Scala).
- Experience with data pipeline and workflow tools (e.g., Apache Airflow, AWS Data Pipeline).
- Knowledge of relational and non-relational databases (e.g., Oracle, SQL Server, MongoDB).
- Strong data modeling and data warehousing skills.

Education & Experience:
- Bachelor’s degree or equivalent experience in Computer Science, Engineering, Information Systems, or a related field (Master’s a plus).
- 5+ years of proven experience in a data engineering or similar role.

Soft Skills:
- Strong analytical and problem-solving abilities.
- Excellent communication and collaboration skills.
- Diligent and proactive approach.

Posted 1 month ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Job Title: Senior Machine Learning Engineer
Location: [Remote]
Experience Required: 7+ Years

About the Role
We are seeking an experienced and highly skilled Senior Machine Learning Engineer to design, develop, and deploy advanced ML solutions that solve complex business problems. The ideal candidate will have deep expertise in ML algorithms, data processing pipelines, and scalable production deployments, along with strong problem-solving skills and the ability to mentor junior engineers.

Key Responsibilities
- ML Model Development: Design, build, and optimize machine learning models for various business applications such as predictive analytics, NLP, computer vision, recommendation systems, or anomaly detection.
- Data Engineering: Develop robust data ingestion, preprocessing, and feature engineering pipelines using large, complex, and multi-modal datasets.
- Model Deployment & Scalability: Deploy ML models to production environments, ensuring low latency, high availability, and scalability (e.g., using cloud services like AWS SageMaker, GCP AI Platform, or Azure ML).
- Research & Innovation: Stay updated with the latest ML and AI advancements, experiment with cutting-edge algorithms, and recommend their applicability to business needs.
- Collaboration: Work closely with data scientists, product managers, software engineers, and stakeholders to translate requirements into scalable ML solutions.
- Monitoring & Maintenance: Implement monitoring and retraining pipelines to ensure models remain accurate and relevant over time.
- Mentorship: Guide junior team members in best practices, code reviews, and project delivery.

Required Skills & Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Science, Machine Learning, or a related field (Ph.D. is a plus).
- 7+ years of professional experience in ML engineering or applied machine learning.
- Proficiency in Python (and relevant libraries such as TensorFlow, PyTorch, scikit-learn, pandas, NumPy).
- Strong understanding of ML algorithms, deep learning architectures, and statistical modeling.
- Experience with data processing frameworks (Spark, Dask, or equivalent) and SQL/NoSQL databases.
- Hands-on experience deploying ML models to production (REST APIs, microservices, containerization with Docker/Kubernetes).
- Expertise with cloud-based ML platforms (AWS, GCP, or Azure).
- Solid understanding of MLOps principles and tools (MLflow, Kubeflow, Airflow, CI/CD pipelines).
- Strong problem-solving skills with the ability to handle ambiguous requirements.
- Excellent communication and collaboration skills.

Preferred Qualifications
- Experience with NLP, LLMs, or transformer-based architectures.
- Knowledge of reinforcement learning, graph neural networks, or other advanced ML techniques.
- Background in big data analytics and distributed computing.
- Publications, patents, or open-source contributions in the ML/AI field.
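For the "data processing frameworks (Spark, Dask, or equivalent)" requirement, one common Dask pattern is wrapping pipeline steps in dask.delayed so independent work can run in parallel. A minimal sketch; the step functions and data are invented for illustration:

```python
from dask import delayed

# Two illustrative preprocessing steps (the function names are invented).
@delayed
def clean(records):
    """Drop missing values from a batch of records."""
    return [r for r in records if r is not None]

@delayed
def scale(values, factor):
    """Rescale a batch of numeric values."""
    return [v * factor for v in values]

# Calling delayed functions builds a lazy task graph; nothing executes yet.
raw = [1, None, 2, 3]
pipeline = scale(clean(raw), factor=10)

# compute() walks the graph, running independent tasks in parallel where possible.
result = pipeline.compute()
print(result)  # [10, 20, 30]
```

This composes with the dask.dataframe API mentioned elsewhere on the page: both build the same kind of task graph under the hood.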

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

We are seeking an AI/ML Developer to join our team of researchers, data scientists, and developers. Your primary responsibility will be to build cutting-edge AI solutions across sectors such as commerce, agriculture, insurance, financial markets, and procurement, crafting and enhancing machine learning and generative AI models that tackle real-world problems effectively.

Your duties will include, but are not limited to, developing and optimizing ML, NLP, deep learning, and generative AI models. You will explore and deploy state-of-the-art algorithms for both supervised and unsupervised learning, and work extensively with large-scale datasets in distributed environments. You will also understand business processes in order to apply the most suitable ML methodologies, ensure the scalability and performance of ML solutions, collaborate with cross-functional teams, and address intricate data integration and deployment obstacles.

To excel in this role, you must have a solid background in machine learning, deep learning, NLP, and generative AI. Proficiency with frameworks such as TensorFlow, PyTorch, or Hugging Face Transformers is essential, and hands-on experience with LLMs, model fine-tuning, and prompt engineering is highly desirable. Strong programming skills in Python, R, or Scala for ML development are crucial. Familiarity with cloud-based ML platforms such as AWS, Azure, and GCP, as well as experience in big data processing with tools like Spark, Hadoop, or Dask, is advantageous. You should be capable of scaling ML models from prototype to production, possess strong analytical and problem-solving skills, and communicate results effectively through data visualization.

If you are enthusiastic about exploring the frontiers of ML and GenAI, we are excited to receive your application and learn more about how you can contribute to our team.

Posted 1 month ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Responsibilities:
1. Architect and develop scalable AI applications focused on indexing, retrieval systems, and distributed data processing.
2. Collaborate closely with framework engineering, data science, and full-stack teams to deliver an integrated developer experience for building next-generation context-aware applications (i.e., Retrieval-Augmented Generation (RAG)).
3. Design, build, and maintain scalable infrastructure for high-performance indexing, search engines, and vector databases (e.g., Pinecone, Weaviate, FAISS).
4. Implement and optimize large-scale ETL pipelines, ensuring efficient data ingestion, transformation, and indexing workflows.
5. Lead the development of end-to-end indexing pipelines, from data ingestion to API delivery, supporting millions of data points.
6. Deploy and manage containerized services (Docker, Kubernetes) on cloud platforms (AWS, Azure, GCP) via infrastructure-as-code (e.g., Terraform, Pulumi).
7. Collaborate on building and enhancing user-facing APIs that provide developers with advanced data retrieval capabilities.
8. Focus on creating high-performance systems that scale effortlessly, ensuring optimal performance in production environments with massive datasets.
9. Stay updated on the latest advancements in LLMs, indexing techniques, and cloud technologies to integrate them into cutting-edge applications.
10. Drive ML and AI best practices across the organization to ensure scalable, maintainable, and secure AI infrastructure.

Qualifications:

Educational Background:
- Bachelor's or Master’s degree in Computer Science, Data Science, Artificial Intelligence, Machine Learning, or a related field. PhD preferred.
- Certifications in cloud computing (AWS, Azure, GCP) and ML technologies are a plus.

Technical Skills:
1. Expertise in Python and related frameworks (Pydantic, FastAPI, Poetry, etc.) for building scalable AI/ML solutions.
2. Proven experience with indexing technologies: building, managing, and optimizing vector databases (Pinecone, FAISS, Weaviate) and search engines (Elasticsearch, OpenSearch).
3. Machine Learning/AI Development: Hands-on experience with ML frameworks (e.g., PyTorch, TensorFlow) and fine-tuning LLMs for retrieval-based tasks.
4. Cloud Services & Infrastructure: Deep expertise in architecting and deploying scalable, containerized AI/ML services on cloud platforms using Docker, Kubernetes, and infrastructure-as-code tools like Terraform or Pulumi.
5. Data Engineering: Strong understanding of ETL pipelines, distributed data processing (e.g., Apache Spark, Dask), and data orchestration frameworks (e.g., Apache Airflow, Prefect).
6. API Development: Skilled in designing and building RESTful APIs with a focus on user-facing services and seamless integration for developers.
7. Full Stack Engineering: Knowledge of front-end/back-end interactions and how AI models interact with user interfaces.
8. DevOps & MLOps: Experience with CI/CD pipelines, version control (Git), model monitoring, and logging in production environments. Experience with LLMOps tools (LangSmith, MLflow) is a plus.
9. Data Storage: Experience with SQL and NoSQL databases, distributed storage systems, and cloud-native data storage solutions (S3, Google Cloud Storage).
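At its core, the vector-database retrieval this posting centers on (Pinecone, FAISS, Weaviate) is nearest-neighbor search over embeddings. A dependency-light sketch of that idea using cosine similarity in NumPy; the toy "embeddings" are invented, and a real system would use a trained embedding model plus an approximate-nearest-neighbor index:

```python
import numpy as np

# Toy document embeddings (in practice these come from an embedding model).
docs = {
    "doc_a": np.array([1.0, 0.0, 0.0]),
    "doc_b": np.array([0.0, 1.0, 0.0]),
    "doc_c": np.array([0.7, 0.7, 0.0]),
}

def top_k(query, docs, k=2):
    """Rank document names by cosine similarity to the query vector."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(docs, key=lambda name: cos(query, docs[name]), reverse=True)
    return ranked[:k]

query = np.array([0.9, 0.1, 0.0])
print(top_k(query, docs))  # "doc_a" ranks first for this query
```

Vector databases add the production concerns around this kernel: persistence, sub-linear approximate search, filtering, and API access.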

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

Maharashtra

On-site

We are looking for a skilled and enthusiastic Applied AI/ML Engineer to join our team. As an Applied AI/ML Engineer, you will lead the entire process of foundational model development, focusing on cutting-edge generative AI techniques. Your main objective will be to implement data- and compute-efficient learning methods, specifically addressing challenges relevant to the Indian scenario.

Your tasks will involve optimizing model training and inference pipelines, deploying production-ready models, ensuring scalability through distributed systems, and fine-tuning models for domain adaptation. Collaboration with various teams will be essential as you build strong AI stacks and integrate them seamlessly into production pipelines. Beyond research and experimentation, you will play a crucial role in converting advanced models into operational systems that generate tangible results. You will work closely with technical team members and subject matter experts, document technical processes, and maintain well-structured codebases to encourage innovation and reproducibility. This position is perfect for proactive individuals who are passionate about spearheading significant advancements in generative AI and implementing scalable solutions for real-world impact.

Your responsibilities will include:
- Developing and training foundational models across different modalities
- Managing the end-to-end lifecycle of foundational model development, from data curation to model deployment, through collaboration with core team members
- Conducting research to enhance model accuracy and efficiency
- Applying state-of-the-art AI techniques in text/speech and language processing
- Collaborating with cross-functional teams to construct robust AI stacks and smoothly integrate them into production pipelines
- Creating pipelines for debugging, CI/CD, and observability of the development process
- Demonstrating project leadership and offering innovative solutions
- Documenting technical processes, model architectures, and experimental outcomes, while maintaining clear and organized code repositories

To be eligible for this role, you should hold a Bachelor's or Master's degree in a related field and have 2 to 5 years of industry experience in applied AI/ML. Minimum requirements include proficiency in Python programming and familiarity with 3-4 tools from the list below:
- Foundational model libraries and frameworks (TensorFlow, PyTorch, HF Transformers, NeMo, etc.)
- Distributed training (SLURM, Ray, PyTorch DDP, DeepSpeed, NCCL, etc.)
- Inference servers (vLLM)
- Version control systems and observability (Git, DVC, MLflow, W&B, KubeFlow)
- Data analysis and curation tools (Dask, Milvus, Apache Spark, NumPy)
- Text-to-speech tools (Whisper, Voicebox, VALL-E (X), HuBERT/UnitSpeech)
- LLMOps tools, Docker, etc.
- AI application libraries and frameworks (DSPy, LangGraph, LangChain, LlamaIndex, etc.)

Posted 1 month ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Job Title: Senior Machine Learning Engineer
Location: Remote
Experience Required: 7+ Years

About the Role
We are seeking an experienced and highly skilled Senior Machine Learning Engineer to design, develop, and deploy advanced ML solutions that solve complex business problems. The ideal candidate will have deep expertise in ML algorithms, data processing pipelines, and scalable production deployments, along with strong problem-solving skills and the ability to mentor junior engineers.

Key Responsibilities
- ML Model Development: Design, build, and optimize machine learning models for various business applications such as predictive analytics, NLP, computer vision, recommendation systems, or anomaly detection.
- Data Engineering: Develop robust data ingestion, preprocessing, and feature engineering pipelines using large, complex, and multi-modal datasets.
- Model Deployment & Scalability: Deploy ML models to production environments, ensuring low latency, high availability, and scalability (e.g., using cloud services like AWS SageMaker, GCP AI Platform, or Azure ML).
- Research & Innovation: Stay updated with the latest ML and AI advancements, experiment with cutting-edge algorithms, and recommend their applicability to business needs.
- Collaboration: Work closely with data scientists, product managers, software engineers, and stakeholders to translate requirements into scalable ML solutions.
- Monitoring & Maintenance: Implement monitoring and retraining pipelines to ensure models remain accurate and relevant over time.
- Mentorship: Guide junior team members in best practices, code reviews, and project delivery.

Required Skills & Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Science, Machine Learning, or a related field (Ph.D. is a plus).
- 7+ years of professional experience in ML engineering or applied machine learning.
- Proficiency in Python (and relevant libraries such as TensorFlow, PyTorch, scikit-learn, pandas, NumPy).
- Strong understanding of ML algorithms, deep learning architectures, and statistical modeling.
- Experience with data processing frameworks (Spark, Dask, or equivalent) and SQL/NoSQL databases.
- Hands-on experience deploying ML models to production (REST APIs, microservices, containerization with Docker/Kubernetes).
- Expertise with cloud-based ML platforms (AWS, GCP, or Azure).
- Solid understanding of MLOps principles and tools (MLflow, Kubeflow, Airflow, CI/CD pipelines).
- Strong problem-solving skills with the ability to handle ambiguous requirements.
- Excellent communication and collaboration skills.

Posted 1 month ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Who You'll Work With Driving lasting impact and building long-term capabilities with our clients is not easy work. You are the kind of person who thrives in a high performance/high reward culture - doing hard things, picking yourself up when you stumble, and having the resilience to try another way forward. In return for your drive, determination, and curiosity, we'll provide the resources, mentorship, and opportunities you need to become a stronger leader faster than you ever thought possible. Your colleagues—at all levels—will invest deeply in your development, just as much as they invest in delivering exceptional results for clients. Every day, you'll receive apprenticeship, coaching, and exposure that will accelerate your growth in ways you won’t find anywhere else. When you join us, you will have: Continuous learning: Our learning and apprenticeship culture, backed by structured programs, is all about helping you grow while creating an environment where feedback is clear, actionable, and focused on your development. The real magic happens when you take the input from others to heart and embrace the fast-paced learning experience, owning your journey. A voice that matters: From day one, we value your ideas and contributions. You’ll make a tangible impact by offering innovative ideas and practical solutions. We not only encourage diverse perspectives, but they are critical in driving us toward the best possible outcomes. Global community: With colleagues across 65+ countries and over 100 different nationalities, our firm’s diversity fuels creativity and helps us come up with the best solutions for our clients. Plus, you’ll have the opportunity to learn from exceptional colleagues with diverse backgrounds and experiences. World-class benefits: On top of a competitive salary (based on your location, experience, and skills), we provide a comprehensive benefits package to enable holistic well-being for you and your family. 
Your Impact At McKinsey you will work on real-world, high-impact projects across a variety of industries. You will have the opportunity to collaborate with QB/Labs teams and build complex and innovative ML systems to accelerate our work in AI and help solve business problems at speed and scale. You will experience the best environment to grow as a technologist and a leader. You will develop a sought-after perspective connecting technology and business value by working on real-life problems across a variety of industries and technical challenges to serve our clients on their changing needs. You will be surrounded by inspiring individuals as part of diverse and multidisciplinary teams. You will develop a holistic perspective of AI by partnering with the best design, technical, and business talent in the world as your team members. While we advocate for using the right tech for the right task, we often leverage the following technologies: Python, PySpark, the PyData stack, Airflow, Databricks, our own open-source data pipelining framework called Kedro, Dask/RAPIDS, container technologies such as Docker and Kubernetes, cloud solutions such as AWS, GCP, Azure, and more. You will work with other data scientists, data/ML engineers, designers, project managers and business subject matter experts on interdisciplinary projects across various industry sectors to enable business ambitions with data & analytics. You are a highly collaborative individual who is capable of laying aside your own agenda, listening to and learning from colleagues, challenging thoughtfully and prioritizing impact. You search for ways to improve things and work collaboratively with colleagues. You believe in iterative change, experimenting with new approaches, learning and improving to move forward quickly. 
As a Data Scientist, you will: Solve the hardest business problems with our clients in multiple industries worldwide while leading research and development of state-of-the-art machine learning and statistical methods Play a leading role in bringing the latest advances in AI and deep learning to our clients, collaborating with industry executives and QuantumBlack experts to find and execute opportunities to improve business performance using data and advanced machine learning models Identify machine learning R&D initiatives that have high potential of applicability in industry Work with QuantumBlack leadership and client executives to understand business problems and map them to state-of-the-art analytics and AI solutions Work closely with other data scientists, data engineers, machine learning engineers and designers to build end-to-end analytics solutions for our clients that drive real impact in the real world Perhaps most importantly, you will work in one of the most talented and diverse data science teams in the world. Your Qualifications and Skills: Bachelor's or master's degree in a discipline such as computer science, machine learning, applied statistics, mathematics, engineering or artificial intelligence 2+ years of deep technical experience in machine learning, advanced analytics and statistics Advanced programming expertise in Python Proven application of advanced analytical, data science and statistical methods in real-world engagements Knowledge of engineering standards and QA/risk management Experience and expertise in GenAI application development (RAG, agentic flows, etc.) using API integration and orchestration tools such as LangChain, CrewAI, and AutoGen will be an added advantage Willingness to travel both domestically and internationally Good presentation and communication skills, with a knack for explaining complex analytical concepts and insights to technical as well as non-technical audiences
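The GenAI/RAG experience listed above centers on retrieval: fetch the documents most relevant to a question, then hand them to an LLM as context. A minimal, framework-free sketch of that retrieval step (the corpus and query are invented placeholders, not from any real engagement):

```python
# Minimal sketch of the retrieval step in a RAG pipeline: rank documents
# by cosine similarity of bag-of-words vectors, then pass the best match
# to an LLM as context. Corpus and query below are illustrative placeholders.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

docs = [
    "churn model retrained weekly on customer transactions",
    "warehouse stock levels are synced nightly",
]
best = retrieve("when is the churn model retrained", docs)[0]
# The retrieved text would be injected into the LLM prompt as context.
prompt = f"Answer using this context:\n{best}\n\nQ: when is the churn model retrained?"
```

Frameworks such as LangChain replace the hand-rolled scoring here with embedding models and vector stores, but the retrieve-then-prompt shape is the same.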

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

You will be joining Infosys Finacle, a product of EdgeVerve Systems (a subsidiary of Infosys), a global leader in digital banking solutions. In this role as a Data Scientist AI / RPA, you will be responsible for applying statistical methods and machine learning algorithms to large datasets to derive insights and predictions using Python/R packages. You should be adept with data-science packages such as scikit-learn and data-manipulation packages such as pandas and Dask. You should have experience in various modeling techniques including classification, regression, time series analysis, deep learning, text mining, and NLP. Additionally, you will analyze problem statements and devise solutions by creating Proofs of Concept (POCs), collecting relevant data, and cleansing data, and you should possess basic knowledge of writing SQL queries. To excel in this role, you must have a basic understanding of big data concepts such as Hadoop and Hive, as well as experience in ML programming and strong analytical skills. The qualifications required for this position include a B.Tech, M.Tech, MCA, or BE degree. As part of the team, you will have the opportunity to work with cutting-edge technologies and contribute to the digital transformation of financial institutions globally. Infosys Finacle is committed to diversity and inclusivity, providing an equal opportunity workplace for all individuals.
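The scikit-learn library named above standardizes models behind a fit/predict interface. A stdlib-only sketch of that pattern, as a toy nearest-centroid classifier (the feature values and labels are invented placeholders):

```python
# Sketch of the fit/predict pattern used by scikit-learn estimators,
# implemented as a tiny nearest-centroid classifier with stdlib only.
import math

class NearestCentroid:
    def fit(self, X, y):
        # Average the feature vectors of each class to get its centroid.
        sums, counts = {}, {}
        for row, label in zip(X, y):
            acc = sums.setdefault(label, [0.0] * len(row))
            for i, v in enumerate(row):
                acc[i] += v
            counts[label] = counts.get(label, 0) + 1
        self.centroids_ = {
            label: [v / counts[label] for v in acc] for label, acc in sums.items()
        }
        return self

    def predict(self, X):
        # Assign each row to the class whose centroid is closest.
        return [
            min(self.centroids_, key=lambda lbl: math.dist(row, self.centroids_[lbl]))
            for row in X
        ]

X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y = ["no_default", "no_default", "default", "default"]
model = NearestCentroid().fit(X, y)
preds = model.predict([[0.15, 0.15], [0.85, 0.85]])
```

In real work the class would be replaced by an sklearn estimator (e.g. `LogisticRegression`) with the same `fit`/`predict` calls.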

Posted 1 month ago

Apply

1.0 - 5.0 years

0 Lacs

Karnataka

On-site

As a Data Engineer at Zebra, you will play a crucial role in understanding the technical requirements of clients and designing and constructing data pipelines to fulfill those requirements. In addition to developing solutions, you will also supervise the development of other engineers. Strong verbal and written communication skills are necessary to effectively interact with clients and internal teams. The role demands a solid understanding of databases, SQL, cloud technologies, ecosystems like Databricks and GCP, version control tools such as GitHub, and modern data integration and orchestration tools like Databricks workflows on Azure/GCP and Airflow. Knowledge of agentic AI frameworks like LangChain or tools like CrewAI is considered a plus. Your responsibilities will include: actively participating in the design and implementation of data platforms for AI products; constructing productized and parameterized data pipelines that fuel AI products using GPUs and CPUs; writing efficient data transformation code in Spark (PySpark, Spark SQL) and Dask; developing workflows to automate data pipelines using Python; creating data validation tests to evaluate input data quality; adhering to best practices of unit testing and integration testing; conducting performance testing and profiling of code; building data pipeline frameworks to automate high-volume and real-time data delivery; operationalizing scalable data pipelines to support data science and advanced analytics; optimizing customer data science workloads; and managing cloud services costs and utilization. Knowledge of the Retail/CPG domain is advantageous. For qualifications, a minimum of a Bachelor's, Master's, or Ph.D. degree in Computer Science or Engineering is required.
You should have at least 1+ years of experience programming in languages like Python, Scala, or Go, 1+ years of experience in SQL and data transformation, 1+ years of experience developing distributed systems with open-source technologies like Spark and Dask, and 1+ years of experience with relational or NoSQL databases in Linux environments (MySQL, MariaDB, PostgreSQL, MongoDB, Redis). Essential competencies include experience in AWS/Azure/GCP environments, data models in the retail and consumer-products industry, and agile projects, along with the ability to learn new technologies quickly, excellent verbal and written communication skills, comfort working in innovative and fast-paced environments, collaboration in diverse teams, teleworking, and a track record of achieving stretch goals. Please note that to protect candidates from fraudulent activity, Zebra's recruiters will only contact you via @zebra.com email accounts. Applications are exclusively accepted through the applicant tracking system, and personal identifying information is only collected through that system. The Talent Acquisition team will not request personal information via email or outside the system. If you encounter identity theft issues, contact your local police department.
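The data-validation tests this role describes can be as simple as per-batch quality rules that run before any transformation. A minimal stdlib sketch (the field names and rules are hypothetical, not Zebra's actual schema):

```python
# Sketch of a pipeline data-validation step: reject rows that fail simple
# presence and range rules before transformation runs. Field names are
# illustrative placeholders.
REQUIRED = ("sku", "store_id", "units_sold")

def validate_batch(rows):
    """Return (good_rows, errors); an empty error list means the batch passes."""
    good, errors = [], []
    for i, row in enumerate(rows):
        missing = [f for f in REQUIRED if row.get(f) in (None, "")]
        if missing:
            errors.append(f"row {i}: missing {missing}")
        elif row["units_sold"] < 0:
            errors.append(f"row {i}: negative units_sold")
        else:
            good.append(row)
    return good, errors

batch = [
    {"sku": "A1", "store_id": 7, "units_sold": 3},
    {"sku": "A2", "store_id": 7, "units_sold": -1},  # fails the range rule
    {"sku": "", "store_id": 9, "units_sold": 5},     # fails the presence rule
]
good, errors = validate_batch(batch)
```

In a Spark or Dask pipeline the same rules would run as distributed filters, with the error list routed to a quarantine table for review.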

Posted 1 month ago

Apply

5.0 years

0 Lacs

India

Remote

Job Description: Machine Learning Engineer Role: Machine Learning Engineer Job Location: Remote (Occasional travel to Chennai) Experience: 5-8 years About the Role We are seeking a highly skilled and mathematically grounded Machine Learning Engineer to join our AI team. The ideal candidate will have 5+ years of ML experience with a deep understanding of machine learning algorithms, statistical modeling, and optimization techniques, along with hands-on experience in building scalable ML systems using modern frameworks and tools. Key Responsibilities: Design, develop, and deploy machine learning models for real-world applications. Collaborate with data scientists, software engineers, and product teams to integrate ML solutions into production systems. Understand the mathematics behind machine learning algorithms to effectively implement and optimize them. Conduct mathematical analysis of algorithms to ensure robustness, efficiency, and scalability. Optimize model performance through hyperparameter tuning, feature engineering, and algorithmic improvements. Stay updated with the latest research in machine learning and apply relevant findings to ongoing projects. Required Qualifications Mathematics & Theoretical Foundations Strong foundation in Linear Algebra (e.g., matrix operations, eigenvalues, SVD). Proficiency in Probability and Statistics (e.g., Bayesian inference, hypothesis testing, distributions). Solid understanding of Calculus (e.g., gradients, partial derivatives, optimization). Knowledge of Numerical Methods and Convex Optimization. Familiarity with Information Theory, Graph Theory, or Statistical Learning Theory is a plus.
Programming & Software Skills Proficient in Python (preferred), with experience in libraries such as NumPy, Pandas, Scikit-learn, Matplotlib, and Seaborn Experience with deep learning frameworks: TensorFlow, PyTorch, Keras, or JAX Familiarity with MLOps tools: MLflow, Kubeflow, Airflow, Docker, Kubernetes Experience with cloud platforms (AWS, GCP, Azure) for model deployment. Machine Learning Expertise Hands-on experience with supervised, unsupervised, and reinforcement learning. Understanding of model evaluation metrics and validation techniques. Experience with large-scale data processing (e.g., Spark, Dask) is a plus. Preferred Qualifications Master's or Ph.D. in Computer Science, Mathematics, Statistics, or a related field. Publications or contributions to open-source ML projects. Experience with LLMs, transformers, or generative models. Integers.Ai is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
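The hyperparameter tuning named in the responsibilities above is, at its simplest, an exhaustive search over a parameter grid, keeping the configuration with the best validation score. A stdlib sketch, with a toy scoring function standing in for real cross-validation:

```python
# Sketch of grid-search hyperparameter tuning: try every combination in the
# grid and keep the best-scoring parameters. The score function is a toy
# stand-in for real cross-validation.
import itertools

def grid_search(score_fn, grid):
    best_params, best_score = None, float("-inf")
    keys = list(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        s = score_fn(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

# Toy validation score that peaks at lr=0.1, depth=3 (placeholder for CV).
def score(p):
    return -((p["lr"] - 0.1) ** 2) - ((p["depth"] - 3) ** 2)

grid = {"lr": [0.01, 0.1, 1.0], "depth": [2, 3, 5]}
best, _ = grid_search(score, grid)
```

scikit-learn's `GridSearchCV` wraps exactly this loop around a cross-validated estimator; random and Bayesian search trade exhaustiveness for speed on larger grids.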

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We are seeking a highly skilled and motivated Senior DS/ML Engineer to join our team. The role is critical to the development of a cutting-edge reporting, insights, and recommendations platform designed to measure and optimize online marketing campaigns, and calls for a strong foundation in data engineering (ELT, data pipelines) and advanced machine learning to develop and deploy sophisticated models. The role focuses on building scalable data pipelines, developing ML models, and deploying solutions in production. The ideal candidate should be comfortable working across data engineering, the ML model lifecycle, and cloud-native technologies. Job Description: Key Responsibilities: Data Engineering & Pipeline Development Build and maintain scalable ELT pipelines for ingesting, transforming, and processing large-scale marketing campaign data. Ensure high data quality, integrity, and governance using orchestration tools like Apache Airflow, Google Cloud Composer, or Prefect. Optimize data storage, retrieval, and processing using BigQuery, Dataflow, and Spark for both batch and real-time workloads. Implement data modeling and feature engineering for ML use cases. Machine Learning Model Development & Validation Develop and validate predictive and prescriptive ML models to enhance marketing campaign measurement and optimization. Experiment with different algorithms (regression, classification, clustering, reinforcement learning) to drive insights and recommendations. Leverage NLP, time-series forecasting, and causal inference models to improve campaign attribution and performance analysis. Optimize models for scalability, efficiency, and interpretability. MLOps & Model Deployment Deploy and monitor ML models in production using tools such as Vertex AI, MLflow, Kubeflow, or TensorFlow Serving.
Implement CI/CD pipelines for ML models, ensuring seamless updates and retraining. Develop real-time inference solutions and integrate ML models into BI dashboards and reporting platforms. Cloud & Infrastructure Optimization Design cloud-native data processing solutions on Google Cloud Platform (GCP), leveraging services such as BigQuery, Cloud Storage, Cloud Functions, Pub/Sub, and Dataflow. Work on containerized deployment (Docker, Kubernetes) for scalable model inference. Implement cost-efficient, serverless data solutions where applicable. Business Impact & Cross-functional Collaboration Work closely with data analysts, marketing teams, and software engineers to align ML and data solutions with business objectives. Translate complex model insights into actionable business recommendations. Present findings and performance metrics to both technical and non-technical stakeholders. Qualifications & Skills: Educational Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Artificial Intelligence, Statistics, or a related field. Certifications in Google Cloud (Professional Data Engineer, ML Engineer) are a plus. Must-Have Skills: Experience: 3-5 years of relevant hands-on experience with the skills listed here. Data Engineering: Experience with ETL/ELT pipelines, data ingestion, transformation, and orchestration (Airflow, Dataflow, Composer). ML Model Development: Strong grasp of statistical modeling, supervised/unsupervised learning, time-series forecasting, and NLP. Programming: Proficiency in Python (Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch) and SQL for large-scale data processing. Cloud & Infrastructure: Expertise in GCP (BigQuery, Vertex AI, Dataflow, Pub/Sub, Cloud Storage) or equivalent cloud platforms. MLOps & Deployment: Hands-on experience with CI/CD pipelines, model monitoring, and version control (MLflow, Kubeflow, Vertex AI, or similar tools).
Data Warehousing & Real-time Processing: Strong knowledge of modern data platforms for batch and streaming data processing. Nice-to-Have Skills: Experience with Graph ML, reinforcement learning, or causal inference modeling. Working knowledge of BI tools (Looker, Tableau, Power BI) for integrating ML insights into dashboards. Familiarity with marketing analytics, attribution modeling, and A/B testing methodologies. Experience with distributed computing frameworks (Spark, Dask, Ray). Location: Bengaluru Brand: Merkle Time Type: Full time Contract Type: Permanent
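The time-series forecasting mentioned in this role is usually benchmarked against a naive baseline before any serious model is fitted. A sketch of a rolling moving-average forecast over invented daily impression counts (window and horizon are arbitrary choices for illustration):

```python
# Sketch of a naive time-series baseline: forecast future points as the
# mean of the last `window` observations, rolling forecasts forward.
# The impression series is an invented placeholder.
def moving_average_forecast(series, window=3, horizon=2):
    """Forecast `horizon` future points from the trailing `window` mean."""
    history = list(series)
    forecasts = []
    for _ in range(horizon):
        nxt = sum(history[-window:]) / window
        forecasts.append(nxt)
        history.append(nxt)  # feed each forecast back in for the next step
    return forecasts

impressions = [100, 110, 120, 130, 140, 150]  # daily campaign impressions
preds = moving_average_forecast(impressions, window=3, horizon=2)
```

A proper model (ARIMA, Prophet, gradient boosting on lag features) only earns its complexity if it beats a baseline like this on held-out data.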

Posted 1 month ago

Apply

6.0 years

0 Lacs

India

On-site

What you'll do: Design and deploy data-driven solutions for manufacturing customers using Sparsa's agentic SDK and ERP integrations (SAP, Oracle, MES systems) Build translation layers and adapters to convert customer data pipelines into common schemas that our agents can work with Monitor agent performance and create impact reports showing measurable improvements to present to customers Serve as primary feedback channel between customer implementations and AI team, translating real-world challenges into technical requirements Work directly with enterprise customers during onboarding, solution deployment, and technical workshops for manufacturing stakeholders Perform advanced data analysis on manufacturing datasets to identify optimization opportunities and extract actionable insights from production data What we're looking for: 4–6 years of experience as Data Analyst/Scientist or Solutions Engineer Strong software engineering fundamentals with excellent Python proficiency Strong data science skills (pandas, numpy, matplotlib, polars, dask, etc.) with SQL expertise for complex data manipulation Data pipeline and ETL experience for enterprise data integration and workflow orchestration Customer-facing technical skills with proven track record presenting complex solutions to technical stakeholders Solutioning abilities - translating business problems into technical solutions with measurable outcomes Bonus: experience in manufacturing/industrial environments, hands-on experience with ERP systems (SAP, Oracle), MES products (Siemens, GE Digital), knowledge of CRM systems (Salesforce, HubSpot), manufacturing domain knowledge (supply chain, production planning, quality systems) Benefits A pivotal engineering role at a cutting-edge AI company with operations across Asia and Europe. High ownership of end-to-end ML application development. The platform, resources, and autonomy to design, deploy, and scale AI solutions that transform real-world industries globally. 
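The "translation layers and adapters" described above amount to mapping each source system's field names onto one common schema so downstream agents see uniform records. A minimal sketch (the SAP/Oracle field names are illustrative and not guaranteed to match real ERP tables):

```python
# Sketch of a schema-translation adapter: per-source field mappings convert
# raw ERP records into one common schema, with a check that nothing
# required is missing. Field names are illustrative placeholders.
COMMON_FIELDS = ("order_id", "material", "quantity")

ADAPTERS = {
    "sap": {"VBELN": "order_id", "MATNR": "material", "KWMENG": "quantity"},
    "oracle": {"ORDER_NO": "order_id", "ITEM_CODE": "material", "QTY": "quantity"},
}

def to_common(record, source):
    mapping = ADAPTERS[source]
    out = {mapping[k]: v for k, v in record.items() if k in mapping}
    missing = [f for f in COMMON_FIELDS if f not in out]
    if missing:
        raise ValueError(f"{source} record missing {missing}")
    return out

sap_row = {"VBELN": "0001", "MATNR": "M-42", "KWMENG": 10}
ora_row = {"ORDER_NO": "A-7", "ITEM_CODE": "M-42", "QTY": 4}
rows = [to_common(sap_row, "sap"), to_common(ora_row, "oracle")]
```

Keeping the mapping as data rather than code means onboarding a new customer system is a configuration change, not a rewrite.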
Join Us at Sparsa AI If you are passionate about building transformative products at the intersection of AI and industrial operations, we invite you to shape the future with us. This is your opportunity to learn and execute in a fast-growing company that is redefining how the real economy works. At Sparsa AI, you'll work alongside an exceptional team, solve real-world problems, and leave a lasting impact on global industries. Let’s build the future of Industrial AI-Agents—together. If you have the chops, let’s connect!

Posted 1 month ago

Apply

10.0 - 14.0 years

0 Lacs

Vadodara, Gujarat

On-site

As a Lead Data Engineer at Rearc, you will play a crucial role in establishing and maintaining technical excellence within our data engineering team. Your extensive experience in data architecture, ETL processes, and data modeling will be key in optimizing data workflows for efficiency, scalability, and reliability. Collaborating closely with cross-functional teams, you will design and implement robust data solutions that align with business objectives and adhere to best practices in data management. Building strong partnerships with technical teams and stakeholders is essential as you drive data-driven initiatives and ensure their successful implementation. With over 10 years of experience in data engineering or related fields, you bring a wealth of expertise in managing and optimizing data pipelines and architectures. Your proficiency in Java and/or Python, along with experience in data pipeline orchestration using platforms like Airflow, Databricks, DBT, or AWS Glue, will be invaluable. Hands-on experience with data analysis tools and libraries such as Pyspark, NumPy, Pandas, or Dask is required, while proficiency with Spark and Databricks is highly desirable. Your proven track record of leading complex data engineering projects, coupled with hands-on experience in ETL processes, data warehousing, and data modeling tools, enables you to deliver efficient and robust data pipelines. You possess in-depth knowledge of data integration tools and best practices, as well as a strong understanding of cloud-based data services and technologies like AWS Redshift, Azure Synapse Analytics, and Google BigQuery. Your strategic and analytical skills will enable you to solve intricate data challenges and drive data-driven decision-making. 
In this role, you will collaborate with stakeholders to understand data requirements and challenges, implement data solutions with a DataOps mindset using modern tools and frameworks, lead data engineering projects, mentor junior team members, and promote knowledge sharing through technical blogs and articles. Your exceptional communication and interpersonal skills will facilitate collaboration with cross-functional teams and effective stakeholder engagement at all levels. At Rearc, we empower engineers to build innovative products and experiences by providing them with the best tools possible. If you are a cloud professional with a passion for problem-solving and a desire to make a difference, join us in our mission to solve problems and drive innovation in the field of data engineering.
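Orchestrators like Airflow, named in this role, model a pipeline as a DAG of tasks and run each task only after its upstream dependencies finish. A stdlib sketch of that scheduling idea, with placeholder task names:

```python
# Sketch of DAG-based pipeline scheduling (the core idea behind Airflow):
# topologically sort tasks so each runs only after its dependencies.
# Task names are illustrative placeholders.
from graphlib import TopologicalSorter

# task -> set of tasks it depends on
dag = {
    "extract": set(),
    "transform": {"extract"},
    "validate": {"transform"},
    "load": {"validate"},
    "report": {"load", "validate"},
}

order = list(TopologicalSorter(dag).static_order())
```

Airflow adds scheduling, retries, and distributed execution on top, but the dependency resolution is the same topological ordering.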

Posted 1 month ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Category: Engineering. Experience: Director. Primary Address: Bangalore, Karnataka (Voyager 94001, India). Role: Distinguished Engineer - Machine Learning Engineering. At Capital One India, we work in a fast paced and intellectually rigorous environment to solve fundamental business problems at scale. Using advanced analytics, data science and machine learning, we derive valuable insights about product and process design, consumer behavior, regulatory and credit risk, and more from large volumes of data, and use it to build cutting edge patentable products that drive the business forward. We're looking for a Distinguished Engineer - Machine Learning Engineering to join the Machine Learning Experience (MLX) team! As a Capital One Machine Learning Engineer (MLE), you'll be part of a team focusing on observability and model governance automation. You will work with model training, feature, and serving metadata at scale to enable automated model governance decisions and to build a model observability platform. You will contribute to building a system to do this for Capital One models, accelerating the move from fully trained models to deployable model artifacts ready to be used to fuel business decisioning, and you will build an observability platform to monitor the models and platform components. The MLX team is at the forefront of how Capital One builds and deploys well-managed ML models and features. We onboard and educate associates on the ML platforms and products that the whole company uses. We drive new innovation and research and we're working to seamlessly infuse ML into the fabric of the company. The ML experience we're creating today is the foundation that enables each of our businesses to deliver next-generation ML-driven products and services for our customers.
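One concrete signal a model-observability platform like the one described above might compute is score drift. A sketch of the Population Stability Index (PSI), a common drift metric, over invented baseline and live score samples:

```python
# Sketch of a model-observability drift check: the Population Stability
# Index (PSI) compares a model's live score distribution against its
# training baseline. Bins and sample values are illustrative placeholders.
import math

def psi(expected, actual, bins=4):
    """PSI between two samples using equal-width bins over the expected range."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / width), bins - 1)
            counts[max(idx, 0)] += 1
        # Small epsilon keeps empty bins from producing log(0).
        return [(c + 1e-6) / (len(sample) + 1e-6 * bins) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]   # training scores
stable = [0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.7, 0.8]  # live, similar shape
shifted = [0.7, 0.75, 0.8, 0.85, 0.8, 0.9, 0.95, 0.8]    # live, drifted high
```

A platform would compute this per model per day and alert when PSI crosses a threshold (a rule of thumb often cited is 0.2 for significant drift).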
What You’ll Do Work with model and platform teams to build systems that ingest large amounts of model and feature metadata and runtime metrics to build an observability platform and to make governance decisions. Partner with product and design teams to build elegant and scalable solutions to speed up model governance observability Collaborate as part of a cross-functional Agile team to create and enhance software that enables state of the art, next generation big data and machine learning applications. Leverage cloud-based architectures and technologies to deliver optimized ML models at scale Construct optimized data pipelines to feed machine learning models. Use programming languages like Python, Scala, or Java Leverage continuous integration and continuous deployment best practices, including test automation and monitoring, to ensure successful deployments of machine learning models and application code. Basic Qualifications Master's Degree in Computer Science or a related field At least 15 years of experience in software engineering or solution architecture At least 10 years of experience designing and building data intensive solutions using distributed computing At least 10 years of experience programming with Python, Go, or Java At least 8 years of on-the-job experience with an industry recognized ML framework such as scikit-learn, PyTorch, Dask, Spark, or TensorFlow At least 5 years of experience productionizing, monitoring, and maintaining models Preferred Qualifications Master’s Degree or PhD in Computer Science, Electrical Engineering, Mathematics, or a similar field 5+ years of experience building, scaling, and optimizing ML systems 5+ years of experience with data gathering and preparation for ML models 10+ years of experience developing performant, resilient, and maintainable code. 
Experience developing and deploying ML solutions in a public cloud such as AWS, Azure, or Google Cloud Platform 5+ years of experience with distributed file systems or multi-node database paradigms. Contributed to open source ML software Authored/co-authored a paper on an ML technique, model, or proof of concept 5+ years of experience building production-ready data pipelines that feed ML models. Experience designing, implementing, and scaling complex data pipelines for ML models and evaluating their performance 5+ years of experience in MLOps using either open-source tools like MLflow or commercial tools 2+ years of experience developing applications using generative AI, i.e., open-source or commercial LLMs No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections 4901-4920; New York City's Fair Chance Act; Philadelphia's Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries. If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at RecruitingAccommodation@capitalone.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations.
For technical support or questions about Capital One's recruiting process, please send an email to Careers@capitalone.com Capital One does not provide, endorse nor guarantee and is not liable for third-party products, services, educational tools or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC). How We Hire We take finding great coworkers pretty seriously. Step 1 Apply It only takes a few minutes to complete our application and assessment. Step 2 Screen and Schedule If your application is a good match you'll hear from one of our recruiters to set up a screening interview. Step 3 Interview(s) Now's your chance to learn about the job, show us who you are, share why you would be a great addition to the team and determine if Capital One is the place for you. Step 4 Decision The team will discuss — if it's a good fit for us and you, we'll make it official! How to Pick the Perfect Career Opportunity Overwhelmed by a tough career choice? Read these tips from Devon Rollins, Senior Director of Cyber Intelligence, to help you accept the right offer with confidence. Your wellbeing is our priority Our benefits and total compensation package is designed for the whole person. Caring for both you and your family. Healthy Body, Healthy Mind You have options and we have the tools to help you decide which health plans best fit your needs.
Save Money, Make Money Secure your present, plan for your future and reduce expenses along the way. Time, Family and Advice Options for your time, opportunities for your family, and advice along the way. It’s time to BeWell. Career Journey Here’s how the team fits together. We’re big on growth and knowing who and how coworkers can best support you.

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

We have an urgent requirement for Python Developers (3-5 years of exp). Job Description: Our client's data platform consists of a series of Python microservices integrated through REST and RabbitMQ. We are actively searching for an enthusiastic Core Python Developer to become a valuable member of our vibrant team. The selected individual will play a pivotal role in advancing our platform, making it more feature-rich, robust, streamlined, and performant by employing innovative architectural and development principles. Responsibilities: Python Development: Write clean, maintainable, and efficient Python code in line with common Pythonic principles. Develop and maintain Python microservices, ensuring seamless integration with our existing platform stack. Utilize common Python data libraries, such as Pandas, Polars, NumPy, and SciPy for data manipulation and analysis tasks. Familiarity with asynchronous programming in Python using libraries like asyncio or Dask and understanding of concurrency and parallelism concepts. Write unit tests for developed code using pytest to ensure functionality and reliability. Collaborate with the QA team to ensure comprehensive test coverage. API Development & Integration: Design and develop RESTful APIs using frameworks like FastAPI and Flask. Ensure smooth communication between microservices via REST and message brokers like RabbitMQ. Messaging/Caching Systems: Understand and work with pub/sub architectures and message brokers including RabbitMQ and Kafka. Implement and manage caching solutions using Redis to enhance application performance. Version Control: Use Git for source code management, adhering to best practices for branching, merging, and collaborative development. Database Operations: Work with database technologies such as PostgreSQL and SQLite, understanding schema design, querying, and optimization.
Technical Skills Required: Bachelor's degree in Computer Science, Engineering, or a related field. A minimum of 3-5 years as a Python Developer with a solid understanding of the Python language and its best practices, adhering to common Pythonic principles. Development experience within the paradigms of microservices, cloud technologies and modern containerization platforms, e.g. AWS, Azure, GCP, Docker, and Kubernetes. Proficient in implementing and managing Redis as an in-memory data structure store, used for caching, session management, and real-time analytics. Familiarity with Redis data types, such as strings, lists, sets, and hashes, and their appropriate use cases. Strong understanding of RabbitMQ as a message broker, facilitating asynchronous processing and inter-service communication. Familiarity with RabbitMQ's exchange types, routing, and queue bindings, and the ability to troubleshoot common RabbitMQ issues. Proficient in writing tests using libraries like pytest or unittest to ensure code reliability and functionality. Experience with Object-Relational Mapping tools like SQLAlchemy or Django ORM, simplifying database operations and queries. Strong understanding of relational database concepts, with hands-on experience in designing, querying, and managing data using PostgreSQL, SQLite and cloud data warehouses. Familiarity with normalization, indexing, and optimization techniques to ensure efficient data retrieval and storage. Experience in developing applications using frameworks like FastAPI, Flask or Django to simplify tasks like routing, database operations, and security authentication/authorization flows. Familiarity with tools like Jenkins, Travis CI, or GitHub Actions to automate the building, testing, and deployment of applications throughout the CI/CD lifecycle. Ability to proactively identify challenges and bottlenecks, employing strong troubleshooting skills to address them. Soft Skills Required: Strong verbal and written communication skills.
Energetic, self-directed, and comfortable in a fast-paced environment. Team player with good interpersonal skills and quick to learn.
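The asynchronous programming this role calls out typically means overlapping I/O-bound calls with asyncio rather than awaiting them one by one. A minimal sketch, with simulated service calls standing in for real HTTP or queue I/O (the service names are invented):

```python
# Sketch of asyncio-based concurrency in a microservice: gather several
# I/O-bound calls so total latency is ~max(delay), not the sum. The
# sleeps stand in for real network round trips; service names are placeholders.
import asyncio

async def fetch(service: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stands in for an HTTP/RabbitMQ round trip
    return f"{service}: ok"

async def main() -> list[str]:
    # All three "requests" run concurrently; gather preserves argument order.
    return await asyncio.gather(
        fetch("pricing", 0.02),
        fetch("inventory", 0.01),
        fetch("orders", 0.03),
    )

results = asyncio.run(main())
```

In a FastAPI service the framework runs the event loop for you; handlers declared `async def` get this overlap for free when they await multiple downstream calls.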

Posted 1 month ago

Apply

2.0 years

8 - 9 Lacs

Bengaluru

On-site

About this role: Wells Fargo is seeking a Risk Analytics Consultant. In this role, you will: Participate in less complex analysis and modeling initiatives, and identify opportunities for process production, data reconciliation, and model documentation improvements within Risk Management Review and analyze programming models to extract data, and manipulate databases to provide statistical and financial modeling, and exercise independent judgment to guide new and existing projects with medium risk deliverables Coordinate and consolidate the production of monthly, quarterly, and annual performance reports for more experienced management Present recommendations for resolving data reconciliation, production, and database issues Exercise independent judgment while developing expertise in policy governance, risk projects, and regulatory requests Collaborate and consult with peers, managers, experienced managers, compliance, including various lines of business Required Qualifications: 2+ years of Risk Analytics experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education Desired Qualifications: 2+ years of experience as a Python Developer on strong projects Bachelor's or Master's degree in Computer Science, Software Engineering, Data Science, or a related quantitative field. In-depth understanding of the Python software development stacks, ecosystems, frameworks, and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn and PyTorch. Experience with popular Python frameworks such as Django, Flask or others. Good database skills including SQL. Excellent problem-solving ability with solid communication and collaboration skills. Self-starter, takes initiative, and has a strong desire to learn and grow. Proficiency in data science, machine learning, and NLP tools. Understanding of generative AI concepts. Financial industry experience, especially in risk management, is a plus.
- Strong communication skills
- Experience in large, global financial services organizations preferred
- Bachelor's or Master's in engineering or a relevant field

Job Expectations:
- Experienced in developing professional relationships with business leaders and colleagues in the financial sector
- Leverage advanced analytics and AI to transform risk management reporting processes
- Develop automated solutions to streamline report generation, improve data accuracy, and uncover deeper insights from complex market and counterparty risk data
- Work collaboratively across risk management, credit, and technology departments to identify opportunities for automation, optimize data pipelines, and ensure that reporting aligns with evolving regulatory and business requirements
- Design innovative dashboards and integrate AI-driven analytics into regular reporting cycles
- Maintain robust data governance standards to ensure the integrity and security of risk data
- Adopt machine learning and natural language processing to enable more timely, actionable, and insightful reporting that empowers stakeholders to make better-informed decisions
- Translate business questions into data-driven solutions, deploy scalable AI models, and communicate findings to both technical and non-technical audiences
- Design and implement machine learning models and algorithms to address business challenges within financial risk management
- Participate in regular meetings and provide AI/ML-related updates
- Collaborate with technology teams to deliver AI and machine learning solutions, handling documentation, testing, and integration
- Able to work under pressure while handling large volumes of information and multiple priorities

Posting End Date: 5 Aug 2025
*Job posting may come down early due to volume of applicants.

We Value Equal Opportunity
Wells Fargo is an equal opportunity employer.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other legally protected characteristic.

Employees support our focus on building strong customer relationships balanced with a strong risk mitigating and compliance-driven culture which firmly establishes those disciplines as critical to the success of our customers and company. They are accountable for execution of all applicable risk programs (Credit, Market, Financial Crimes, Operational, Regulatory Compliance), which includes effectively following and adhering to applicable Wells Fargo policies and procedures, appropriately fulfilling risk and compliance obligations, timely and effective escalation and remediation of issues, and making sound risk decisions. There is emphasis on proactive monitoring, governance, risk identification and escalation, as well as making sound risk decisions commensurate with the business unit's risk appetite and all risk and compliance program requirements.

Candidates applying to job openings posted in Canada: Applications for employment are encouraged from all qualified candidates, including women, persons with disabilities, aboriginal peoples and visible minorities. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process.

Applicants with Disabilities: To request a medical accommodation during the application or interview process, visit Disability Inclusion at Wells Fargo.

Drug and Alcohol Policy: Wells Fargo maintains a drug free workplace. Please see our Drug and Alcohol Policy to learn more.

Wells Fargo Recruitment and Hiring Requirements:
a. Third-Party recordings are prohibited unless authorized by Wells Fargo.
b. Wells Fargo requires you to directly represent your own experiences during the recruiting and hiring process.

Posted 1 month ago

Apply

6.0 years

0 Lacs

Jamshedpur, Jharkhand, India

Remote

🌟 Position Overview
We are hiring an experienced Python Developer with deep expertise in AI automation, workflow orchestration, and custom model development to build next-generation healthcare applications. This role is ideal for developers who love training, modifying, and deploying production-grade AI systems while navigating the challenges of compliance, data governance, and real-time performance in healthcare environments.

🧠 Key Responsibilities

🤖 AI Workflow Automation & Orchestration
- Build intelligent automation pipelines for clinical workflows
- Orchestrate real-time and batch AI workflows using Airflow, Prefect, or Dagster
- Develop event-driven architectures and human-in-the-loop validation layers
- Automate data ingestion, processing, and inference pipelines for medical data

🧪 Custom AI Model Development
- Train custom NLP, CV, or multi-modal models for medical tasks
- Fine-tune open-source models for domain-specific adaptation
- Use transfer learning on small datasets for clinical use
- Build ensemble learning systems for diagnostic accuracy

🧬 Open-Source Model Adaptation
- Modify models from Hugging Face, Meta, or Google for medical understanding
- Build custom tokenizers for EMR/EHR and medical terminology
- Apply quantization, pruning, and other inference optimizations
- Develop custom loss functions, training loops, and architectural variations

⚙️ MLOps & Deployment
- Create training and CI/CD pipelines using MLflow, Kubeflow, or W&B
- Scale distributed training across multi-GPU environments (Ray/Horovod)
- Deploy models with FastAPI/Flask, supporting both batch and real-time inference
- Monitor for model drift, performance degradation, and compliance alerts

🏥 Healthcare AI Specialization
Build systems for:
- Medical text classification & entity recognition
- Radiology report generation
- Clinical risk prediction
- Auto-coding & billing
- Real-time care alerts
Ensure HIPAA compliance in all stages of the model lifecycle.

✅ Required Qualifications

💻 Technical Skills
- 4–6 years of Python development
- 2+ years in ML/AI with deep learning frameworks (PyTorch/TensorFlow)
- Experience modifying open-source transformer models
- Strong expertise in workflow orchestration tools (Airflow, Prefect, Dagster)
- Hands-on with MLOps tools (MLflow, W&B, SageMaker, DVC)

🔍 Core Competencies
- Strong foundation in transformers, NLP, and CV
- Experience in distributed computing, GPU programming, and model compression
- Ability to explain and interpret model decisions (XAI, SHAP, LIME)
- Familiarity with containerized deployments (Docker, K8s)

🧰 Technical Stack
- Languages: Python 3.9+, CUDA
- ML Frameworks: PyTorch, TensorFlow, Hugging Face, ONNX
- Workflow Tools: Airflow, Prefect, Dagster
- MLOps: MLflow, Weights & Biases, SageMaker
- Infra: AWS, Kubernetes, GPU clusters
- Data: Spark, Dask, Pandas
- Databases: PostgreSQL, MongoDB, Delta Lake, S3
- Versioning: Git, DVC

🌟 Preferred Qualifications
- Healthcare experience (EHR, medical NLP, radiology, DICOM, FHIR)
- Knowledge of federated learning, differential privacy, and AutoML
- Experience with multi-modal, multi-task, or edge model deployment
- Contributions to open-source projects, or research publications
- Knowledge of explainable AI and responsible ML practices

🎯 Key Projects You'll Work On
- Real-time clinical documentation automation
- Custom NER models for ICD/CPT tagging
- LLM adaptation for medical conversation understanding
- Real-time risk stratification pipelines for hospitals

🎁 What We Offer
- Comprehensive health plans
- Flexible work options (remote/hybrid) with quarterly in-person meetups

📤 Application Requirements
- Resume with ML/AI experience
- GitHub or portfolio links (model code, notebooks, demos)
- Cover letter describing your AI workflow or custom model build
- Code samples (open-source or private repos)
- Optional: research papers, Kaggle profile, open challenges

🧪 Interview Process
1. Initial HR screening (30 mins)
2. Take-home Python + ML coding challenge
3. Technical ML/AI deep-dive (90 mins)
4. Model training/modification practical (2 hrs)
5. System design for ML pipeline (60 mins)
6. Presentation or walkthrough of past AI work (45 mins)
7. Culture fit + final discussion
8. References + offer

🏥 About the Role
You will help build AI that doesn’t just analyze data — it augments clinical decisions, automates medical documentation, and assists doctors in real-time. This role will impact thousands of patients and redefine how AI powers healthcare.

Aarna Tech Consultants Pvt. Ltd. (Atcuality) is an equal opportunity employer. We believe in diversity, ethics, and inclusive AI systems for healthcare.
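The MLOps responsibilities in this posting include monitoring for model drift. A minimal sketch of one common approach, a mean-shift check of a live feature window against its training baseline (all names and numbers below are invented for illustration, not part of the employer's stack):

```python
import statistics

def drift_score(baseline, live):
    """Mean-shift drift signal: how many baseline standard deviations
    the live feature mean has moved from the training-time mean."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    if sigma == 0:
        return 0.0
    return abs(statistics.mean(live) - mu) / sigma

# Hypothetical model-confidence scores: training baseline vs. two live windows.
baseline = [0.50, 0.52, 0.48, 0.51, 0.49, 0.50, 0.53, 0.47]
stable   = [0.49, 0.51, 0.50, 0.52]
shifted  = [0.70, 0.72, 0.69, 0.71]

print(f"stable window:  {drift_score(baseline, stable):.2f}")
print(f"shifted window: {drift_score(baseline, shifted):.2f}")
```

A production system would typically use a richer statistic (e.g. population stability index or a KS test) and wire the alert into the monitoring stack, but the shape of the check is the same: compare live data against a frozen training reference.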

Posted 1 month ago

Apply

0 years

0 Lacs

India

On-site

Key Skills We’re Looking For
- Machine Learning Mastery: strong experience with regression, classification, clustering, regularization, and model tuning. Familiarity with CNNs, RNNs, and Transformers is a plus.
- Python Programming: proficiency in NumPy, Pandas, Scikit-learn. Ability to build ML models from scratch and write clean, production-grade code.
- SQL & Data Handling: able to query large datasets efficiently and apply rigorous data validation.
- Statistical & Analytical Thinking: solid grasp of hypothesis testing, confidence intervals, and interpreting results in a business context.
- ML System Design: experience building ML pipelines and deploying models in production (REST APIs, batch jobs, etc.). Understanding of monitoring and retraining models post-deployment.

🚀 Bonus If You Have:
- Experience with distributed data processing (Spark, Dask, etc.)
- MLOps tools like MLflow, Airflow, or SageMaker
- Exposure to cloud platforms (AWS, Azure, GCP)
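The "build ML models from scratch" skill this listing asks for can be sketched with a closed-form ordinary least squares fit. This is a generic textbook example with invented data points, not code from the employer:

```python
def fit_simple_linear(xs, ys):
    """Ordinary least squares for y = slope*x + intercept, written from
    scratch (no NumPy), using the closed-form covariance/variance formulas."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Invented points lying near y = 2x.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.0, 8.1, 9.9]
slope, intercept = fit_simple_linear(xs, ys)
print(f"slope={slope:.2f} intercept={intercept:.2f}")
```

The same fit is one line in Scikit-learn (`LinearRegression().fit(...)`); writing it out by hand is what interviewers usually mean by "from scratch".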

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

We have an urgent requirement for Python Developers (3-5 years of experience).

Job Description:
Our client’s data platform consists of a series of Python microservices integrated through REST and RabbitMQ. We are actively searching for an enthusiastic Core Python Developer to become a valuable member of our vibrant team. The selected individual will play a pivotal role in advancing our platform, making it more feature-rich, robust, streamlined, and performant by employing innovative architectural and development principles.

Responsibilities:
- Python Development: Write clean, maintainable, and efficient Python code in line with common Pythonic principles. Develop and maintain Python microservices, ensuring seamless integration with our existing platform stack. Utilize common Python data libraries, such as Pandas, Polars, NumPy, and SciPy, for data manipulation and analysis tasks. Familiarity with asynchronous programming in Python using libraries like asyncio or Dask, and understanding of concurrency and parallelism concepts. Write unit tests for developed code using pytest to ensure functionality and reliability. Collaborate with the QA team to ensure comprehensive test coverage.
- API Development & Integration: Design and develop RESTful APIs using frameworks like FastAPI and Flask. Ensure smooth communication between microservices via REST and message brokers like RabbitMQ.
- Messaging/Caching Systems: Understand and work with pub/sub architectures and message brokers including RabbitMQ and Kafka. Implement and manage caching solutions using Redis to enhance application performance.
- Version Control: Use Git for source code management, adhering to best practices for branching, merging, and collaborative development.
- Database Operations: Work with database technologies such as PostgreSQL and SQLite, understanding schema design, querying, and optimization.

Technical Skills Required:
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- Minimum 3-5 years as a Python Developer with a solid understanding of the Python language and its best practices, adhering to common Pythonic principles.
- Development experience within the paradigms of microservices, cloud technologies, and modern containerization platforms, e.g. AWS, Azure, GCP, Docker, and Kubernetes.
- Proficient in implementing and managing Redis as an in-memory data structure store, used for caching, session management, and real-time analytics. Familiarity with Redis data types, such as strings, lists, sets, and hashes, and their appropriate use cases.
- Strong understanding of RabbitMQ as a message broker, facilitating asynchronous processing and inter-service communication. Familiarity with RabbitMQ's exchange types, routing, and queue bindings, and the ability to troubleshoot common RabbitMQ issues.
- Proficient in writing tests using libraries like pytest or unittest to ensure code reliability and functionality.
- Experience with Object-Relational Mapping tools like SQLAlchemy or Django ORM, simplifying database operations and queries.
- Strong understanding of relational database concepts, with hands-on experience in designing, querying, and managing data using PostgreSQL, SQLite, and cloud data warehouses. Familiarity with normalization, indexing, and optimization techniques to ensure efficient data retrieval and storage.
- Experience in developing applications using frameworks like FastAPI, Flask, or Django to simplify tasks such as routing, database operations, and security authentication/authorization flows.
- Familiarity with tools like Jenkins, Travis CI, or GitHub Actions to automate the building, testing, and deployment of applications throughout the CI/CD lifecycle.
- Proactively identify challenges and bottlenecks, employing strong troubleshooting skills to address them.

Soft Skills Required:
- Strong verbal and written communication skills.
- Energetic, self-directed, and comfortable in a fast-paced environment.
- Team player with good interpersonal skills and quick to learn.
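The asynchronous-programming responsibility in this posting (asyncio, concurrency between microservices) can be sketched with `asyncio.gather`. The service names and delays below are placeholders standing in for real REST calls, not part of the client's platform:

```python
import asyncio

async def fetch_record(service: str, delay: float) -> str:
    # Stand-in for an awaited REST call to another microservice;
    # asyncio.sleep simulates network latency.
    await asyncio.sleep(delay)
    return f"{service}:ok"

async def gather_all() -> list:
    # The three coroutines run concurrently, so wall time is roughly
    # max(delay) rather than the sum of the delays.
    return await asyncio.gather(
        fetch_record("pricing", 0.02),
        fetch_record("inventory", 0.01),
        fetch_record("customers", 0.03),
    )

results = asyncio.run(gather_all())
print(results)
```

`asyncio.gather` preserves argument order in its result list regardless of which call finishes first, which keeps downstream code deterministic.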

Posted 1 month ago

Apply

2.0 - 4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Title: Backend Developer

Summary: We are looking for a talented and experienced backend developer to join our team. The ideal candidate will have a strong understanding of Python and related technologies, as well as experience with web frameworks such as Flask or Django. They will also have a working knowledge of both SQL and NoSQL databases.

Responsibilities:
- Design, develop, and maintain backend systems using Python
- Work with front-end developers and other developers to build and deploy scalable and reliable web applications
- Troubleshoot and debug applications
- In-depth understanding of the Python software development stacks, ecosystems, frameworks, and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn, and PyTorch, and able to conceive and write basic algorithms
- Implement security and data protection measures
- Optimize application performance and scalability
- Stay up to date on the latest Python technologies and trends

Qualifications:
- Bachelor's degree in Computer Science or a related field
- 2+ years of experience in backend development using Python
- Experience with web frameworks such as Flask or Django
- Working knowledge of both SQL and NoSQL databases
- Experience with cloud platforms such as AWS or Azure
- Strong problem-solving and analytical skills
- Excellent communication and collaboration skills

Bonus Points: experience with machine learning or artificial intelligence, experience with DevOps practices, and experience with open source software.

Posted 1 month ago

Apply

2.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Title: Backend Developer

Summary: We are looking for a talented and experienced backend developer to join our team. The ideal candidate will have a strong understanding of Python and related technologies, as well as experience with web frameworks such as Flask or Django. They will also have a working knowledge of both SQL and NoSQL databases.

Responsibilities:
- Design, develop, and maintain backend systems using Python
- Work with front-end developers and other developers to build and deploy scalable and reliable web applications
- Troubleshoot and debug applications
- In-depth understanding of the Python software development stacks, ecosystems, frameworks, and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn, and PyTorch, and able to conceive and write basic algorithms
- Implement security and data protection measures
- Optimize application performance and scalability
- Stay up to date on the latest Python technologies and trends

Qualifications:
- Bachelor's degree in Computer Science or a related field
- 2+ years of experience in backend development using Python
- Experience with web frameworks such as Flask or Django
- Working knowledge of both SQL and NoSQL databases
- Experience with cloud platforms such as AWS or Azure
- Strong problem-solving and analytical skills
- Excellent communication and collaboration skills

Bonus Points: experience with machine learning or artificial intelligence, experience with DevOps practices, and experience with open source software.

Posted 1 month ago

Apply

5.0 - 8.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Python Gen AI Engineer:
- Proven experience (5 to 8 years) as a Python Developer or similar role, with a strong portfolio of Python-based projects and applications.
- Proficiency in the Python programming language and its standard libraries, frameworks, and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn, and PyTorch.
- Must have experience in Gen AI (Generative AI).
- Experience with REST API libraries and frameworks such as Django, Flask, and SQLAlchemy.
- Solid understanding of object-oriented programming (OOP) principles, data structures, and algorithms.
- Experience with database design, SQL, and ORM frameworks (e.g., SQLAlchemy, Django ORM).
- Familiarity with front-end technologies such as HTML, CSS, JavaScript, and client-side frameworks (e.g., React, Angular, Vue.js).
- Knowledge of version control systems (e.g., Git) and collaborative development workflows (e.g., GitHub, GitLab).
- Strong analytical and problem-solving skills, with keen attention to detail and a passion for continuous improvement.
- Excellent communication and interpersonal skills, with the ability to collaborate effectively in a team environment and communicate technical concepts to non-technical stakeholders.
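The OOP, data structures, and algorithms requirement in this listing is the kind of thing interview screens probe with small exercises. A generic sketch with invented task names, assuming nothing about the employer's codebase: an OOP wrapper around a binary min-heap priority queue.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Task:
    priority: int
    name: str = field(compare=False)   # compare tasks by priority only

class TaskQueue:
    """OOP wrapper over heapq's binary min-heap: a classic
    data-structures exercise."""
    def __init__(self):
        self._heap = []

    def push(self, task: Task) -> None:
        heapq.heappush(self._heap, task)

    def pop(self) -> Task:
        return heapq.heappop(self._heap)

q = TaskQueue()
q.push(Task(3, "index rebuild"))
q.push(Task(1, "user request"))
q.push(Task(2, "cache warmup"))
print(q.pop().name)  # lowest priority number is served first
```

Using `@dataclass(order=True)` with `compare=False` on the payload field keeps the heap ordering driven purely by `priority`, a common idiom for priority queues of non-comparable objects.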

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies