
239 Dask Jobs - Page 5

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Objectives of This Role
Develop, test, and maintain high-quality software using the Python programming language. Participate in the entire software development lifecycle, building, testing, and delivering high-quality solutions. Collaborate with cross-functional teams to identify and solve complex problems. Write clean, reusable code that can be easily maintained and scaled.

Your Tasks
- Create large-scale data processing pipelines to help developers build and train novel machine learning algorithms.
- Participate in code reviews, ensure code quality, and identify areas for improvement to implement practical solutions.
- Debug code when required and troubleshoot any Python-related queries.
- Keep up to date with emerging trends and technologies in Python development.

Required Skills and Qualifications
- 2+ years of experience as a Python Developer with a strong portfolio of projects.
- Bachelor's degree in Computer Science, Software Engineering, or a related field.
- In-depth understanding of the Python software development stack, ecosystem, frameworks, and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn, and PyTorch.
- Experience with front-end development using HTML, CSS, and JavaScript.
- Familiarity with database technologies such as SQL and NoSQL.
- Excellent problem-solving ability with solid communication and collaboration skills. (ref:hirist.tech)
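For context on the kind of pipeline work described above, here is a minimal, hypothetical sketch of an out-of-core Dask data-processing job; the file pattern and column names are illustrative assumptions, not part of the listing.

```python
# Hypothetical illustration only: a small out-of-core pipeline with Dask.
import dask.dataframe as dd

# Lazily read many partitioned CSV files as a single dataframe
# ("events-*.csv", "timestamp", "value", and "user_id" are made-up names).
df = dd.read_csv("events-*.csv", parse_dates=["timestamp"])

# Filter and derive a feature without materializing the full dataset
df = df[df["value"] >= 0]
df["hour"] = df["timestamp"].dt.hour

# Aggregate per user; work is only executed at .compute()
summary = df.groupby("user_id")["value"].mean().compute()
print(summary.head())
```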

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Company Description
NielsenIQ is a consumer intelligence company that delivers the Full View™, the world's most complete and clear understanding of consumer buying behavior, revealing new pathways to growth. Since 1923, NIQ has moved measurement forward for industries and economies across the globe. We are putting the brightest and most dedicated minds together to accelerate progress. Our diversity brings out the best in each other so we can leave a lasting legacy on the work that we do and the people that we do it with. NielsenIQ offers a range of products and services that leverage Machine Learning and Artificial Intelligence to provide insights into consumer behavior and market trends. This position opens the opportunity to apply the latest state of the art in AI/ML and data science to global and key strategic projects.

Job Description
We are looking for a Research Scientist with a data-centric mindset to join our applied research and innovation team. The ideal candidate will have a strong background in machine learning, deep learning, operationalization of AI/ML, and process automation. You will be responsible for analyzing data, researching the most appropriate techniques, and the development, testing, support, and delivery of proofs of concept that resolve real-world, large-scale, challenging problems.

Job Responsibilities
- Develop and apply machine learning innovations with minimal technical supervision.
- Understand the requirements from stakeholders and communicate results and conclusions in a way that is accurate, clear, and winsome.
- Perform feasibility studies and analyze data to determine the most appropriate solution.
- Work on many different data challenges, always ensuring a combination of simplicity, scalability, reproducibility, and maintainability within the ML solutions and source code. Both data and software must be developed and maintained to high quality standards with minimal defects.
- Collaborate with other technical colleagues on the integration and deployment of ML solutions.
- Work as a member of a team, encouraging team building, motivation, and effective team relations.

Qualifications
Essential Requirements
- Bachelor's degree in Computer Science or an equivalent numerate discipline.
- Demonstrated senior experience in Machine Learning, Deep Learning, and other AI fields.
- Experience working with large datasets, production-grade code, and operationalization of ML solutions.
- EDA and practical hands-on experience with datasets, ML models (PyTorch or TensorFlow), and evaluations.
- Able to understand scientific papers and develop the ideas into executable code.
- Analytical mindset, problem-solving, and logical-thinking capabilities.
- Proactive attitude, constructive, with intellectual curiosity and persistence to find answers to questions.
- A high level of interpersonal and communication skills in English and a strong ability to meet deadlines.
- Python, PyTorch, Git, pandas, Dask, Polars, scikit-learn, Hugging Face, Docker, Databricks.

Desired Skills
- Master's degree and/or specialization courses in AI/ML; a PhD in science is an added value.
- Experience in MLOps (MLflow, Prefect) and deployment of AI/ML solutions to the cloud (Azure preferred).
- Understanding and practice of LLMs and Generative AI (prompt engineering, RAG).
- Experience with Robotic Process Automation, Time Series Forecasting, and predictive modeling.
- A practical grasp of databases (SQL, Elasticsearch, Pinecone, Faiss).
- Previous experience in retail, consumer, ecommerce, business, or FMCG products (NielsenIQ portfolio).

Additional Information
With @NielsenIQ, we're now an even more diverse team of 40,000 people, each with their own stories. Our increasingly diverse workforce empowers us to better reflect the diversity of the markets we measure.

Our Benefits
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee Assistance Program (EAP)

About NIQ
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights, delivered with advanced analytics through state-of-the-art platforms, NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com. Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our Commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion

Posted 1 month ago

Apply

0.0 - 3.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% NLP Engineers for a unique opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for top-tier Natural Language Processing Engineers with experience in text analytics, LLMs, and speech processing. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
- Pay above market standards.
- The role is contract-based, with project timelines from 2 to 12 months, or freelancing.
- Be a part of an elite community of professionals who can solve complex AI challenges.
- Work location could be: remote (highly likely), onsite at the client location, or Deccan AI's office (Hyderabad or Bangalore).

Responsibilities:
- Develop and optimize NLP models (NER, summarization, sentiment analysis) using transformer architectures (BERT, GPT, T5, LLaMA).
- Build scalable NLP pipelines for real-time and batch processing of large text data, optimize models for performance, and deploy on cloud platforms (AWS, GCP, Azure).
- Implement CI/CD pipelines for automated training, deployment, and monitoring, and integrate NLP models with search engines, recommendation systems, and RAG techniques.
- Ensure ethical AI practices and mentor junior engineers.

Required Skills:
- Expert Python skills with NLP libraries (Hugging Face, spaCy, NLTK).
- Experience with transformer-based models (BERT, GPT, T5) and deploying them at scale (Flask, Kubernetes, cloud services).
- Strong knowledge of model optimization, data pipelines (Spark, Dask), and vector databases.
- Familiarity with MLOps, CI/CD (MLflow, DVC), cloud platforms, and data privacy regulations.

Nice to Have:
- Experience with multimodal AI, conversational AI (Rasa, OpenAI API), graph-based NLP, knowledge graphs, and A/B testing for model improvement.
- Contributions to open-source NLP projects or a strong publication record.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted.
4. Profile matching and project allocation: be patient while we align your skills and preferences with an available project.

Skip the Noise. Focus on Opportunities Built for You!

Posted 1 month ago

Apply

1.0 - 4.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Computer Vision Engineers for a unique opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for top-tier Computer Vision (CV) Engineers with expertise in image/video processing, object detection, and generative AI. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
- Pay above market standards.
- The role is contract-based, with project timelines from 2 to 12 months, or freelancing.
- Be a part of an elite community of professionals who can solve complex AI challenges.
- Work location could be: remote (highly likely), onsite at the client location, or Deccan AI's office (Hyderabad or Bangalore).

Responsibilities:
- Develop and optimize computer vision models for tasks like object detection, image segmentation, and multi-object tracking.
- Lead research on novel techniques using deep learning frameworks (TensorFlow, PyTorch, JAX).
- Build efficient computer vision pipelines and optimize models for real-time performance.
- Deploy models using microservices (Docker, Kubernetes) and cloud platforms (AWS, GCP, Azure).
- Lead MLOps practices, including CI/CD pipelines, model versioning, and training optimizations.

Required Skills:
- Expert in Python, OpenCV, NumPy, and deep learning architectures (e.g., ViTs, YOLO, Mask R-CNN).
- Strong knowledge of computer vision fundamentals, including feature extraction and multi-view geometry, with experience deploying and optimizing models with TensorRT, OpenVINO, and cloud/edge solutions.
- Proficient with MLOps tools (MLflow, DVC), CI/CD, and distributed training frameworks.
- Experience in 3D vision, AR/VR, or LiDAR processing is a plus.

Nice to Have:
- Experience with multi-camera vision systems, LiDAR, sensor fusion, and reinforcement learning for vision tasks.
- Exposure to generative AI models (e.g., Stable Diffusion, GANs) and large-scale image processing (Apache Spark, Dask).
- Research publications or patents in computer vision and deep learning.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted.
4. Profile matching and project allocation: be patient while we align your skills and preferences with an available project.

Skip the Noise. Focus on Opportunities Built for You!

Posted 1 month ago

Apply

1.0 - 4.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Computer Vision Engineers for a unique opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for top-tier Computer Vision (CV) Engineers with expertise in image/video processing, object detection, and generative AI. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
- Pay above market standards.
- The role is contract-based, with project timelines from 2 to 12 months, or freelancing.
- Be a part of an elite community of professionals who can solve complex AI challenges.
- Work location could be: remote (highly likely), onsite at the client location, or Deccan AI's office (Hyderabad or Bangalore).

Responsibilities:
- Develop and optimize computer vision models for tasks like object detection, image segmentation, and multi-object tracking.
- Lead research on novel techniques using deep learning frameworks (TensorFlow, PyTorch, JAX).
- Build efficient computer vision pipelines and optimize models for real-time performance.
- Deploy models using microservices (Docker, Kubernetes) and cloud platforms (AWS, GCP, Azure).
- Lead MLOps practices, including CI/CD pipelines, model versioning, and training optimizations.

Required Skills:
- Expert in Python, OpenCV, NumPy, and deep learning architectures (e.g., ViTs, YOLO, Mask R-CNN).
- Strong knowledge of computer vision fundamentals, including feature extraction and multi-view geometry, with experience deploying and optimizing models with TensorRT, OpenVINO, and cloud/edge solutions.
- Proficient with MLOps tools (MLflow, DVC), CI/CD, and distributed training frameworks.
- Experience in 3D vision, AR/VR, or LiDAR processing is a plus.

Nice to Have:
- Experience with multi-camera vision systems, LiDAR, sensor fusion, and reinforcement learning for vision tasks.
- Exposure to generative AI models (e.g., Stable Diffusion, GANs) and large-scale image processing (Apache Spark, Dask).
- Research publications or patents in computer vision and deep learning.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted.
4. Profile matching and project allocation: be patient while we align your skills and preferences with an available project.

Skip the Noise. Focus on Opportunities Built for You!

Posted 1 month ago

Apply

0.0 - 3.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% NLP Engineers for a unique opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for top-tier Natural Language Processing Engineers with experience in text analytics, LLMs, and speech processing. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
- Pay above market standards.
- The role is contract-based, with project timelines from 2 to 12 months, or freelancing.
- Be a part of an elite community of professionals who can solve complex AI challenges.
- Work location could be: remote (highly likely), onsite at the client location, or Deccan AI's office (Hyderabad or Bangalore).

Responsibilities:
- Develop and optimize NLP models (NER, summarization, sentiment analysis) using transformer architectures (BERT, GPT, T5, LLaMA).
- Build scalable NLP pipelines for real-time and batch processing of large text data, optimize models for performance, and deploy on cloud platforms (AWS, GCP, Azure).
- Implement CI/CD pipelines for automated training, deployment, and monitoring, and integrate NLP models with search engines, recommendation systems, and RAG techniques.
- Ensure ethical AI practices and mentor junior engineers.

Required Skills:
- Expert Python skills with NLP libraries (Hugging Face, spaCy, NLTK).
- Experience with transformer-based models (BERT, GPT, T5) and deploying them at scale (Flask, Kubernetes, cloud services).
- Strong knowledge of model optimization, data pipelines (Spark, Dask), and vector databases.
- Familiarity with MLOps, CI/CD (MLflow, DVC), cloud platforms, and data privacy regulations.

Nice to Have:
- Experience with multimodal AI, conversational AI (Rasa, OpenAI API), graph-based NLP, knowledge graphs, and A/B testing for model improvement.
- Contributions to open-source NLP projects or a strong publication record.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted.
4. Profile matching and project allocation: be patient while we align your skills and preferences with an available project.

Skip the Noise. Focus on Opportunities Built for You!

Posted 1 month ago

Apply

1.0 - 4.0 years

5 - 9 Lacs

Mumbai

Work from Office

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Computer Vision Engineers for a unique opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for top-tier Computer Vision (CV) Engineers with expertise in image/video processing, object detection, and generative AI. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
- Pay above market standards.
- The role is contract-based, with project timelines from 2 to 12 months, or freelancing.
- Be a part of an elite community of professionals who can solve complex AI challenges.
- Work location could be: remote (highly likely), onsite at the client location, or Deccan AI's office (Hyderabad or Bangalore).

Responsibilities:
- Develop and optimize computer vision models for tasks like object detection, image segmentation, and multi-object tracking.
- Lead research on novel techniques using deep learning frameworks (TensorFlow, PyTorch, JAX).
- Build efficient computer vision pipelines and optimize models for real-time performance.
- Deploy models using microservices (Docker, Kubernetes) and cloud platforms (AWS, GCP, Azure).
- Lead MLOps practices, including CI/CD pipelines, model versioning, and training optimizations.

Required Skills:
- Expert in Python, OpenCV, NumPy, and deep learning architectures (e.g., ViTs, YOLO, Mask R-CNN).
- Strong knowledge of computer vision fundamentals, including feature extraction and multi-view geometry, with experience deploying and optimizing models with TensorRT, OpenVINO, and cloud/edge solutions.
- Proficient with MLOps tools (MLflow, DVC), CI/CD, and distributed training frameworks.
- Experience in 3D vision, AR/VR, or LiDAR processing is a plus.

Nice to Have:
- Experience with multi-camera vision systems, LiDAR, sensor fusion, and reinforcement learning for vision tasks.
- Exposure to generative AI models (e.g., Stable Diffusion, GANs) and large-scale image processing (Apache Spark, Dask).
- Research publications or patents in computer vision and deep learning.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted.
4. Profile matching and project allocation: be patient while we align your skills and preferences with an available project.

Skip the Noise. Focus on Opportunities Built for You!

Posted 1 month ago

Apply

0.0 - 3.0 years

4 - 8 Lacs

Mumbai

Work from Office

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% NLP Engineers for a unique opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for top-tier Natural Language Processing Engineers with experience in text analytics, LLMs, and speech processing. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
- Pay above market standards.
- The role is contract-based, with project timelines from 2 to 12 months, or freelancing.
- Be a part of an elite community of professionals who can solve complex AI challenges.
- Work location could be: remote (highly likely), onsite at the client location, or Deccan AI's office (Hyderabad or Bangalore).

Responsibilities:
- Develop and optimize NLP models (NER, summarization, sentiment analysis) using transformer architectures (BERT, GPT, T5, LLaMA).
- Build scalable NLP pipelines for real-time and batch processing of large text data, optimize models for performance, and deploy on cloud platforms (AWS, GCP, Azure).
- Implement CI/CD pipelines for automated training, deployment, and monitoring, and integrate NLP models with search engines, recommendation systems, and RAG techniques.
- Ensure ethical AI practices and mentor junior engineers.

Required Skills:
- Expert Python skills with NLP libraries (Hugging Face, spaCy, NLTK).
- Experience with transformer-based models (BERT, GPT, T5) and deploying them at scale (Flask, Kubernetes, cloud services).
- Strong knowledge of model optimization, data pipelines (Spark, Dask), and vector databases.
- Familiarity with MLOps, CI/CD (MLflow, DVC), cloud platforms, and data privacy regulations.

Nice to Have:
- Experience with multimodal AI, conversational AI (Rasa, OpenAI API), graph-based NLP, knowledge graphs, and A/B testing for model improvement.
- Contributions to open-source NLP projects or a strong publication record.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted.
4. Profile matching and project allocation: be patient while we align your skills and preferences with an available project.

Skip the Noise. Focus on Opportunities Built for You!

Posted 1 month ago

Apply

0.0 - 3.0 years

4 - 8 Lacs

Kolkata

Work from Office

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% NLP Engineers for a unique opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for top-tier Natural Language Processing Engineers with experience in text analytics, LLMs, and speech processing. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
- Pay above market standards.
- The role is contract-based, with project timelines from 2 to 12 months, or freelancing.
- Be a part of an elite community of professionals who can solve complex AI challenges.
- Work location could be: remote (highly likely), onsite at the client location, or Deccan AI's office (Hyderabad or Bangalore).

Responsibilities:
- Develop and optimize NLP models (NER, summarization, sentiment analysis) using transformer architectures (BERT, GPT, T5, LLaMA).
- Build scalable NLP pipelines for real-time and batch processing of large text data, optimize models for performance, and deploy on cloud platforms (AWS, GCP, Azure).
- Implement CI/CD pipelines for automated training, deployment, and monitoring, and integrate NLP models with search engines, recommendation systems, and RAG techniques.
- Ensure ethical AI practices and mentor junior engineers.

Required Skills:
- Expert Python skills with NLP libraries (Hugging Face, spaCy, NLTK).
- Experience with transformer-based models (BERT, GPT, T5) and deploying them at scale (Flask, Kubernetes, cloud services).
- Strong knowledge of model optimization, data pipelines (Spark, Dask), and vector databases.
- Familiarity with MLOps, CI/CD (MLflow, DVC), cloud platforms, and data privacy regulations.

Nice to Have:
- Experience with multimodal AI, conversational AI (Rasa, OpenAI API), graph-based NLP, knowledge graphs, and A/B testing for model improvement.
- Contributions to open-source NLP projects or a strong publication record.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted.
4. Profile matching and project allocation: be patient while we align your skills and preferences with an available project.

Skip the Noise. Focus on Opportunities Built for You!

Posted 1 month ago

Apply

1.0 - 4.0 years

5 - 9 Lacs

Kolkata

Work from Office

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Computer Vision Engineers for a unique opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for top-tier Computer Vision (CV) Engineers with expertise in image/video processing, object detection, and generative AI. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
- Pay above market standards.
- The role is contract-based, with project timelines from 2 to 12 months, or freelancing.
- Be a part of an elite community of professionals who can solve complex AI challenges.
- Work location could be: remote (highly likely), onsite at the client location, or Deccan AI's office (Hyderabad or Bangalore).

Responsibilities:
- Develop and optimize computer vision models for tasks like object detection, image segmentation, and multi-object tracking.
- Lead research on novel techniques using deep learning frameworks (TensorFlow, PyTorch, JAX).
- Build efficient computer vision pipelines and optimize models for real-time performance.
- Deploy models using microservices (Docker, Kubernetes) and cloud platforms (AWS, GCP, Azure).
- Lead MLOps practices, including CI/CD pipelines, model versioning, and training optimizations.

Required Skills:
- Expert in Python, OpenCV, NumPy, and deep learning architectures (e.g., ViTs, YOLO, Mask R-CNN).
- Strong knowledge of computer vision fundamentals, including feature extraction and multi-view geometry, with experience deploying and optimizing models with TensorRT, OpenVINO, and cloud/edge solutions.
- Proficient with MLOps tools (MLflow, DVC), CI/CD, and distributed training frameworks.
- Experience in 3D vision, AR/VR, or LiDAR processing is a plus.

Nice to Have:
- Experience with multi-camera vision systems, LiDAR, sensor fusion, and reinforcement learning for vision tasks.
- Exposure to generative AI models (e.g., Stable Diffusion, GANs) and large-scale image processing (Apache Spark, Dask).
- Research publications or patents in computer vision and deep learning.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted.
4. Profile matching and project allocation: be patient while we align your skills and preferences with an available project.

Skip the Noise. Focus on Opportunities Built for You!

Posted 1 month ago

Apply

0.0 - 3.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Location: Gurugram, India

Position Summary
We are seeking a highly motivated and analytical Quant Analyst to join Futures First. The role involves supporting the development and execution of quantitative strategies across financial markets.

Job Profile
Statistical Arbitrage & Strategy Development
- Design and implement pairs, mean-reversion, and relative-value strategies in fixed income (govvies, corporate bonds, IRS).
- Apply cointegration tests (Engle-Granger, Johansen), Kalman filters, and machine learning techniques for signal generation.
- Optimize execution using transaction cost analysis (TCA).

Correlation & Volatility Analysis
- Model dynamic correlations between bonds, rates, and macro variables using PCA, copulas, and rolling regressions.
- Forecast yield curve volatility using GARCH, stochastic volatility models, and implied-vol surfaces for swaptions.
- Identify regime shifts (e.g., monetary policy impacts) and adjust strategies accordingly.

Seasonality & Pattern Recognition
- Analyse calendar effects (quarter-end rebalancing, liquidity patterns) in sovereign bond futures and repo markets.
- Develop time-series models (SARIMA, Fourier transforms) to detect cyclical trends.

Backtesting & Automation
- Build Python-based backtesting frameworks (Backtrader, Qlib) to validate strategies.
- Automate Excel-based reporting (VBA, xlwings) for P&L attribution and risk dashboards.
- Integrate Bloomberg/Refinitiv APIs for real-time data feeds.

Requirements
Education Qualifications: B.Tech
Work Experience: 0-3 years

Skill Set
- Must have: strong grasp of probability theory, stochastic calculus (Ito's Lemma, SDEs), and time-series econometrics (ARIMA, VAR, GARCH).
- Must have: expertise in linear algebra (PCA, eigenvalue decomposition), numerical methods (Monte Carlo, PDE solvers), and optimization techniques.
- Preferred: knowledge of Bayesian statistics, Markov Chain Monte Carlo (MCMC), and machine learning (supervised/unsupervised learning).
- Libraries: NumPy, Pandas, statsmodels, scikit-learn, arch (GARCH models).
- Backtesting: Backtrader, Zipline, or custom event-driven frameworks.
- Data handling: SQL, Dask (for large datasets).
- Power Query, pivot tables, Bloomberg Excel functions (BDP, BDH).
- VBA scripting for various tools and automation.
- Experience with C++/Java (low-latency systems), QuantLib (fixed-income pricing), or R (statistics).
- Yield curve modelling (Nelson-Siegel, Svensson), duration/convexity, OIS pricing.
- Credit spreads, CDS pricing, and bond-CDS basis arbitrage.
- Familiarity with VaR, CVaR, stress testing, and liquidity risk metrics.
- Understanding of CCIL, NDS-OM (Indian market infrastructure).
- Ability to translate intuition and patterns into quant models.
- Strong problem-solving and communication skills (must be able to explain complex models to non-quants).
- Comfortable working in a fast-paced environment.

Work hours will be aligned to APAC markets.
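As a rough illustration of two techniques named in this listing (an Engle-Granger cointegration test and a GARCH(1,1) volatility fit), the sketch below uses synthetic random-walk data; it assumes the statsmodels and arch packages and is not tied to any particular market data.

```python
# Hypothetical illustration: cointegration test and GARCH fit on synthetic data.
import numpy as np
from statsmodels.tsa.stattools import coint
from arch import arch_model

rng = np.random.default_rng(0)
trend = np.cumsum(rng.normal(size=1000))             # shared stochastic trend
series_a = trend + rng.normal(scale=0.5, size=1000)  # two cointegrated series
series_b = trend + rng.normal(scale=0.5, size=1000)

# Engle-Granger test: a low p-value suggests the pair is cointegrated
t_stat, p_value, _ = coint(series_a, series_b)
print(f"Engle-Granger p-value: {p_value:.4f}")

# GARCH(1,1) fitted to the daily changes of one series
changes = np.diff(series_a)
result = arch_model(changes, vol="GARCH", p=1, q=1).fit(disp="off")
print(result.params)
```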

Posted 1 month ago

Apply

3.0 - 6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Summary
Position Summary

CORE BUSINESS OPERATIONS
The Core Business Operations (CBO) portfolio is an integrated set of offerings that addresses our clients' heart-of-the-business issues. This portfolio combines our functional and technical capabilities to help clients transform, modernize, and run their existing technology platforms across industries. As our clients navigate dynamic and disruptive markets, these solutions are designed to help them drive product and service innovation, improve financial performance, accelerate speed to market, and operate their platforms to innovate continuously.

ROLE
Level: Consultant
As a Consultant at Deloitte Consulting, you will be responsible for individually delivering high-quality work products within due timelines in an agile framework. On a requirement basis, consultants will mentor and/or direct junior team members and liaise with onsite/offshore teams to understand the functional requirements. Python developers can take on various job roles, such as back-end web developer, data scientist, automation engineer, and machine learning engineer.

The work you will do includes:
- Work on various software projects using Python as the primary programming language.
- Get involved in developing desktop applications, command-line tools, automation scripts, or backend services.
- Write clean, efficient, and maintainable code, collaborate with other team members, and participate in the software development lifecycle.
- Build web applications using Python along with other technologies such as Django, Flask, or FastAPI to build dynamic websites and web applications.
- Work on back-end aspects of web development, implementing features, optimizing performance, and ensuring the security of web applications.
- Develop software solutions using industry-standard delivery methodologies like Agile and Waterfall across different architectural patterns.
- Write clean, efficient, and well-documented code maintaining industry and client standards, ensuring code quality and code coverage adherence, as well as debugging and resolving any issues/defects.
- Participate in delivery processes like Agile development, actively contributing to sprint planning, daily stand-ups, and retrospectives.
- Resolve issues or incidents reported by end users and escalate any quality issues or risks to team leads/scrum masters/project leaders.
- Develop expertise in the end-to-end construction cycle, from design (low level and high level), coding, unit testing, deployment, and defect fixing, along with coordinating with multiple stakeholders.

Qualifications
Skills / Project Experience:

Must Have:
- 3-6 years of hands-on experience writing scalable code in Python, with knowledge of at least one Python web framework such as Django, Flask, or FastAPI.
- Experience with Python software development stacks, ecosystems, frameworks, and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn, and PyTorch.
- Experience in database technologies such as SQL and NoSQL with ORM implementations.
- Experience in event-driven programming in Python.
- Proficient understanding of code versioning tools such as Git, SVN, Bitbucket, etc.
- Experience in any one of the cloud computing platforms: AWS, Azure, or GCP.
- Experience in Agile/SAFe Agile project development methodologies.
- Good interpersonal and communication skills.
- Flexibility to adapt and apply innovation to varied business domains, and to apply technical solutioning and learnings to use cases across business domains and industries.
- Knowledge of and experience working with Microsoft Office tools.

Good to Have:
- Proficiency in Python and machine learning libraries, experience with deep learning frameworks (e.g., TensorFlow, PyTorch), knowledge of software engineering principles, understanding of cloud computing platforms and deployment pipelines, and familiarity with DevOps practices.

Education:
B.E./B.Tech/M.C.A./M.Sc (CS) degree or equivalent from an accredited university.

Prior Experience:
3 - 6 years of experience working with Python.

Location:

The team
Deloitte Consulting LLP's Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today's complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services. The Core Business Operations practice optimizes clients' business operations and helps them take advantage of new technologies. It drives product and service innovation, improves financial performance, accelerates speed to market, and operates client platforms to innovate continuously. Learn more about our Technology Consulting practice on www.deloitte.com. For information on CBO visit https://www.youtube.com/watch?v=L1cGlScLuX0. For information on the life of an Analyst at CBO visit https://www.youtube.com/watch?v=CMe0DkmMQHI.

Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits to help you thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 300298

Posted 1 month ago

Apply

3.0 - 6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Summary
Position Summary

CORE BUSINESS OPERATIONS
The Core Business Operations (CBO) portfolio is an integrated set of offerings that addresses our clients' heart-of-the-business issues. This portfolio combines our functional and technical capabilities to help clients transform, modernize, and run their existing technology platforms across industries. As our clients navigate dynamic and disruptive markets, these solutions are designed to help them drive product and service innovation, improve financial performance, accelerate speed to market, and operate their platforms to innovate continuously.

ROLE
Level: Consultant
As a Consultant at Deloitte Consulting, you will be responsible for individually delivering high-quality work products within due timelines in an agile framework. On a requirement basis, consultants will mentor and/or direct junior team members and liaise with onsite/offshore teams to understand the functional requirements. Python developers can take on various job roles, such as back-end web developer, data scientist, automation engineer, and machine learning engineer.

The work you will do includes:
- Work on various software projects using Python as the primary programming language.
- Get involved in developing desktop applications, command-line tools, automation scripts, or backend services.
- Write clean, efficient, and maintainable code, collaborate with other team members, and participate in the software development lifecycle.
- Build web applications using Python along with other technologies such as Django, Flask, or FastAPI to build dynamic websites and web applications.
- Work on back-end aspects of web development, implementing features, optimizing performance, and ensuring the security of web applications.
- Develop software solutions using industry-standard delivery methodologies like Agile and Waterfall across different architectural patterns.
- Write clean, efficient, and well-documented code maintaining industry and client standards, ensuring code quality and code coverage adherence, as well as debugging and resolving any issues/defects.
- Participate in delivery processes like Agile development, actively contributing to sprint planning, daily stand-ups, and retrospectives.
- Resolve issues or incidents reported by end users and escalate any quality issues or risks to team leads/scrum masters/project leaders.
- Develop expertise in the end-to-end construction cycle, from design (low level and high level), coding, unit testing, deployment, and defect fixing, along with coordinating with multiple stakeholders.

Qualifications
Skills / Project Experience:

Must Have:
- 3-6 years of hands-on experience writing scalable code in Python, with knowledge of at least one Python web framework such as Django, Flask, or FastAPI.
- Experience with Python software development stacks, ecosystems, frameworks, and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn, and PyTorch.
- Experience in database technologies such as SQL and NoSQL with ORM implementations.
- Experience in event-driven programming in Python.
- Proficient understanding of code versioning tools such as Git, SVN, Bitbucket, etc.
- Experience in any one of the cloud computing platforms: AWS, Azure, or GCP.
- Experience in Agile/SAFe Agile project development methodologies.
- Good interpersonal and communication skills.
- Flexibility to adapt and apply innovation to varied business domains, and to apply technical solutioning and learnings to use cases across business domains and industries.
- Knowledge of and experience working with Microsoft Office tools.

Good to Have:
- Proficiency in Python and machine learning libraries, experience with deep learning frameworks (e.g., TensorFlow, PyTorch), knowledge of software engineering principles, understanding of cloud computing platforms and deployment pipelines, and familiarity with DevOps practices.

Education:
B.E./B.Tech/M.C.A./M.Sc (CS) degree or equivalent from an accredited university.

Prior Experience:
3 - 6 years of experience working with Python.

Location:

The team
Deloitte Consulting LLP's Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today's complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services. The Core Business Operations practice optimizes clients' business operations and helps them take advantage of new technologies. It drives product and service innovation, improves financial performance, accelerates speed to market, and operates client platforms to innovate continuously. Learn more about our Technology Consulting practice on www.deloitte.com. For information on CBO visit https://www.youtube.com/watch?v=L1cGlScLuX0. For information on the life of an Analyst at CBO visit https://www.youtube.com/watch?v=CMe0DkmMQHI.

Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits to help you thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 300298

Posted 1 month ago

Apply

5.0 years

3 - 8 Lacs

Hyderābād

On-site

Python Developer with PySpark - Hyderabad
Role: Full-time
Availability to join: Immediate
Years of experience: 5+

Required skills and qualifications
- 5+ years of experience as a Python Developer with a strong portfolio of projects.
- Experience working with PySpark is a must.
- Bachelor's degree in Computer Science, Software Engineering, or a related field.
- In-depth understanding of the Python software development stack, ecosystem, frameworks, and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn, and PyTorch.
- Familiarity with database technologies such as SQL and NoSQL.
- Excellent problem-solving ability with solid communication and collaboration skills.

Posted 1 month ago

Apply

7.0 - 12.0 years

22 - 25 Lacs

India

On-site

TECHNICAL ARCHITECT

Key Responsibilities
1. Designing technology systems: Plan and design the structure of technology solutions, and work with design and development teams to assist with the process.
2. Communicating: Communicate system requirements to software development teams, and explain plans to developers and designers. Also communicate the value of a solution to stakeholders and clients.
3. Managing stakeholders: Work with clients and stakeholders to understand their vision for the systems, and manage stakeholder expectations.
4. Architectural oversight: Develop and implement robust architectures for AI/ML and data science solutions, ensuring scalability, security, and performance. Oversee architecture for data-driven web applications and data science projects, providing guidance on best practices in data processing, model deployment, and end-to-end workflows.
5. Problem solving: Identify and troubleshoot technical problems in existing or new systems, and assist with solving technical problems when they arise.
6. Ensuring quality: Ensure systems meet security and quality standards. Monitor systems to ensure they meet both user needs and business goals.
7. Project management: Break down project requirements into manageable pieces of work, and organise the workloads of technical teams.
8. Tool and framework expertise: Utilise relevant tools and technologies, including but not limited to LLMs, TensorFlow, PyTorch, Apache Spark, cloud platforms (AWS, Azure, GCP), web app development frameworks, and DevOps practices.
9. Continuous improvement: Stay current on emerging technologies and methods in AI, ML, data science, and web applications, bringing insights back to the team to foster continuous improvement.

Technical Skills
1. Proficiency in AI/ML frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for developing machine learning and deep learning models.
2. Knowledge of or experience working with self-hosted or managed LLMs.
3. Knowledge of or experience with NLP tools and libraries (e.g., spaCy, NLTK, Hugging Face Transformers) and familiarity with computer vision frameworks like OpenCV and related libraries for image processing and object recognition.
4. Experience or knowledge in back-end frameworks (e.g., Django, Spring Boot, Node.js, Express) and building RESTful and GraphQL APIs.
5. Familiarity with microservices, serverless, and event-driven architectures. Strong understanding of design patterns (e.g., Factory, Singleton, Observer) to ensure code scalability and reusability.
6. Proficiency in modern front-end frameworks such as React, Angular, or Vue.js, with an understanding of responsive design, UX/UI principles, and state management (e.g., Redux).
7. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra), as well as caching solutions (e.g., Redis, Memcached).
8. Expertise in tools such as Apache Spark, Hadoop, Pandas, and Dask for large-scale data processing.
9. Understanding of data warehouses and ETL tools (e.g., Snowflake, BigQuery, Redshift, Airflow) to manage large datasets.
10. Familiarity with visualisation tools (e.g., Tableau, Power BI, Plotly) for building dashboards and conveying insights.
11. Knowledge of deploying models with TensorFlow Serving, Flask, FastAPI, or cloud-native services (e.g., AWS SageMaker, Google AI Platform).
12. Familiarity with MLOps tools and practices for versioning, monitoring, and scaling models (e.g., MLflow, Kubeflow, TFX).
13. Knowledge of or experience in CI/CD, IaC, and cloud-native toolchains.
14. Understanding of security principles, including firewalls, VPC, IAM, and TLS/SSL for secure communication.
15. Knowledge of API Gateway, service mesh (e.g., Istio), and NGINX for API security, rate limiting, and traffic management.

Experience Required
Technical Architect with 7-12 years of experience.

Salary: 22-25 LPA
Job Types: Full-time, Permanent
Pay: ₹2,200,000.00 - ₹2,500,000.00 per year
Location Type: In-person
Work Location: In person
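As an aside on model deployment with FastAPI (item 11 in the technical skills above), here is a minimal, hypothetical sketch of a prediction endpoint; the request schema and the stand-in scoring function are assumptions for illustration, not part of the listing.

```python
# Hypothetical illustration: serving a model behind a FastAPI endpoint.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictRequest(BaseModel):
    features: list[float]

def score(features: list[float]) -> float:
    # Stand-in for a real trained model (e.g., scikit-learn or PyTorch)
    return sum(features) / max(len(features), 1)

@app.post("/predict")
def predict(request: PredictRequest) -> dict:
    return {"score": score(request.features)}

# Run with: uvicorn main:app --reload  (assuming this file is named main.py)
```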

Posted 1 month ago

Apply

2.0 - 3.0 years

2 - 8 Lacs

Bengaluru

On-site

Description for Internal Candidates
The Risk division is responsible for credit, market and operational risk, model risk, independent liquidity risk, and insurance throughout the firm. The Goldman Sachs Group, Inc. is a leading global investment banking, securities and investment management firm that provides a wide range of financial services to a substantial and diversified client base that includes corporations, financial institutions, governments, and individuals. Founded in 1869, the firm is headquartered in New York and maintains offices in all major financial centers around the world. We commit people, capital and ideas to help our clients, shareholders and the communities we serve to grow. Our people are our greatest asset: we say it often and with good reason. It is only with the determination and dedication of our people that we can serve our clients, generate long-term value for our shareholders and contribute to the broader public. We take pride in supporting each colleague both professionally and personally. From collaborative workspaces and ergonomic services to wellbeing and resilience offerings, we offer our people the flexibility and support they need to reach their goals in and outside the office.

RISK BUSINESS
The Risk Business identifies, monitors, evaluates, and manages the firm's financial and non-financial risks in support of the firm's Risk Appetite Statement and the firm's strategic plan. Operating in a fast-paced and dynamic environment and utilizing best-in-class risk tools and frameworks, Risk teams are analytically curious, have an aptitude to challenge, and an unwavering commitment to excellence.

Overview
To ensure uncompromising accuracy and timeliness in the delivery of risk metrics, our platform is continuously growing and evolving. Market Risk Engineering combines the principles of Computer Science, Mathematics and Finance to produce large-scale, computationally intensive calculations of the risk Goldman Sachs faces with each transaction we engage in. Market Risk Engineering has an opportunity for an Associate-level Software Engineer to work across a broad range of applications and an extremely diverse set of technologies to keep the suite operating at peak efficiency. As an Engineer in the Risk Engineering organization, you will have the opportunity to impact one or more aspects of risk management. You will work with a team of talented engineers to drive the build and adoption of common tools, platforms, and applications. The team builds solutions that are offered as a software product or as a hosted service. We are a dynamic team of talented developers and architects who partner with business areas and other technology teams to deliver high-profile projects using a raft of technologies that are fit for purpose (Java, cloud computing, HDFS, Spark, S3, ReactJS, Sybase IQ, among many others). A glimpse of the interesting problems we engineer solutions for includes acquiring high-quality data, storing it, performing risk computations in a limited amount of time using distributed computing, and making data available to enable actionable risk insights through analytical and responsive user interfaces.

WHAT WE LOOK FOR
- Senior Developer on large projects across a global team of developers and risk managers.
- Performance-tune applications to improve memory and CPU utilization.
- Perform statistical analyses to identify trends and exceptions related to Market Risk metrics.
- Build internal and external reporting for the output of risk metric calculations using data extraction tools, such as SQL, and data visualization tools, such as Tableau.
- Utilize web development technologies to facilitate application development for the front-end UI used for risk management actions.
- Develop software for calculations using databases like Snowflake, Sybase IQ and distributed HDFS systems.
- Interact with business users to resolve issues with applications.
- Design and support batch processes using scheduling infrastructure for calculating and distributing data to other systems.
- Oversee junior technical team members in all aspects of the Software Development Life Cycle (SDLC), including design, code review and production migrations.

SKILLS AND EXPERIENCE
- Bachelor's degree in Computer Science, Mathematics, Electrical Engineering or a related technical discipline.
- 2-3 years' experience working in a risk technology team at another bank or financial institution; experience in market risk technology is a plus.
- Experience with one or more major relational/object databases.
- Experience in software development, including a clear understanding of data structures, algorithms, software design and core programming concepts.
- Comfortable multi-tasking, managing multiple stakeholders and working as part of a team.
- Comfortable working with multiple languages. Technologies: Scala, Java, Python, Spark, Linux and shell scripting, TDD (JUnit), build tools (Maven/Gradle/Ant).
- Experience working with process scheduling platforms like Apache Airflow.
- Should be ready to work with GS proprietary technology like Slang/SecDB.
- An understanding of compute resources and the ability to interpret performance metrics (e.g., CPU, memory, threads, file handles).
- Knowledge and experience in distributed computing: parallel computation on a single machine with tools like Dask, and distributed processing on public cloud.
- Knowledge of SDLC and experience working through the entire life cycle of a project from start to end.
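As a rough sketch of the "parallel computation on a single machine" point above, the snippet below fans a hypothetical revaluation function out across local Dask workers; the function and position IDs are illustrative placeholders, not code from this employer.

```python
# Hypothetical illustration: single-machine parallelism with dask.distributed.
import dask
from dask.distributed import Client, LocalCluster

def revalue_position(position_id: int) -> float:
    # Placeholder for a computationally intensive risk calculation
    return sum(i * 1e-6 for i in range(100_000)) + position_id

if __name__ == "__main__":
    # Start a local cluster; Dask schedules tasks across the machine's cores
    cluster = LocalCluster(n_workers=4, threads_per_worker=1)
    client = Client(cluster)

    tasks = [dask.delayed(revalue_position)(pid) for pid in range(100)]
    results = dask.compute(*tasks)
    print(f"Revalued {len(results)} positions")

    client.close()
    cluster.close()
```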

Posted 1 month ago

Apply

2.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description The Risk division is responsible for credit, market and operational risk, model risk, independent liquidity risk, and insurance throughout the firm. The Goldman Sachs Group, Inc. is a leading global investment banking, securities and investment management firm that provides a wide range of financial services to a substantial and diversified client base that includes corporations, financial institutions, governments, and individuals. Founded in 1869, the firm is headquartered in New York and maintains offices in all major financial centers around the world. We commit people, capital and ideas to help our clients, shareholders and the communities we serve to grow. Our people are our greatest asset – we say it often and with good reason. It is only with the determination and dedication of our people that we can serve our clients, generate long-term value for our shareholders and contribute to the broader public. We take pride in supporting each colleague both professionally and personally. From collaborative workspaces and ergonomic services to wellbeing and resilience offerings, we offer our people the flexibility and support they need to reach their goals in and outside the office. RISK BUSINESS The Risk Business identifies, monitors, evaluates, and manages the firm's financial and non-financial risks in support of the firm's Risk Appetite Statement and the firm's strategic plan. Operating in a fast-paced and dynamic environment and utilizing best-in-class risk tools and frameworks, Risk teams are analytically curious, have an aptitude to challenge, and an unwavering commitment to excellence. Overview To ensure uncompromising accuracy and timeliness in the delivery of risk metrics, our platform is continuously growing and evolving. Market Risk Engineering combines the principles of Computer Science, Mathematics and Finance to produce large-scale, computationally intensive calculations of the risk Goldman Sachs faces with each transaction we engage in. Market Risk Engineering has an opportunity for an Associate-level Software Engineer to work across a broad range of applications and an extremely diverse set of technologies to keep the suite operating at peak efficiency. As an Engineer in the Risk Engineering organization, you will have the opportunity to impact one or more aspects of risk management. You will work with a team of talented engineers to drive the build and adoption of common tools, platforms, and applications. The team builds solutions that are offered as a software product or as a hosted service. We are a dynamic team of talented developers and architects who partner with business areas and other technology teams to deliver high-profile projects using a raft of technologies that are fit for purpose (Java, cloud computing, HDFS, Spark, S3, ReactJS, Sybase IQ among many others). The interesting problems we engineer solutions for include acquiring high-quality data, storing it, performing risk computations in a limited amount of time using distributed computing, and making data available to enable actionable risk insights through analytical and responsive user interfaces. What We Look For Act as a senior developer on large projects across a global team of developers and risk managers. Performance-tune applications to improve memory and CPU utilization. Perform statistical analyses to identify trends and exceptions related to Market Risk metrics. 
Build internal and external reporting for the output of risk metric calculations using data extraction tools, such as SQL, and data visualization tools, such as Tableau. Utilize web development technologies to facilitate application development for the front-end UI used for risk management actions. Develop software for calculations using databases like Snowflake, Sybase IQ and distributed HDFS systems. Interact with business users to resolve issues with applications. Design and support batch processes using scheduling infrastructure for calculating and distributing data to other systems. Oversee junior technical team members in all aspects of the Software Development Life Cycle (SDLC), including design, code review and production migrations. Skills And Experience Bachelor's degree in Computer Science, Mathematics, Electrical Engineering or a related technical discipline. 2-3 years' experience working in a risk technology team at another bank or financial institution; experience in market risk technology is a plus. Experience with one or more major relational/object databases. Experience in software development, including a clear understanding of data structures, algorithms, software design and core programming concepts. Comfortable multi-tasking, managing multiple stakeholders and working as part of a team. Comfortable working with multiple languages. Technologies: Scala, Java, Python, Spark, Linux and shell scripting, TDD (JUnit), build tools (Maven/Gradle/Ant). Experience working with process scheduling platforms like Apache Airflow. Should be ready to work with GS proprietary technology like Slang/SECDB. An understanding of compute resources and the ability to interpret performance metrics (e.g., CPU, memory, threads, file handles). Knowledge of and experience in distributed computing – parallel computation on a single machine with Dask, and distributed processing on public cloud. Knowledge of the SDLC and experience working through the entire life cycle of a project from start to end.
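
The posting's emphasis on single-machine parallel computation with Dask can be illustrated with a short sketch. This is not Goldman Sachs code; the parquet path and the column names ('desk', 'notional', 'delta', 'spot_shock') are hypothetical, and it only shows how Dask parallelizes a simple per-desk stress aggregation across local cores.

```python
# A minimal sketch (not from the posting): single-machine parallel aggregation with Dask,
# in the spirit of the distributed risk computations the role describes.
import dask.dataframe as dd

# Read many per-desk position files lazily; Dask spreads the work across local cores.
positions = dd.read_parquet("positions/*.parquet")  # hypothetical path

# A toy stressed-P&L column computed per position (hypothetical columns).
positions["pnl_shock"] = positions["notional"] * positions["delta"] * positions["spot_shock"]

# Aggregate per desk; .compute() triggers the parallel execution and returns a pandas Series.
stress_by_desk = positions.groupby("desk")["pnl_shock"].sum().compute()
print(stress_by_desk.sort_values().head())
```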

Posted 1 month ago

Apply

0.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Position Summary We are seeking a highly motivated and analytical Quant Analyst to join Futures First. The role involves supporting the development and execution of quantitative strategies across financial markets. Job Profile Statistical Arbitrage & Strategy Development Design and implement pairs, mean-reversion, and relative value strategies in fixed income (govvies, corporate bonds, IRS). Apply cointegration tests (Engle-Granger, Johansen), Kalman filters, and machine learning techniques for signal generation. Optimize execution using transaction cost analysis (TCA). Correlation & Volatility Analysis Model dynamic correlations between bonds, rates, and macro variables using PCA, copulas, and rolling regressions. Forecast yield curve volatility using GARCH, stochastic volatility models, and implied-vol surfaces for swaptions. Identify regime shifts (e.g., monetary policy impacts) and adjust strategies accordingly. Seasonality & Pattern Recognition Analyse calendar effects (quarter-end rebalancing, liquidity patterns) in sovereign bond futures and repo markets. Develop time-series models (SARIMA, Fourier transforms) to detect cyclical trends. Backtesting & Automation Build Python-based backtesting frameworks (Backtrader, Qlib) to validate strategies. Automate Excel-based reporting (VBA, xlwings) for P&L attribution and risk dashboards. Integrate Bloomberg/Refinitiv APIs for real-time data feeds. Requirements Education Qualifications B.Tech Work Experience 0-3 years Skill Set Must have: Strong grasp of probability theory, stochastic calculus (Ito’s Lemma, SDEs), and time-series econometrics (ARIMA, VAR, GARCH). Must have: Expertise in linear algebra (PCA, eigenvalue decomposition), numerical methods (Monte Carlo, PDE solvers), and optimization techniques. Preferred: Knowledge of Bayesian statistics, Markov Chain Monte Carlo (MCMC), and machine learning (supervised/unsupervised learning). Libraries: NumPy, Pandas, statsmodels, scikit-learn, arch (GARCH models). Backtesting: Backtrader, Zipline, or custom event-driven frameworks. Data handling: SQL, Dask (for large datasets). Power Query, pivot tables, Bloomberg Excel functions (BDP, BDH). VBA scripting for various tools and automation. Experience with C++/Java (low-latency systems), QuantLib (fixed income pricing), or R (statistical l). Yield curve modelling (Nelson-Siegel, Svensson), duration/convexity, OIS pricing. Credit spreads, CDS pricing, and bond-CDS basis arbitrage. Familiarity with VaR, CVaR, stress testing, and liquidity risk metrics. Understanding of CCIL, NDS-OM (Indian market infrastructure). Ability to translate intuition and patterns into quant models. Strong problem-solving and communication skills (must explain complex models to non-quants). Comfortable working in a fast-paced work environment. Location: Gurugram. Work hours will be aligned to APAC markets.
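
Two of the techniques this posting names, the Engle-Granger cointegration test and GARCH volatility forecasting, can be sketched with libraries it lists (statsmodels and arch). This is illustrative only; the price series are simulated placeholders, not market data.

```python
# A minimal sketch of an Engle-Granger cointegration test and a GARCH(1,1) fit.
import numpy as np
from statsmodels.tsa.stattools import coint
from arch import arch_model

rng = np.random.default_rng(0)
common = 100 + np.cumsum(rng.normal(size=1000))        # shared stochastic trend
bond_a = common + rng.normal(scale=0.5, size=1000)     # two cointegrated "price" series
bond_b = 0.8 * common + rng.normal(scale=0.5, size=1000)

t_stat, p_value, _ = coint(bond_a, bond_b)
print(f"Engle-Granger p-value: {p_value:.4f}")         # small p-value -> evidence of cointegration

returns = 100 * np.diff(bond_a) / bond_a[:-1]          # percent returns for the volatility model
garch_fit = arch_model(returns, vol="Garch", p=1, q=1).fit(disp="off")
print(garch_fit.forecast(horizon=5).variance.iloc[-1]) # 5-step-ahead variance forecast
```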

Posted 1 month ago

Apply

7.0 - 12.0 years

25 - 30 Lacs

Pune

Work from Office

At BNY, our culture empowers you to grow and succeed. As a leading global financial services company at the center of the world's financial system, we touch nearly 20% of the world's investible assets. Every day around the globe, our 50,000+ employees bring the power of their perspective to the table to create solutions with our clients that benefit businesses, communities and people everywhere. We continue to be a leader in the industry, awarded as a top home for innovators and for creating an inclusive workplace. Through our unique ideas and talents, together we help make money work for the world. This is what it's all about. We're seeking a future team member for the role of Vice President I to join our Data Management & Quantitative Analysis team. This role is located in Pune, MH or Chennai, TN (Hybrid). In this role, you'll make an impact in the following ways: BNY Data Analytics Reporting and Transformation ("DART") has grown rapidly, and today it represents a highly motivated and engaged team of skilled professionals with expertise in financial industry practices, reporting, analytics, and regulation. The team works closely with various groups across BNY to support the firm's Capital Adequacy, Counterparty Credit and Enterprise Risk modelling and data analytics, alongside support for the annual Comprehensive Capital Analysis and Review (CCAR) Stress Test. The Counterparty Credit Risk Data Analytics Team within DART designs and develops data-driven solutions aimed at strengthening the control framework around our risk metrics and reporting. For this team, we are looking for a Counterparty Risk Analytics Developer to support our Counterparty Credit Risk control framework. Develop analytical tools using SQL & Python to drive business insights. Utilize outlier detection methodologies to identify data anomalies in the financial risk space, ensuring proactive risk management. Analyze business requirements and translate them into practical solutions, developing data-driven controls to mitigate potential risks. Plan and execute projects from concept to final implementation, demonstrating strong project management skills. Present solutions to senior stakeholders, effectively communicating technical concepts and results. Collaborate with internal and external auditors and regulators to ensure compliance with prescribed standards, maintaining the highest level of integrity and transparency. To be successful in this role, we're seeking the following: A Bachelor's degree in Engineering, Computer Science, Data Science, or a related discipline (Master's degree preferred). At least 3 years of experience in a similar role or in Python development/data analytics. Strong proficiency in Python (including data analytics and data visualization libraries) and SQL, with basic knowledge of HTML and Flask. Ability to partner with technology and other stakeholders to ensure effective functional requirements, design, construction, and testing. Knowledge of financial risk concepts and financial markets is strongly preferred. Familiarity with outlier detection techniques (including the autoencoder method, random forest, etc.), clustering (k-means, etc.), and time series analysis (ARIMA, EWMA, GARCH, etc.) is a plus. Practical experience working with Python (Pandas, NumPy, Matplotlib, Plotly, Dash, Scikit-learn, TensorFlow, Torch, Dask, CUDA). Intermediate SQL skills (including querying data, joins, table creation, and basic performance optimization techniques). Knowledge of financial risk concepts and 
financial markets. Knowledge of outlier detection techniques, clustering, and time series analysis. Strong project management skills.
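
One of the techniques the posting lists, random-forest-style outlier detection over risk metrics, might look roughly like the sketch below. It is illustrative only: the exposure table is made up, and the column names and contamination rate are arbitrary placeholders.

```python
# A minimal, illustrative sketch of outlier detection on counterparty risk metrics
# using scikit-learn's IsolationForest.
import pandas as pd
from sklearn.ensemble import IsolationForest

df = pd.DataFrame({
    "exposure":   [1.20, 1.30, 1.10, 1.25, 9.80, 1.15, 1.30],  # one obvious spike
    "collateral": [0.90, 1.00, 0.95, 1.00, 1.00, 0.92, 0.98],
})

model = IsolationForest(contamination=0.15, random_state=42)
df["is_outlier"] = model.fit_predict(df[["exposure", "collateral"]]) == -1  # -1 marks anomalies
print(df[df["is_outlier"]])
```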

Posted 1 month ago

Apply

12.0 - 18.0 years

0 Lacs

Tamil Nadu, India

Remote

Join us as we work to create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all. This position requires expertise in designing, developing, debugging, and maintaining AI-powered applications and data engineering workflows for both local and cloud environments. The role involves working on large-scale projects, optimizing AI/ML pipelines, and ensuring scalable data infrastructure. As a PMTS, you will be responsible for integrating Generative AI (GenAI) capabilities, building data pipelines for AI model training, and deploying scalable AI-powered microservices. You will collaborate with AI/ML, Data Engineering, DevOps, and Product teams to deliver impactful solutions that enhance our products and services. Additionally, it would be desirable if the candidate has experience in retrieval-augmented generation (RAG), fine-tuning pre-trained LLMs, AI model evaluation, data pipeline automation, and optimizing cloud-based AI deployments. Responsibilities AI-Powered Software Development & API Integration Develop AI-driven applications, microservices, and automation workflows using FastAPI, Flask, or Django, ensuring cloud-native deployment and performance optimization. Integrate OpenAI APIs (GPT models, Embeddings, Function Calling) and Retrieval-Augmented Generation (RAG) techniques to enhance AI-powered document retrieval, classification, and decision-making. Data Engineering & AI Model Performance Optimization Design, build, and optimize scalable data pipelines for AI/ML workflows using Pandas, PySpark, and Dask, integrating data sources such as Kafka, AWS S3, Azure Data Lake, and Snowflake. Enhance AI model inference efficiency by implementing vector retrieval using FAISS, Pinecone, or ChromaDB, and optimize API latency with tuning techniques (temperature, top-k sampling, max tokens settings). Microservices, APIs & Security Develop scalable RESTful APIs for AI models and data services, ensuring integration with internal and external systems while securing API endpoints using OAuth, JWT, and API Key Authentication. Implement AI-powered logging, observability, and monitoring to track data pipelines, model drift, and inference accuracy, ensuring compliance with AI governance and security best practices. AI & Data Engineering Collaboration Work with AI/ML, Data Engineering, and DevOps teams to optimize AI model deployments, data pipelines, and real-time/batch processing for AI-driven solutions. Engage in Agile ceremonies, backlog refinement, and collaborative problem-solving to scale AI-powered workflows in areas like fraud detection, claims processing, and intelligent automation. Cross-Functional Coordination and Communication Collaborate with Product, UX, and Compliance teams to align AI-powered features with user needs, security policies, and regulatory frameworks (HIPAA, GDPR, SOC2). Ensure seamless integration of structured and unstructured data sources (SQL, NoSQL, vector databases) to improve AI model accuracy and retrieval efficiency. Mentorship & Knowledge Sharing Mentor junior engineers on AI model integration, API development, and scalable data engineering best practices, and conduct knowledge-sharing sessions. Education & Experience Required 12-18 years of experience in software engineering or AI/ML development, preferably in AI-driven solutions. Hands-on experience with Agile development, SDLC, CI/CD pipelines, and AI model deployment lifecycles. 
Bachelor’s Degree or equivalent in Computer Science, Engineering, Data Science, or a related field. Proficiency in full-stack development with expertise in Python (preferred for AI), Java Experience with structured & unstructured data: SQL (PostgreSQL, MySQL, SQL Server) NoSQL (OpenSearch, Redis, Elasticsearch) Vector Databases (FAISS, Pinecone, ChromaDB) Cloud & AI Infrastructure AWS: Lambda, SageMaker, ECS, S3 Azure: Azure OpenAI, ML Studio GenAI Frameworks & Tools: OpenAI API, Hugging Face Transformers, LangChain, LlamaIndex, AutoGPT, CrewAI. Experience in LLM deployment, retrieval-augmented generation (RAG), and AI search optimization. Proficiency in AI model evaluation (BLEU, ROUGE, BERT Score, cosine similarity) and responsible AI deployment. Strong problem-solving skills, AI ethics awareness, and the ability to collaborate across AI, DevOps, and data engineering teams. Curiosity and eagerness to explore new AI models, tools, and best practices for scalable GenAI adoption. About Athenahealth Here’s our vision: To create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all. What’s unique about our locations? From an historic, 19th century arsenal to a converted, landmark power plant, all of athenahealth’s offices were carefully chosen to represent our innovative spirit and promote the most positive and productive work environment for our teams. Our 10 offices across the United States and India — plus numerous remote employees — all work to modernize the healthcare experience, together. Our Company Culture Might Be Our Best Feature. We don't take ourselves too seriously. But our work? That’s another story. athenahealth develops and implements products and services that support US healthcare: It’s our chance to create healthier futures for ourselves, for our family and friends, for everyone. Our vibrant and talented employees — or athenistas, as we call ourselves — spark the innovation and passion needed to accomplish our goal. We continue to expand our workforce with amazing people who bring diverse backgrounds, experiences, and perspectives at every level, and foster an environment where every athenista feels comfortable bringing their best selves to work. Our size makes a difference, too: We are small enough that your individual contributions will stand out — but large enough to grow your career with our resources and established business stability. Giving back is integral to our culture. Our athenaGives platform strives to support food security, expand access to high-quality healthcare for all, and support STEM education to develop providers and technologists who will provide access to high-quality healthcare for all in the future. As part of the evolution of athenahealth’s Corporate Social Responsibility (CSR) program, we’ve selected nonprofit partners that align with our purpose and let us foster long-term partnerships for charitable giving, employee volunteerism, insight sharing, collaboration, and cross-team engagement. What can we do for you? Along with health and financial benefits, athenistas enjoy perks specific to each location, including commuter support, employee assistance programs, tuition assistance, employee resource groups, and collaborative workspaces — some offices even welcome dogs. In addition to our traditional benefits and perks, we sponsor events throughout the year, including book clubs, external speakers, and hackathons. 
And we provide athenistas with a company culture based on learning, the support of an engaged team, and an inclusive environment where all employees are valued. We also encourage a better work-life balance for athenistas with our flexibility. While we know in-office collaboration is critical to our vision, we recognize that not all work needs to be done within an office environment, full-time. With consistent communication and digital collaboration tools, athenahealth enables employees to find a balance that feels fulfilling and productive for each individual situation.
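
The retrieval step of the RAG workflows this posting describes (FAISS, Pinecone, or ChromaDB in front of an LLM) can be sketched as follows. This is not athenahealth's implementation; the embeddings are random placeholders standing in for a real embedding model, and the dimensions and k are arbitrary.

```python
# A minimal, illustrative sketch of vector retrieval with FAISS for a RAG pipeline.
import numpy as np
import faiss

dim = 384
rng = np.random.default_rng(0)
doc_embeddings = rng.normal(size=(1000, dim)).astype("float32")  # stand-ins for embedded documents

index = faiss.IndexFlatIP(dim)           # inner-product index; normalize vectors for cosine similarity
faiss.normalize_L2(doc_embeddings)
index.add(doc_embeddings)

query = rng.normal(size=(1, dim)).astype("float32")              # stand-in for an embedded user query
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)     # top-5 candidate documents to place in the LLM prompt
print(ids[0], scores[0])
```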

Posted 1 month ago

Apply

7.0 - 12.0 years

22 - 25 Lacs

India

On-site

TECHNICAL ARCHITECT Key Responsibilities 1. Designing technology systems: Plan and design the structure of technology solutions, and work with design and development teams to assist with the process. 2. Communicating: Communicate system requirements to software development teams, and explain plans to developers and designers. They also communicate the value of a solution to stakeholders and clients. 3. Managing Stakeholders: Work with clients and stakeholders to understand their vision for the systems. Should also manage stakeholder expectations. 4. Architectural Oversight: Develop and implement robust architectures for AI/ML and data science solutions, ensuring scalability, security, and performance. Oversee architecture for data-driven web applications and data science projects, providing guidance on best practices in data processing, model deployment, and end-to-end workflows. 5. Problem Solving: Identify and troubleshoot technical problems in existing or new systems. Assist with solving technical problems when they arise. 6. Ensuring Quality: Ensure that systems meet security and quality standards. Monitor systems to ensure they meet both user needs and business goals. 7. Project management: Break down project requirements into manageable pieces of work, and organise the workloads of technical teams. 8. Tool & Framework Expertise: Utilise relevant tools and technologies, including but not limited to LLMs, TensorFlow, PyTorch, Apache Spark, cloud platforms (AWS, Azure, GCP), web app development frameworks and DevOps practices. 9. Continuous Improvement: Stay current on emerging technologies and methods in AI, ML, data science, and web applications, bringing insights back to the team to foster continuous improvement. Technical Skills 1. Proficiency in AI/ML frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for developing machine learning and deep learning models. 2. Knowledge or experience working with self-hosted or managed LLMs. 3. Knowledge or experience with NLP tools and libraries (e.g., SpaCy, NLTK, Hugging Face Transformers) and familiarity with Computer Vision frameworks like OpenCV and related libraries for image processing and object recognition. 4. Experience or knowledge in back-end frameworks (e.g., Django, Spring Boot, Node.js, Express, etc.) and building RESTful and GraphQL APIs. 5. Familiarity with microservices, serverless, and event-driven architectures. Strong understanding of design patterns (e.g., Factory, Singleton, Observer) to ensure code scalability and reusability. 6. Proficiency in modern front-end frameworks such as React, Angular, or Vue.js, with an understanding of responsive design, UX/UI principles, and state management (e.g., Redux). 7. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra), as well as caching solutions (e.g., Redis, Memcached). 8. Expertise in tools such as Apache Spark, Hadoop, Pandas, and Dask for large-scale data processing. 9. Understanding of data warehouses and ETL tools (e.g., Snowflake, BigQuery, Redshift, Airflow) to manage large datasets. 10. Familiarity with visualisation tools (e.g., Tableau, Power BI, Plotly) for building dashboards and conveying insights. 11. Knowledge of deploying models with TensorFlow Serving, Flask, FastAPI, or cloud-native services (e.g., AWS SageMaker, Google AI Platform). 12. Familiarity with MLOps tools and practices for versioning, monitoring, and scaling models (e.g., MLflow, Kubeflow, TFX). 13. 
Knowledge or experience in CI/CD, IaC and Cloud Native toolchains. 14. Understanding of security principles, including firewalls, VPC, IAM, and TLS/SSL for secure communication. 15. Knowledge of API Gateway, service mesh (e.g., Istio), and NGINX for API security, rate limiting, and traffic management. Experience Required Technical Architect with 7 - 12 years of experience Salary 22-25 LPA Job Types: Full-time, Permanent Pay: ₹2,200,000.00 - ₹2,500,000.00 per year Experience: total work: 1 year (Preferred) Work Location: In person
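
As an illustration of the model-deployment stack the posting names (Flask or FastAPI in front of a trained model), here is a minimal FastAPI sketch. The model file "model.joblib", the flat feature-vector schema, and the filename used in the run command are all hypothetical.

```python
# A minimal, illustrative sketch of serving a pre-trained model behind a FastAPI endpoint.
from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI()
model = joblib.load("model.joblib")  # a pre-trained scikit-learn estimator, assumed to exist

class Features(BaseModel):
    values: list[float]  # flat feature vector in the order the model expects

@app.post("/predict")
def predict(features: Features):
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}

# Run with, e.g.:  uvicorn serve:app --reload   (assuming this file is saved as serve.py)
```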

Posted 1 month ago

Apply

5.0 - 7.0 years

10 - 14 Lacs

Bengaluru

Work from Office

As an Applied AI Engineer, you will work at the intersection of AI research and practical implementation. You will develop machine learning (ML) and deep learning (DL) models, integrate them into our SaaS platform, and optimize them for scalability, performance, and business impact. Key Responsibilities: Data Engineering & Feature Engineering: Work with structured and unstructured data to build high-quality datasets. Develop robust feature engineering pipelines to improve model accuracy. Implement data preprocessing and augmentation techniques. MLOps & AI Infrastructure: Build and maintain ML pipelines for continuous integration and deployment (CI/CD). Implement model monitoring, retraining, and performance tracking frameworks. Work with cloud platforms (AWS, GCP, Azure) for AI model deployment and scaling. AI Integration in SaaS Applications: Collaborate with software engineers to integrate AI models into customer-facing SaaS products. Develop APIs and microservices for seamless AI-powered functionalities. Optimize inference performance for real-time and batch processing scenarios. Collaboration & Research: Stay updated with the latest AI research and bring innovative solutions to production. Work closely with product managers, designers, and engineers to align AI capabilities with business goals. Participate in code reviews, knowledge sharing, and AI/ML best practices. Prompt Engineering: Design, develop, and refine AI-generated text prompts for various applications to ensure accuracy, engagement, and relevance. Craft and optimize prompts that guide our AI systems to generate accurate, informative, and creative outputs. Build accessible libraries of prompts, keywords, and syntax guidelines for optimal query results. Develop robust evaluation frameworks to assess AI model performance and prompt effectiveness. Test and analyze outputs by experimenting with different prompts and measuring against defined metrics. Create dashboards and reporting tools to track AI performance across multiple dimensions. Apply human judgment to identify gaps in AI-generated output and refine prompts accordingly. Implement continuous improvement processes based on evaluation insights. What We're Looking For: Experience: 5-6 years in AI/ML engineering, with hands-on experience in Generative AI, NLP and deploying AI solutions at scale. Technical Expertise: Strong background in machine learning, deep learning, and NLP. Proficiency in Python. Help assess performance metrics, develop novel agent frameworks, create and oversee data workflows, and conduct extensive testing to deploy innovative capabilities. Ship with high intent and work with the team to improve your ability to iterate and ship AI-powered features over time. Experience with data processing tools (Pandas, NumPy, Spark, Dask). Familiarity with MLOps, CI/CD, and cloud platforms (AWS SageMaker, GCP Vertex AI, Azure ML). AI Deployment & Optimization: Experience with optimizing models for performance, interpretability, and real-time applications. Strong Problem-Solving Skills: Ability to translate business problems into AI-driven solutions. SaaS Experience (Preferred): Understanding of how AI enhances SaaS applications and workflows.
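
The prompt-evaluation work described here can be illustrated with a tiny harness that scores prompt variants against expected keywords. This is only a sketch: call_model is a placeholder rather than a real API client, and the prompts, keywords, and scoring rule are made up.

```python
# A minimal, illustrative sketch of a prompt-evaluation loop.
def call_model(prompt: str) -> str:
    # Placeholder for an actual LLM call (vendor SDK, internal gateway, etc.).
    return "The invoice total is 420 USD."

def keyword_score(output: str, expected_keywords: list[str]) -> float:
    # Fraction of expected keywords present in the output -- a crude relevance proxy.
    hits = sum(1 for kw in expected_keywords if kw.lower() in output.lower())
    return hits / len(expected_keywords)

prompt_variants = [
    "Extract the invoice total from the text below.",
    "What is the total amount due? Answer with the number and currency only.",
]
expected = ["420", "USD"]

for prompt in prompt_variants:
    score = keyword_score(call_model(prompt), expected)
    print(f"{score:.2f}  {prompt}")
```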

Posted 1 month ago

Apply

0.0 years

4 - 5 Lacs

Gurgaon

On-site

Location: Gurugram, India. Position Summary We are seeking a highly motivated and analytical Quant Analyst to join Futures First. The role involves supporting the development and execution of quantitative strategies across financial markets. Job Profile Statistical Arbitrage & Strategy Development Design and implement pairs, mean-reversion, and relative value strategies in fixed income (govvies, corporate bonds, IRS). Apply cointegration tests (Engle-Granger, Johansen), Kalman filters, and machine learning techniques for signal generation. Optimize execution using transaction cost analysis (TCA). Correlation & Volatility Analysis Model dynamic correlations between bonds, rates, and macro variables using PCA, copulas, and rolling regressions. Forecast yield curve volatility using GARCH, stochastic volatility models, and implied-vol surfaces for swaptions. Identify regime shifts (e.g., monetary policy impacts) and adjust strategies accordingly. Seasonality & Pattern Recognition Analyse calendar effects (quarter-end rebalancing, liquidity patterns) in sovereign bond futures and repo markets. Develop time-series models (SARIMA, Fourier transforms) to detect cyclical trends. Backtesting & Automation Build Python-based backtesting frameworks (Backtrader, Qlib) to validate strategies. Automate Excel-based reporting (VBA, xlwings) for P&L attribution and risk dashboards. Integrate Bloomberg/Refinitiv APIs for real-time data feeds. Requirements Education Qualifications B.Tech Work Experience 0-3 years Skill Set Must have: Strong grasp of probability theory, stochastic calculus (Ito’s Lemma, SDEs), and time-series econometrics (ARIMA, VAR, GARCH). Must have: Expertise in linear algebra (PCA, eigenvalue decomposition), numerical methods (Monte Carlo, PDE solvers), and optimization techniques. Preferred: Knowledge of Bayesian statistics, Markov Chain Monte Carlo (MCMC), and machine learning (supervised/unsupervised learning). Libraries: NumPy, Pandas, statsmodels, scikit-learn, arch (GARCH models). Backtesting: Backtrader, Zipline, or custom event-driven frameworks. Data handling: SQL, Dask (for large datasets). Power Query, pivot tables, Bloomberg Excel functions (BDP, BDH). VBA scripting for various tools and automation. Experience with C++/Java (low-latency systems), QuantLib (fixed income pricing), or R (statistical l). Yield curve modelling (Nelson-Siegel, Svensson), duration/convexity, OIS pricing. Credit spreads, CDS pricing, and bond-CDS basis arbitrage. Familiarity with VaR, CVaR, stress testing, and liquidity risk metrics. Understanding of CCIL, NDS-OM (Indian market infrastructure). Ability to translate intuition and patterns into quant models. Strong problem-solving and communication skills (must explain complex models to non-quants). Comfortable working in a fast-paced work environment. Location: Gurugram. Work hours will be aligned to APAC markets.

Posted 1 month ago

Apply

0 years

0 Lacs

Andhra Pradesh

On-site

Combine interface design concepts with digital design and establish milestones to encourage cooperation and teamwork. Develop overall concepts for improving the user experience within a business webpage or product, ensuring all interactions are intuitive and convenient for customers. Collaborate with back-end web developers and programmers to improve usability. Conduct thorough testing of user interfaces on multiple platforms to ensure all designs render correctly and systems function properly. Convert jobs from Talend ETL to Python and convert Lead SQLs to Snowflake. This role requires developers with Python and SQL skills. Developers should be proficient in Python (especially Pandas, PySpark, or Dask) for ETL scripting, with strong SQL skills to translate complex queries. They need expertise in Snowflake SQL for migrating and optimizing queries, as well as experience with data pipeline orchestration (e.g., Airflow) and cloud integration for automation and data loading. Familiarity with data transformation, error handling, and logging is also essential. About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
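
A converted Talend job of the kind this listing describes might, in its simplest form, reduce to a pandas transformation loaded into Snowflake via the connector's write_pandas helper. This is only a sketch under assumed inputs: the file path, connection parameters, column names, and target table are all placeholders, not details from the posting.

```python
# A minimal, illustrative sketch of a Talend-style ETL step rewritten in Python.
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

# Extract + transform: the kind of cleanup a Talend tMap component might have done.
orders = pd.read_csv("exports/orders.csv", parse_dates=["order_date"])  # hypothetical extract
orders = orders.dropna(subset=["customer_id"])
orders["order_month"] = orders["order_date"].dt.to_period("M").astype(str)

# Load into Snowflake (placeholder connection parameters).
conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="STAGING",
)
success, _, nrows, _ = write_pandas(conn, orders, table_name="ORDERS_CLEAN", auto_create_table=True)
print(f"Loaded {nrows} rows, success={success}")
```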

Posted 1 month ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Senior Data Scientist — Gen AI/ML Expert Location: Hybrid — Gurugram Company: Mechademy – Industrial Reliability & Predictive Analytics About Mechademy At Mechademy, we are redefining the future of reliability in rotating machinery with our flagship product, Turbomechanica. Built at the intersection of physics-based models, AI, and machine learning, Turbomechanica delivers prescriptive analytics that detect potential equipment issues before they escalate, maximizing uptime, extending asset life, and reducing operational risks for our industrial clients. The Role We are seeking a talented and driven Senior Data Scientist (AI/ML) with 3+ years of experience to join our AI team. You will play a critical role in building scalable ML pipelines, integrating cutting-edge language models, and developing autonomous agent-based systems that transform how predictive maintenance is done for industrial equipment. This is a highly technical and hands-on role, with strong emphasis on real-world AI deployments — working directly with sensor data, time-series analytics, anomaly detection, distributed ML, and LLM-powered agentic workflows. What Makes This Role Unique Work on real-world industrial AI problems, combining physics-based models with modern ML/LLM systems. Collaborate with domain experts, engineers, and product leaders to directly impact critical industrial operations. Freedom to experiment with new tools, models, and techniques — with full ownership of your work. Help shape our technical roadmap as we scale our AI-first predictive analytics platform. Flexible hybrid work culture with high-impact visibility. Key Responsibilities Design & Develop ML Pipelines: Build scalable, production-grade ML pipelines for predictive maintenance, anomaly detection, and time-series analysis. Distributed Model Training: Leverage distributed computing frameworks (e.g., Ray, Dask, Spark, Horovod) for large-scale model training. LLM Integration & Optimization: Fine-tune, optimize, and deploy large language models (Llama, GPT, Mistral, Falcon, etc.) for applications like summarization, RAG (Retrieval-Augmented Generation), and knowledge extraction. Agent-Based AI Pipelines: Build intelligent multi-agent systems capable of reasoning, planning, and executing complex tasks via tool usage, memory, and coordination. End-to-End MLOps: Own the full ML lifecycle — from research, experimentation, deployment, monitoring to production optimization. Algorithm Development: Research, evaluate, and implement state-of-the-art ML/DL/statistical algorithms for real-world sensor data. Collaborative Development: Work closely with cross-functional teams including software engineers, domain experts, product managers, and leadership. Core Requirements 3+ years of professional experience in AI/ML, data science, or applied ML engineering. Strong hands-on experience with modern LLMs (Llama, GPT series, Mistral, Falcon, etc.), fine-tuning, prompt engineering, and RAG techniques. Familiarity with frameworks like LangChain, LlamaIndex, or equivalent for LLM application development. Practical experience in agentic AI pipelines: tool use, sequential reasoning, and multi-agent orchestration. Strong proficiency in Python (Pandas, NumPy, Scikit-learn) and at least one deep learning framework (TensorFlow, PyTorch, or JAX). Exposure to distributed ML frameworks (Ray, Dask, Horovod, Spark ML, etc.). Experience with containerization and orchestration (Docker, Kubernetes). 
Strong problem-solving ability, ownership mindset, and ability to work in fast-paced startup environments. Excellent written and verbal communication skills. Bonus / Good to Have Experience with time-series data, sensor data processing, and anomaly detection. Familiarity with CI/CD pipelines and MLOps best practices. Knowledge of cloud deployment, real-time system optimization, and industrial data security standards. Prior open-source contributions or active GitHub projects. What We Offer Opportunity to work on cutting-edge technology transforming industrial AI. Direct ownership, autonomy, and visibility into product impact. Flexible hybrid work culture. Professional development budget and continuous learning opportunities. Collaborative, fast-moving, and growth-oriented team culture. Health benefits and performance-linked rewards. Potential for equity participation for high-impact contributors. Note: Title and compensation will be aligned with the candidate’s experience and potential impact.
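
A basic version of the sensor-data anomaly detection this role works on can be sketched with a rolling z-score check. This is illustrative only: the vibration series is simulated, and the window size and threshold are arbitrary choices rather than anything from the posting.

```python
# A minimal, illustrative sketch of flagging sensor readings that deviate from a rolling baseline.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
vibration = pd.Series(rng.normal(loc=2.0, scale=0.1, size=500))
vibration.iloc[400:405] += 1.5                       # inject a fault-like spike

rolling_mean = vibration.rolling(window=50).mean()
rolling_std = vibration.rolling(window=50).std()
z_score = (vibration - rolling_mean) / rolling_std

anomalies = vibration[z_score.abs() > 4]             # readings far outside the rolling baseline
print(anomalies.head())
```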

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies