0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Summary
We are seeking a highly skilled and motivated Computer Vision Engineer with strong expertise in Python, image processing, and algorithm design. The ideal candidate should be adept at developing fault-tolerant systems and optimizing data pipelines. This role requires an innovative thinker who thrives in a startup environment and is passionate about building scalable and efficient solutions.

Skills Required
- Strong experience in Python and writing fault-tolerant code
- Extensive expertise in image processing, OpenCV, SciPy, and NumPy
- Good understanding of optimizing data processing pipelines
- Command over geometry, statistics, and designing complex algorithms
- Ability to work and thrive in a startup environment, learn rapidly, and master diverse web technologies and techniques

Roles and Responsibilities
- Leverage open-source code and libraries to quickly experiment and build novel solutions
- Independently think of solutions to complex requirements; possess exceptional logical skills
- Analyze current products in development, including performance, diagnosis, and troubleshooting
- Work with the existing framework and help evolve it by building reusable code and libraries
- Search for and introduce new software-related technologies, processes, and tools to the team

Brownie Points
- Knowledge of Keras, TensorFlow, PyTorch, Ludwig, TensorRT
- Experience with Nvidia DeepStream
- Knowledge of NLP and NLU (NLTK, spaCy, Flair, etc.)
- Understanding of Docker, Kubernetes, and Git
- Familiarity with JavaScript

What We Have to Offer
- Work with a performance-oriented team driven by ownership and open to experimenting with cutting-edge technologies
- Learn to design systems for high accuracy, efficiency, and scalability
- Meritocracy-driven, candid startup culture
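The posting's emphasis on "writing fault-tolerant code" around image-processing steps can be sketched with a small retry wrapper. This is an illustrative sketch, not from the posting: the decorator and the toy binarize step are hypothetical, and a real pipeline would wrap OpenCV/NumPy calls on arrays rather than plain lists.

```python
import time
from functools import wraps

def with_retries(max_attempts=3, delay=0.0):
    """Retry a flaky processing step a bounded number of times before giving up."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            last_exc = None
            for _ in range(max_attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:  # in practice, catch specific I/O errors
                    last_exc = exc
                    time.sleep(delay)
            raise last_exc
        return wrapper
    return decorator

@with_retries(max_attempts=3)
def binarize(image, threshold=128):
    """Threshold a grayscale image (rows of pixel values) to 0/255."""
    return [[255 if px >= threshold else 0 for px in row] for row in image]

result = binarize([[10, 200], [128, 64]])  # [[0, 255], [255, 0]]
```

The same decorator can wrap frame capture or file reads, the usual failure points in a vision pipeline.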
Posted 2 months ago
4.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At eBay, we're more than a global ecommerce leader — we're changing the way the world shops and sells. Our platform empowers millions of buyers and sellers in more than 190 markets around the world. We're committed to pushing boundaries and leaving our mark as we reinvent the future of ecommerce for enthusiasts. Our customers are our compass, authenticity thrives, bold ideas are welcome, and everyone can bring their unique selves to work — every day. We're in this together, sustaining the future of our customers, our company, and our planet. Join a team of passionate thinkers, innovators, and dreamers — and help us connect people and build communities to create economic opportunity for all.

Looking for a company that inspires passion, courage, and creativity, where you can be on the team shaping the future of global commerce? Want to shape how millions of people buy, sell, connect, and share around the world? If you're interested in joining a purpose-driven community that is dedicated to crafting an ambitious and inclusive work environment, join eBay – a company you can be proud to be with.

Role Overview:
The India Analytics Center is responsible for delivering business insights and high-impact analyses to the Product and Business teams within eBay. The team addresses strategic and operational questions facing the business, including user behavior, performance measurement, and product and marketing efficiency. We are currently looking for an Associate Analytics Manager: a versatile and experienced Data & Business Intelligence Analyst to join our growing analytics team. This role will be pivotal in bridging the gap between raw data and actionable business insights. You will be responsible for building and maintaining data pipelines, developing robust BI solutions, and leveraging your programming skills to automate processes and perform advanced analysis.
You will collaborate closely with stakeholders to understand their data needs and deliver impactful solutions that drive data-driven decision-making.

Primary Job Responsibilities
- Data Engineering & Pipeline Development: Design, build, and maintain scalable and reliable data pipelines using Python and related technologies to extract, transform, and load (ETL) data from various sources.
- Data Modeling & Warehousing: Participate in the design and implementation of data models and data warehousing solutions to ensure efficient data storage and retrieval for analytical purposes.
- Business Intelligence Development: Develop and maintain interactive dashboards, reports, and data visualizations using BI tools (e.g., Tableau, Power BI, Looker) to monitor key business metrics and provide actionable insights.
- Python for Data Analysis & Automation: Utilize Python and relevant libraries (e.g., Pandas, NumPy, SciPy) to perform in-depth data analysis and statistical modeling, and to automate reporting and data processing tasks.
- Requirements Gathering & Stakeholder Collaboration: Work closely with business stakeholders to understand their data and reporting requirements and translate them into technical specifications.
- Data Quality & Governance: Implement and monitor data quality processes to ensure accuracy, consistency, and integrity of data used for analysis and reporting. Adhere to data governance policies.
- Performance Optimization: Identify and implement optimizations to data pipelines and BI solutions to improve performance and efficiency.
- Documentation & Knowledge Sharing: Create and maintain comprehensive documentation for data pipelines, BI solutions, and analytical processes. Share knowledge and best practices with the team.
- Ad-hoc Analysis & Problem Solving: Conduct ad-hoc data analysis to answer specific business questions and troubleshoot data-related issues.
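The extract-transform-load pattern described in the responsibilities can be sketched end to end with nothing but the standard library. The CSV source, table name, and validation rule below are hypothetical stand-ins for the real sources and warehouse targets the role would touch:

```python
import csv
import io
import sqlite3

# Extract: read raw rows from a CSV source (a string here; a file or API in practice).
RAW = "order_id,amount\n1,10.5\n2,not_a_number\n3,7.0\n"

def extract(raw):
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows):
    """Coerce types and drop rows that fail validation."""
    clean = []
    for row in rows:
        try:
            clean.append((int(row["order_id"]), float(row["amount"])))
        except ValueError:
            continue  # a real pipeline would quarantine bad records for review
    return clean

def load(rows):
    """Write validated rows into a warehouse table (in-memory SQLite here)."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
    con.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    return con

con = load(transform(extract(RAW)))
total = con.execute("SELECT SUM(amount) FROM orders").fetchone()[0]  # 17.5
```

In production the same three stages would typically be Pandas transforms feeding a real warehouse, with the quarantine step reported through the data-quality monitoring the posting mentions.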
Qualifications
- Bachelor's degree in Computer Science, Engineering, Statistics, Mathematics, or a related quantitative field.
- 4-7 years of hands-on experience in a data-focused role with significant responsibilities in data engineering, business intelligence, and utilizing Python for data tasks.
- Product development exposure: experience working on analytics or data products throughout the product lifecycle — from requirements to delivery.
- Knowledge of Python and relevant data manipulation and analysis libraries (e.g., Pandas, NumPy). Experience with libraries for data visualization (e.g., Matplotlib, Seaborn) is a plus.
- Solid understanding of relational databases (e.g., SQL Server, PostgreSQL, MySQL) and excellent SQL skills for data querying and manipulation.
- Experience designing, building, and maintaining ETL pipelines.
- Proven ability to develop compelling and insightful data visualizations using BI tools such as Tableau, Power BI, or Looker.
- Proven ability to support and improve analytical products that drive business decisions.
- Familiarity with data warehousing concepts and different data modeling techniques.
- Strong analytical and problem-solving skills with the ability to work with complex datasets.
- Good communication and collaboration skills.
- Experience with version control systems (e.g., Git) is a plus.

Benefits are an essential part of your total compensation for the work you do every day. Whether you're single, in a growing family, or nearing retirement, eBay offers a variety of comprehensive and competitive benefit programs to meet your needs, including maternal and paternal leave, paid sabbatical, and plans to help ensure your financial security today and in the years ahead, because we know feeling financially secure during your working years and through retirement is important. Here at eBay, we love creating opportunities for others by connecting people from widely diverse backgrounds, perspectives, and geographies.
So, being diverse and inclusive isn't just something we strive for; it is who we are, and part of what we do each and every day. We want to ensure that as an employee, you feel eBay is a place where, no matter who you are, you feel safe, included, and that you have the opportunity to bring your unique self to work. To learn about eBay's Diversity & Inclusion, click here: https://www.ebayinc.com/company/diversity-inclusion/

Please see the Talent Privacy Notice for information regarding how eBay handles your personal data collected when you use the eBay Careers website or apply for a job with eBay. eBay is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, veteran status, disability, or other legally protected status. If you have a need that requires accommodation, please contact us at talent@ebay.com. We will make every effort to respond to your request for accommodation as soon as possible. View our accessibility statement to learn more about eBay's commitment to ensuring digital accessibility for people with disabilities.
Posted 2 months ago
4.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
We are seeking an experienced Python Backend Engineer to join our team in building high-performance, scalable backend systems for algorithmic trading. The ideal candidate will have strong expertise in developing exchange integrations, optimizing order management systems, and ensuring low-latency execution.

Responsibilities
- Design and develop scalable backend systems for real-time trading applications.
- Build and optimize order management systems with smart order routing capabilities.
- Integrate multiple exchange APIs (REST, WebSockets, FIX protocol) for seamless connectivity.
- Develop high-performance execution engines with low-latency trade execution.
- Implement real-time monitoring, logging, and alerting systems to ensure reliability.
- Design fault-tolerant and distributed architectures for handling large-scale transactions.
- Work on message queues (RabbitMQ, Kafka) for efficient data processing.
- Ensure system security and compliance with financial industry standards.
- Collaborate with quant researchers and business teams to implement trading logic.

Requirements
- Strong proficiency in Python (4+ years) with a focus on backend development.
- Expertise in API development and integration using REST, WebSockets, and the FIX protocol.
- Experience with asynchronous programming (asyncio, aiohttp) for high-concurrency applications.
- Strong knowledge of database systems (MySQL, PostgreSQL, MongoDB, Redis, time-series databases).
- Proficiency in containerization and orchestration (Docker, Kubernetes, AWS).
- Experience with message queues (RabbitMQ, Kafka) for real-time data processing.
- Knowledge of monitoring tools (Prometheus, Grafana, ELK Stack) for system observability.
- Experience with scalable system design, microservices, and distributed architectures.
- Experience with real-time data processing and execution.
- Experience developing backtesting engines capable of processing millions of events per second.
- Understanding of rule-based trading engines supporting multiple indicators and event processing.
- Experience with data processing libraries: pandas, NumPy, SciPy, scikit-learn, and Polars.
- Knowledge of parallel computing frameworks (Dask) for high-performance computation.
- Familiarity with automated testing frameworks for trading strategies and system components.
- Experience with data visualization tools for trading strategy analysis and performance metrics.
- Knowledge of quantitative trading strategies and algorithmic trading infrastructure.
- Contributions to open-source backend or data engineering projects.

This job was posted by Shivangi Mathur from Unifynd.
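The asynchronous, high-concurrency style the requirements call for (asyncio) can be sketched with a toy smart-order-routing fan-out. The venue names and order payload are hypothetical, and a real router would speak to exchange APIs over WebSockets or FIX rather than returning a stubbed acknowledgement:

```python
import asyncio

async def route_order(venue, order):
    # Stand-in for sending an order to one exchange venue (network I/O in practice).
    await asyncio.sleep(0)
    return {"venue": venue, "order": order, "status": "ack"}

async def smart_route(order, venues):
    # Fan the order out to all venues concurrently and gather acknowledgements;
    # with real network latency, total time is the slowest venue, not the sum.
    return await asyncio.gather(*(route_order(v, order) for v in venues))

acks = asyncio.run(smart_route({"symbol": "INFY", "qty": 100}, ["NSE", "BSE"]))
```

A production router would add per-venue timeouts (`asyncio.wait_for`) and route partial fills, but the concurrency skeleton is the same.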
Posted 2 months ago
8.0 years
0 Lacs
Noida
On-site
Position: AI/ML Lead (CE80SF RM 3261)

Good to have Skills
- Knowledge and experience in building knowledge graphs in production.
- Understanding of multi-agent systems and their applications in complex problem-solving scenarios.

Technical Skills required:
- Solid experience in time series analysis, anomaly detection, and traditional machine learning techniques such as regression, classification, predictive modeling, and clustering, plus a deep learning stack using Python.
- Experience with cloud infrastructure for AI/ML on AWS (SageMaker, QuickSight, Athena, Glue).
- Expertise in building enterprise-grade, secure data ingestion pipelines for unstructured data (ETL/ELT), including indexing, search, and advanced retrieval patterns.
- Proficiency in Python, TypeScript, NodeJS, ReactJS (and equivalent) and frameworks (e.g., pandas, NumPy, scikit-learn, OpenCV, SciPy), Glue crawler, ETL.
- Experience with data visualization tools (e.g., Matplotlib, Seaborn, QuickSight).
- Knowledge of deep learning frameworks (e.g., TensorFlow, Keras, PyTorch).
- Experience with version control systems (e.g., Git, CodeCommit).
- Strong knowledge and experience in Generative AI / LLM-based development.
- Strong experience working with key LLM APIs (e.g., AWS Bedrock, Azure OpenAI / OpenAI) and LLM frameworks (e.g., LangChain, LlamaIndex).
- Knowledge of effective text chunking techniques for optimal processing and indexing of large documents or datasets.
- Proficiency in generating and working with text embeddings, with an understanding of embedding spaces and their applications in semantic search and information retrieval.
- Experience with RAG concepts and fundamentals (vector databases, AWS OpenSearch, semantic search, etc.).
- Expertise in implementing RAG systems that combine knowledge bases with Generative AI models.
- Knowledge of training and fine-tuning foundation models (Anthropic Claude, Mistral, etc.), including multimodal inputs and outputs.
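The retrieval step at the heart of the RAG systems listed above reduces to nearest-neighbor search in an embedding space. A minimal sketch with hand-made toy vectors; a real system would obtain embeddings from a model and store them in a vector database rather than a dict:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Toy "embeddings": in practice these come from an embedding model, and the
# document names and vectors here are purely illustrative.
DOCS = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.2],
    "return window": [0.8, 0.2, 0.1],
}

def retrieve(query_vec, k=2):
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

top = retrieve([1.0, 0.0, 0.0], k=2)  # the two docs nearest the query vector
```

The retrieved chunks would then be concatenated into the LLM prompt, which is the "augmented" part of retrieval-augmented generation.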
Candidate Roles and Responsibilities
- Develop and implement machine learning models and algorithms.
- Work closely with project stakeholders to understand requirements and translate them into deliverables.
- Utilize statistical and machine learning techniques to analyze and interpret complex data sets.
- Stay updated with the latest advancements in AI/ML technologies and methodologies.
- Collaborate with cross-functional teams to support various AI/ML initiatives.

Job Category: Embedded HW_SW
Job Type: Full Time
Job Location: Noida
Experience: 8+ years
Notice period: 0-15 days
Posted 2 months ago
5.0 - 8.0 years
4 - 7 Lacs
Ahmedabad
On-site
Position: AI/ML Developer (CE58SF RM 3260)

Candidate Roles and Responsibilities
- Develop and implement machine learning models and algorithms.
- Work closely with project stakeholders to understand requirements and translate them into deliverables.
- Utilize statistical and machine learning techniques to analyze and interpret complex data sets.
- Stay updated with the latest advancements in AI/ML technologies and methodologies.
- Collaborate with cross-functional teams to support various AI/ML initiatives.

Good to have Skills
- Knowledge and experience in building knowledge graphs in production.
- Understanding of multi-agent systems and their applications in complex problem-solving scenarios.

Technical Skills required:
- Solid experience in time series analysis, anomaly detection, and traditional machine learning techniques such as regression, classification, predictive modeling, and clustering, plus a deep learning stack using Python.
- Experience with cloud infrastructure for AI/ML on AWS (SageMaker, QuickSight, Athena, Glue).
- Expertise in building enterprise-grade, secure data ingestion pipelines for unstructured data (ETL/ELT), including indexing, search, and advanced retrieval patterns.
- Proficiency in Python, TypeScript, NodeJS, ReactJS (and equivalent) and frameworks (e.g., pandas, NumPy, scikit-learn, OpenCV, SciPy), Glue crawler, ETL.
- Experience with data visualization tools (e.g., Matplotlib, Seaborn, QuickSight).
- Knowledge of deep learning frameworks (e.g., TensorFlow, Keras, PyTorch).
- Experience with version control systems (e.g., Git, CodeCommit).
- Strong knowledge and experience in Generative AI / LLM-based development.
- Strong experience working with key LLM APIs (e.g., AWS Bedrock, Azure OpenAI / OpenAI) and LLM frameworks (e.g., LangChain, LlamaIndex).
- Knowledge of effective text chunking techniques for optimal processing and indexing of large documents or datasets.
- Proficiency in generating and working with text embeddings, with an understanding of embedding spaces and their applications in semantic search and information retrieval.
- Experience with RAG concepts and fundamentals (vector databases, AWS OpenSearch, semantic search, etc.).
- Expertise in implementing RAG systems that combine knowledge bases with Generative AI models.
- Knowledge of training and fine-tuning foundation models (Anthropic Claude, Mistral, etc.), including multimodal inputs and outputs.

Job Category: Embedded HW_SW
Job Type: Full Time
Job Location: Ahmedabad / Indore / Pune
Experience: 5-8 years
Notice period: 0-15 days
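The "text chunking techniques" both postings mention commonly mean fixed-size windows with overlap, so context that spans a boundary appears in two chunks. A minimal sketch; the sizes are illustrative, and production splitters are usually token- or sentence-aware rather than character-based:

```python
def chunk_text(text, chunk_size=100, overlap=20):
    """Split text into fixed-size chunks; consecutive chunks share `overlap` chars."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # last window already reaches the end of the text
    return chunks

chunks = chunk_text("a" * 250, chunk_size=100, overlap=20)  # 3 chunks
```

Each chunk would then be embedded and indexed individually; the overlap trades a little index size for not losing sentences cut at chunk boundaries.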
Posted 2 months ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job role: Software Engineer (C/C++)
Location: Pune
Work Mode: Hybrid (3 days' work from office)
Experience: 8+ years

Roles & Responsibilities:
As an acoustic development engineer, you will work in the Product Development team that is responsible for Actran software development. Your main responsibilities will include:
• Development of new features in Actran, matching industrial expectations (accuracy, performance, robustness)
• Participation in acoustic research topics
• Recommendations on new technologies to be integrated into Actran to solve new challenges efficiently
• Interfacing with third-party software when required
• Working on bug fixes
• Identifying software design problems and devising elegant solutions
Quality Assurance (QA), industrial validation, and software documentation benefit from daily interactions with dedicated teams.

Profile
• PhD in Applied Sciences, Computer Sciences (or equivalent by experience)
• Programming skills in Python and C++
• Experience with a commercial structural dynamics solver (Nastran, Abaqus, Ansys, OptiStruct)
• Experience programming in a Linux environment
• Experience in acoustic research
• Some experience in the design of complex object-oriented software:
  o C++: generic programming, Standard Template Library, Boost libraries
  o Python: C bindings, Python extension libraries, NumPy, SciPy, Matplotlib
  o Familiarity with the Git versioning system, CI/CD development processes, and containerization tools
  o Experience with the Qt framework and the VTK library is a plus
  o Basic knowledge of CAE-FEM tools (Ansa, Catia, HyperMesh) is a plus
• Soft skills: creative, autonomous self-learner, curious, capable of thinking out of the box, with a solution-oriented attitude, quality awareness, team spirit, and flexibility
• Good level of English.
Posted 2 months ago
3.0 years
0 Lacs
Jaipur, Rajasthan, India
Remote
Role: Model Developer
Work Location: Jaipur

Tiger Analytics is a global AI and analytics consulting firm. With data and technology at the core of our solutions, our 4000+ tribe is solving problems that eventually impact the lives of millions globally. Our culture is modeled around expertise and respect, with a team-first mindset. Headquartered in Silicon Valley, you'll find our delivery centers across the globe and offices in multiple cities across India, the US, UK, Canada, and Singapore, including a substantial remote global workforce. We're Great Place to Work-Certified™. Working at Tiger Analytics, you'll be at the heart of an AI revolution. You'll work with teams that push the boundaries of what is possible and build solutions that energize and inspire.

About BFS, and your work:
Tiger's growing Banking Financial Services vertical seeks self-motivated AI/ML/Data Science professionals with domain expertise in the Banking and Financial Services space and strong technical skills for a challenging role in the Banking & Financial Services Analytics area. Responsibilities include working with various clients to design, develop, and implement Analytics and Data Science use cases. The skill set needed is a mix of hands-on analytics and predictive model development/validation experience and know-how of the domain context in similar areas, along with team-leading and stakeholder-management experience. The role may also carry managerial responsibility: setting and reviewing tasks, performing QC, and supervising, mentoring, and coaching analysts. You will monitor and validate aggregate model risk in alignment with the bank's risk strategy and lead a team of model validators, who use their predictive and AI modeling knowledge to review and validate a wide variety of models. You will manage a growing Model Validation team responsible for independent first-line validation of predictive and generative AI models.
- Perform independent validations of financial, statistical, and behavioral models commensurate with their criticality ratings, and assist with the validation and review of models regarding their theoretical soundness, testing design, and points of weakness.
- Interpret data to recognize any potential risk exposure.
- Develop challenger models that help validate existing models, assist with outcome analysis, and ensure compliance with the model risk monitoring framework.
- Evaluate governance for Model Risk Management by reviewing policies, controls, risk assessments, documentation standards, and validation standards.

About the role:
This pivotal role focuses on the end-to-end development, implementation, and ongoing monitoring of both application and behavioral scorecards within our dynamic retail banking division. While application scorecard development will be the primary area of focus and expertise required, you will also have scope to contribute to behavioral scorecard initiatives. The primary emphasis will be on our unsecured lending portfolio, including personal loans, overdrafts, and particularly credit cards. You will be instrumental in enhancing credit risk management capabilities, optimizing lending decisions, and driving profitable growth by leveraging advanced analytical techniques and robust statistical models. This role requires a deep understanding of the credit lifecycle and regulatory requirements, and the ability to translate complex data insights into actionable business strategies within the Indian banking context.

Key Responsibilities:
- End-to-End Scorecard Development (Application & Behavioral): Lead the design, development, and validation of new application and behavioral scorecards from scratch, specifically tailored for the Indian retail banking landscape and unsecured portfolios (personal loans, credit cards) across ETB and NTB segments. Prior experience in this area is required.
- Utilize advanced statistical methodologies and machine learning techniques, leveraging Python for data manipulation, model building, and validation. Ensure robust model validation, back-testing, stress testing, and scenario analysis to ascertain model robustness, stability, and predictive power, adhering to RBI guidelines and internal governance.
- Cloud-Native Model Deployment & MLOps: Drive the deployment of developed scorecards into production environments on AWS, collaborating with engineering teams to integrate models into credit origination and decisioning systems. Implement and manage MLOps practices for continuous model monitoring, re-training, and version control within the AWS ecosystem.
- Data Strategy & Feature Engineering: Proactively identify, source, and analyze diverse datasets (e.g., internal bank data; credit bureau data such as CIBIL, Experian, and Equifax) to derive highly predictive features for scorecard development. Prior experience in this area is required. Address data quality challenges, ensuring data integrity and suitability for model inputs in an Indian banking context.
- Performance Monitoring & Optimization: Establish and maintain comprehensive model performance monitoring frameworks, including monthly/quarterly tracking of key performance indicators (KPIs) such as the Gini coefficient, the KS statistic, and portfolio vintage analysis. Identify triggers for model recalibration or redevelopment based on performance degradation, regulatory changes, or evolving market dynamics.

Required Qualifications, Capabilities, and Skills:
- Education: Bachelor's or Master's degree in a quantitative discipline such as Mathematics, Statistics, Physics, Computer Science, Financial Engineering, or a related field.
- Experience: 3-10 years of hands-on experience in credit risk model development, with a strong focus on application scorecard development and significant exposure to behavioral scorecards, preferably within the Indian banking sector, applying concepts including roll-rate analysis, swap-set analysis, and reject inferencing. Demonstrated prior experience in model development and deployment in AWS environments, with an understanding of cloud-native MLOps principles. Proven track record in building and validating statistical models (e.g., logistic regression, GBDT, random forests) for credit risk.
- Technical Skills: Exceptional hands-on expertise in Python (Pandas, NumPy, scikit-learn, SciPy) for data manipulation, statistical modeling, and machine learning. Proficiency in SQL for data extraction and manipulation. Familiarity with AWS services relevant to data science and machine learning (e.g., S3, EC2, SageMaker, Lambda). Knowledge of SAS is a plus, but Python is the primary requirement.
- Analytical & Soft Skills: Deep understanding of the end-to-end lifecycle of application and behavioral scorecard development, from data sourcing to deployment and monitoring. Strong understanding of credit risk principles, the credit lifecycle, and regulatory frameworks pertinent to Indian banking (e.g., RBI guidelines on credit risk management and model risk management). Excellent analytical, problem-solving, and critical thinking skills. Ability to communicate complex technical concepts effectively to both technical and non-technical stakeholders.
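The Gini coefficient and KS statistic named as monitoring KPIs have compact definitions: Gini = 2*AUC - 1, and KS is the maximum gap between the cumulative score distributions of goods and bads. A from-scratch sketch on a toy sample (scores and labels are made up; labels use 1 = bad):

```python
def ks_statistic(scores, labels):
    """KS: max gap between cumulative distributions of bads and goods by score."""
    pairs = sorted(zip(scores, labels))
    n_bad = sum(labels)
    n_good = len(labels) - n_bad
    cum_bad = cum_good = 0
    ks = 0.0
    for _, label in pairs:
        if label == 1:
            cum_bad += 1
        else:
            cum_good += 1
        ks = max(ks, abs(cum_bad / n_bad - cum_good / n_good))
    return ks

def gini(scores, labels):
    """Gini = 2*AUC - 1, AUC from pairwise rank comparisons (ties count half)."""
    bads = [s for s, l in zip(scores, labels) if l == 1]
    goods = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((b > g) + 0.5 * (b == g) for b in bads for g in goods)
    auc = wins / (len(bads) * len(goods))
    return 2 * auc - 1

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1, 1, 0, 1, 0, 0]
g = gini(scores, labels)          # 7/9 for this toy sample
ks = ks_statistic(scores, labels)  # 2/3 for this toy sample
```

In monitoring practice both statistics are tracked per vintage, and a sustained drop below agreed thresholds is a typical recalibration trigger.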
Posted 2 months ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
• Develop strategies and solutions to solve problems in logical yet creative ways, leveraging state-of-the-art machine learning, deep learning, and Gen AI techniques.
• Technically lead a team of data scientists to produce project deliverables on time and with high quality.
• Identify and address client needs in different domains by analyzing large and complex data sets; processing, cleansing, and verifying the integrity of data; and performing exploratory data analysis (EDA) using state-of-the-art methods.
• Select features and build and optimize classifiers/regressors using machine learning and deep learning techniques.
• Enhance data collection procedures to include information relevant for building analytical systems, and ensure data quality and accuracy.
• Perform ad-hoc analysis and present results clearly to both technical and non-technical stakeholders.
• Create custom reports and presentations with strong data visualization and storytelling skills to effectively communicate analytical conclusions to senior company officials and other stakeholders.
• Expertise in data mining, EDA, feature selection, model building, and optimization using machine learning and deep learning techniques.
• Strong programming skills in Python.
• Excellent communication and interpersonal skills, with the ability to present complex analytical concepts to both technical and non-technical stakeholders.

Primary Skills:
- Excellent understanding of and hands-on experience with data science and machine learning techniques and algorithms for supervised and unsupervised problems, NLP, computer vision, and Gen AI. Good applied statistics skills, such as distributions, statistical inference, and testing.
- Excellent understanding of and hands-on experience building deep learning models for text and image analytics (such as ANNs, CNNs, LSTMs, transfer learning, and encoder-decoder architectures).
- Proficient in coding in common data science languages and tools such as R and Python.
- Experience with common data science toolkits, such as NumPy, Pandas, Matplotlib, statsmodels, scikit-learn, SciPy, NLTK, spaCy, OpenCV, etc.
- Experience with common data science frameworks such as TensorFlow, Keras, PyTorch, XGBoost, etc.
- Exposure to or knowledge of cloud platforms (Azure/AWS).
- Experience deploying models in production.
Posted 2 months ago
2.5 - 3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Company Description
The Smart Cube, a WNS company, is a trusted partner for high-performing intelligence that answers critical business questions. And we work with our clients to figure out how to implement the answers, faster.

Job Description
Roles and responsibilities
- Implement web applications in Python/Django.
- Understand project requirements and convert them into technical requirements.
- Independently do the coding/development of complex modules.
- Act as an independent contributor, delivering high-quality software within the timelines.
- Ensure high-quality releases through appropriate QC and QA activities.
- Participate in technical discussions/reviews.
- Work collaboratively and professionally with other associates in cross-functional teams to achieve goals.
- Apply a sense of urgency, commitment, and focus on the right priorities in developing solutions in a timely fashion.
- Independently look for solutions to problems, but keep detailed records of what assumptions and steps were taken, and be able to communicate the logic in a clear and concise manner.

Qualifications
Ideal Candidate
- 2.5-3 years of relevant experience in developing applications for financial services clients.
- 3-5 years of experience implementing web applications in Python/Django.
- Knowledge of SciPy/NumPy/Pandas libraries in Python to develop quant models.
- AWS Lambda, RabbitMQ, Celery.
- Good understanding of the SDLC phases of application development.
- Experience in successfully delivering software projects in a variety of domains, such as retail, financial services, and procurement, under time/cost pressures.
- Experience with the agile methodology approach and project management principles.
- Must be able to design and develop web applications using open-source technology (Python).
- Should have a good understanding of and exposure to unit testing.
- Should have a good understanding of and exposure to data structures, design patterns, and the design and architecture of web-based applications.
Posted 2 months ago
5.0 - 10.0 years
15 - 20 Lacs
Bengaluru
Work from Office
Required skills and qualifications
- 5+ years of experience as a Python Developer with a strong portfolio of projects.
- Bachelor's degree in Computer Science, Software Engineering, or a related field.
- In-depth understanding of the Python software development stack, ecosystem, frameworks, and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn, and PyTorch.
- MATLAB experience is a must.
- Experience with front-end development using HTML, CSS, and JavaScript.
- Familiarity with database technologies such as SQL and NoSQL.
- Excellent problem-solving ability with solid communication and collaboration skills.

Preferred skills and qualifications
- Experience with popular Python frameworks such as Django, Flask, or Pyramid.
- Knowledge of data science and machine learning concepts and tools.
- A working understanding of cloud platforms such as AWS, Google Cloud, or Azure.
- Contributions to open-source Python projects or active involvement in the Python community.
Posted 2 months ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Applications Development Senior Programmer Analyst is an intermediate level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities. Responsibilities: Mathematics & Statistics: Advanced knowledge of probability, statistics and linear algebra. Expertise in statistical modelling, hypothesis testing and experimental design. Machine Learning and AI: 4+ years of hands-on experience with GenAI applications using the RAG approach, vector databases, and LLMs. Hands-on experience with LLMs (Google Gemini, OpenAI, Llama, etc.), LangChain, LlamaIndex for context-augmented generative AI, Hugging Face Transformers, knowledge graphs, and vector databases. Advanced knowledge of RAG techniques is required, including expertise in hybrid search methods, multi-vector retrieval, Hypothetical Document Embeddings (HyDE), self-querying, query expansion, re-ranking, and relevance filtering. Strong proficiency in Python and deep learning frameworks such as TensorFlow, PyTorch, scikit-learn, SciPy, Pandas and high-level APIs like Keras is essential. Advanced NLP skills, including Named Entity Recognition (NER), Dependency Parsing, Text Classification, and Topic Modeling. In-depth experience with supervised, unsupervised and reinforcement learning algorithms. Proficiency with machine learning libraries and frameworks (e.g. scikit-learn, TensorFlow, PyTorch, etc.) Knowledge of deep learning and natural language processing (NLP). Hands-on experience with Feature Engineering and Exploratory Data Analysis. Familiarity and experience with Explainable AI, model monitoring, and data/model drift. Proficiency in programming languages such as Python. Experience with relational (SQL) and vector databases. Skilled in data wrangling, cleaning and preprocessing large datasets.
Experience with natural language processing (NLP) and natural language generation (NLG). ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Applications Development ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
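The RAG retrieval and re-ranking techniques the posting above lists (hybrid search, multi-vector retrieval, re-ranking, relevance filtering) ultimately come down to ranking candidate chunk embeddings by similarity to a query embedding. A minimal NumPy sketch of that core step, with hypothetical function and variable names:

```python
import numpy as np

def top_k_chunks(query_vec, chunk_vecs, k=2):
    """Rank chunk embeddings by cosine similarity to the query embedding.

    Returns the indices of the top-k chunks and their similarity scores,
    highest first — the primitive behind most re-ranking pipelines.
    """
    q = query_vec / np.linalg.norm(query_vec)
    c = chunk_vecs / np.linalg.norm(chunk_vecs, axis=1, keepdims=True)
    sims = c @ q                         # cosine similarity per chunk
    order = np.argsort(sims)[::-1][:k]   # best-first
    return order, sims[order]
```

Production systems replace this brute-force scan with an approximate nearest-neighbor index (the "vector database" the posting names), but the scoring logic is the same.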
Posted 2 months ago
3.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Requirements - Qualifications: Bachelor’s or master’s degree with 3+ years of strong Python development experience Core skills: OOPs concepts: Functions, Classes, Decorators Python and experience in any one of the frameworks Flask/Django/FastAPI Python libraries (Pandas, TensorFlow, NumPy, SciPy) AWS Cloud experience Docker, Kubernetes and microservices Postgres/MySQL Git, SVN or any code repository tool Design Patterns SQLAlchemy/any ORM library (Object Relational Mapper) EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
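The "Functions, Classes, Decorators" core skill named in the posting above can be illustrated with a small example. This `timed` decorator is purely illustrative (not an EY artifact): it wraps a function and records how long the last call took.

```python
import functools
import time

def timed(fn):
    """Decorator that records the last call's duration on the wrapper."""
    @functools.wraps(fn)          # preserve the wrapped function's name/docstring
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        wrapper.last_duration = time.perf_counter() - start
        return result
    return wrapper

@timed
def add(a, b):
    return a + b
```

`functools.wraps` is the idiomatic touch interviewers often look for: without it, `add.__name__` would report `wrapper` instead of `add`.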
Posted 2 months ago
3.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Requirements - Qualifications: Bachelor’s or master’s degree with 3+ years of strong Python development experience Core skills: OOPs concepts: Functions, Classes, Decorators Python and experience in any one of the frameworks Flask/Django/FastAPI Python libraries (Pandas, TensorFlow, NumPy, SciPy) AWS Cloud experience Docker, Kubernetes and microservices Postgres/MySQL Git, SVN or any code repository tool Design Patterns SQLAlchemy/any ORM library (Object Relational Mapper) EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 2 months ago
3.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Requirements - Qualifications: Bachelor’s or master’s degree with 3+ years of strong Python development experience Core skills: OOPs concepts: Functions, Classes, Decorators Python and experience in any one of the frameworks Flask/Django/FastAPI Python libraries (Pandas, TensorFlow, NumPy, SciPy) AWS Cloud experience Docker, Kubernetes and microservices Postgres/MySQL Git, SVN or any code repository tool Design Patterns SQLAlchemy/any ORM library (Object Relational Mapper) EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 2 months ago
0 years
0 Lacs
India
Remote
CryptoChakra is a leading cryptocurrency analytics and education platform dedicated to decoding digital asset markets through quantitative innovation and data-driven insights. By merging advanced machine learning, real-time blockchain analytics, and immersive educational tools, we empower traders, institutions, and enthusiasts to navigate crypto volatility with precision. Our platform transforms complex data into actionable strategies, offering AI-driven forecasts, risk models, and DeFi protocol evaluations. As a remote-first innovator, we prioritize transparency, scalability, and user-centric solutions to democratize access to decentralized finance. Position: Quantitative Researcher (Crypto Markets) Remote | Full-Time Internship | Compensation: Paid/Unpaid based on suitability Role Summary Join CryptoChakra’s quantitative research team to pioneer models that decode crypto market dynamics and drive algorithmic strategies. This role offers hands-on experience in statistical arbitrage, stochastic modeling, and blockchain data analysis, with mentorship from finance and cryptography experts. Key Responsibilities Algorithmic Strategy Development: Design and backtest quantitative models for arbitrage, market-making, and trend prediction using Python/R. Analyze on-chain metrics (Etherscan, Glassnode) and exchange data (Binance, Bybit) to identify alpha signals. Risk & Portfolio Modeling: Build stochastic models (Monte Carlo, GARCH) to assess volatility, liquidity risks, and portfolio optimization in crypto assets. Evaluate correlations between cryptocurrencies and traditional markets (equities, commodities). DeFi Research: Investigate yield farming strategies, liquidity pool returns, and impermanent loss in protocols like Uniswap and Curve. Insight Dissemination: Publish whitepapers and dashboards (Tableau, Power BI) to translate quantitative findings into educational content. 
Qualifications Technical Skills Proficiency in Python/R for quantitative analysis (NumPy, SciPy, QuantLib) and machine learning (TensorFlow, PyTorch). Expertise in statistical methods: time-series analysis, stochastic calculus, and hypothesis testing. Experience with SQL/NoSQL databases, cloud platforms (AWS, GCP), and blockchain data tools (Dune Analytics). Familiarity with algorithmic trading frameworks and backtesting tools (Backtrader, QuantConnect). Professional Competencies Analytical rigor to derive actionable insights from high-frequency and unstructured datasets. Ability to communicate complex quantitative concepts to cross-functional teams. Self-driven with adaptability to remote collaboration tools (GitHub, Slack). Preferred (Not Required) Academic projects involving quantitative finance, crypto market microstructure, or DeFi protocol analysis. Exposure to smart contract development (Solidity) or decentralized oracle systems (Chainlink). Pursuing or holding a degree in Quantitative Finance, Financial Engineering, Physics, or related fields. What We Offer Skill Development: Master cutting-edge tools like Pandas, Kafka, and blockchain analytics platforms. Real-World Impact: Contribute to strategies powering CryptoChakra’s platform, used by global users. Mentorship: Learn from quants, data scientists, and blockchain pioneers in a collaborative remote environment. Certification: Earn a Crypto Quantitative Analyst internship certificate.
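The stochastic risk modeling the role above describes (Monte Carlo, volatility, tail risk) can be sketched in a few lines of NumPy. This is a hedged illustration, not CryptoChakra's method: it simulates normally distributed log returns and reads off the loss at the chosen tail quantile as a Value-at-Risk estimate.

```python
import numpy as np

def monte_carlo_var(mu, sigma, horizon_days, n_paths=100_000, alpha=0.05, seed=42):
    """Estimate horizon VaR of a unit position via simulated log returns.

    Assumes i.i.d. normal daily log returns (a simplification; real crypto
    returns are fat-tailed, which is why GARCH-style models are also listed).
    """
    rng = np.random.default_rng(seed)
    log_ret = rng.normal(mu * horizon_days,
                         sigma * np.sqrt(horizon_days), size=n_paths)
    pnl = np.exp(log_ret) - 1.0           # fractional P&L per simulated path
    return -np.quantile(pnl, alpha)       # alpha-tail loss, as a positive number
```

For a daily sigma of 2% and zero drift, the 5% one-day VaR comes out near 3.2%, consistent with the normal-quantile approximation 1.645 x sigma.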
Posted 2 months ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
Extensive experience in IT data analytics projects, with hands-on experience migrating on-premise ETLs to Google Cloud Platform (GCP) using cloud-native tools such as BigQuery, Cloud Dataproc, Google Cloud Storage, and Cloud Composer SQL concepts, Presto SQL, Hive SQL, Python (Pandas, NumPy, SciPy, Matplotlib) and PySpark Design and implement scalable data pipelines using Google Cloud Dataflow Develop and optimize BigQuery datasets for efficient data analysis and reporting Collaborate with cross-functional teams to integrate data solutions with business processes Automate data workflows using Cloud Composer and Apache Airflow Ensure data security and compliance with GCP Identity and Access Management Mentor junior engineers in best practices for cloud-based data engineering Implement real-time data processing with Google Cloud Pub/Sub and Dataflow Continuously evaluate and adopt new GCP tools and technologies
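A typical transform step in the ETL-to-warehouse pipelines this posting describes aggregates raw events into a reporting grain before loading into a table such as a BigQuery dataset. A minimal Pandas sketch (column names `ts`, `user`, `amount` are invented for illustration):

```python
import pandas as pd

def daily_totals(events: pd.DataFrame) -> pd.DataFrame:
    """Aggregate raw event rows into one total per (date, user).

    A stand-in for the staging-to-mart transform that would precede a
    warehouse load in a Composer/Airflow-orchestrated pipeline.
    """
    return (events
            .assign(date=pd.to_datetime(events["ts"]).dt.date)
            .groupby(["date", "user"], as_index=False)["amount"]
            .sum())
```

At scale the same shape of transform would be expressed in PySpark or BigQuery SQL; the Pandas version is just the smallest runnable statement of the logic.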
Posted 2 months ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Data Science Lead is a strategic professional who stays abreast of developments within own field and contributes to directional strategy by considering their application in own job and the business. Recognized technical authority for an area within the business. Requires basic commercial awareness. There are typically multiple people within the business that provide the same level of subject matter expertise. Developed communication and diplomacy skills are required in order to guide, influence and convince others, in particular colleagues in other areas and occasional external customers. Significant impact on the area through complex deliverables. Provides advice and counsel related to the technology or operations of the business. Work impacts an entire area, which eventually affects the overall performance and effectiveness of the sub-function/job family. Responsibilities: Conducts strategic data analysis, identifies insights and implications and make strategic recommendations, develops data displays that clearly communicate complex analysis. Mines and analyzes data from various banking platforms to drive optimization and improve data quality. Deliver analytics initiatives to address business problems with the ability to determine data required, assess time & effort required and establish a project plan. Consults with business clients to identify system functional specifications. Applies comprehensive understanding of how multiple areas collectively integrate to contribute towards achieving business goals. Consults with users and clients to solve complex system issues/problems through in-depth evaluation of business processes, systems and industry standards; recommends solutions. Leads system change process from requirements through implementation; provides user and operational support of application to business users. 
Formulates and defines systems scope and goals for complex projects through research and fact-finding combined with an understanding of applicable business systems and industry standards. Impacts the business directly by ensuring the quality of work provided by self and others; impacts own team and closely related work teams. Considers the business implications of the application of technology to the current business environment; identifies and communicates risks and impacts. Drives communication between business leaders and IT; exhibits sound and comprehensive communication and diplomacy skills to exchange complex information. Conduct workflow analysis, business process modeling; develop use cases, test plans, and business rules; assist in user acceptance testing. Collaborate on design and implementation of workflow solutions that provide long term scalability, reliability, and performance, and integration with reporting. Develop in-depth knowledge and proficiency of supported business areas and engage business partners in evaluating opportunities for process integration and refinement. Gather requirements and provide solutions across Business Sectors Partner with cross functional teams to analyze, deconstruct, and map current state process and identify improvement opportunities including creation of target operation models. 
Assist in negotiating for resources owned by other areas in order to ensure required work is completed on schedule Develop and maintain documentation on an ongoing basis, and train new and existing users Direct the communication of status, issue, and risk disposition to all stakeholders, including Senior Management Direct the identification of risks which impact project delivery and ensure mitigation strategies are developed and executed when necessary Ensure that workflow business case / cost benefit analyses are in line with business objectives Deliver coherent and concise communications detailing the scope, progress and results of initiatives underway Develop strategies to reduce costs, manage risk, and enhance services Deploy influencing and matrix management skills in order to ensure technology solutions meet business requirements Perform other duties and functions as assigned. Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
Qualifications: MBA or Advanced Degree in Information Systems, Business Analysis / Computer Science 10+ years of experience using tools for statistical modeling of large data sets Proficient in data handling suites Python, Spark, R, HIVE, SAS, SQL, or similar packages Strong ability to extract strategic insights from large data sets Experience in design and implementation of machine learning models and algorithms to solve complex business problems, with exposure to machine learning frameworks such as TensorFlow, PyTorch, or scikit-learn Adept at using Statistical (like forecasting/modeling, data analysis, regression/optimization models), Machine Learning (GBM, Decision Trees etc.) & AI techniques (Deep Learning) Strong aptitude, ability, motivation, and interest in placing quantitative analysis in the context of the Finance/risk domain and/or business economics. Experience in working on visualization tools like Tableau, Power BI, etc. Skilled core Python developer with extensive development expertise in building highly scaled and performant software platforms for data computation and processing Expert-level knowledge of core Python concepts and libraries such as pandas, NumPy and SciPy, and well versed with OOPs concepts and design patterns. Strong computer science fundamentals in data structures, algorithms, databases, and operating systems. Highly experienced with Unix-based operating systems and hands-on experience in writing SQL queries. Experience with source code management tools such as Bitbucket, Git, etc. Process Improvement or Project Management experience Education: Bachelor’s/University degree or equivalent experience, potentially a Master’s degree This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
8+ years of experience in the Analytics industry/deliveries Proficient in data handling suites Python, Spark, R, HIVE, SAS, SQL, or similar packages Strong ability to extract strategic insights from large data sets Experience in design and implementation of machine learning models and algorithms to solve complex business problems, with exposure to machine learning frameworks such as TensorFlow, PyTorch, or scikit-learn Adept at using Statistical (like forecasting/modeling, data analysis, regression/optimization models), Machine Learning (GBM, Decision Trees etc.) & AI techniques (Deep Learning) Strong aptitude, ability, motivation and interest in placing quantitative analysis in the context of the Finance/risk domain and/or business economics. Experience in working on visualization tools like Tableau, Power BI, etc. Skilled core Python developer with extensive development expertise in building highly scaled and performant software platforms for data computation and processing Expert-level knowledge of core Python concepts and libraries such as pandas, NumPy and SciPy, and well versed with OOPs concepts and design patterns. Strong computer science fundamentals in data structures, algorithms, databases and operating systems. Highly experienced with Unix-based operating systems and hands-on experience in writing SQL queries. Experience with source code management tools such as Bitbucket, Git, etc. Good to have: Exposure to and understanding of cloud platforms like AWS, Azure, GCP (Google Cloud) Experience working with banking domains like pricing and risk is a plus CFA/FRM certification is a plus. ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Data Science ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Citi is an equal opportunity and affirmative action employer.
Qualified applicants will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Citigroup Inc. and its subsidiaries (“Citi”) invite all qualified interested applicants to apply for career opportunities. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View the “EEO is the Law” poster. View the EEO is the Law Supplement. View the EEO Policy Statement. View the Pay Transparency Posting
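The regression and optimization modeling named in the qualifications above reduces, in its simplest form, to ordinary least squares. A hedged NumPy-only sketch (function name invented; real work would likely use scikit-learn or statsmodels, both also listed):

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least squares with an intercept column, via np.linalg.lstsq.

    X: (n_samples, n_features) array; y: (n_samples,) targets.
    Returns coefficients as [intercept, slope_1, ..., slope_k].
    """
    A = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef
```

`lstsq` solves the normal equations in a numerically stable way, which is why it is preferred over inverting `A.T @ A` by hand.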
Posted 2 months ago
1.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Velotio Technologies is a product engineering company working with innovative startups and enterprises. We are a certified Great Place to Work® and recognized as one of the best companies to work for in India. We have provided full-stack product development for 110+ startups across the globe building products in the cloud-native, data engineering, B2B SaaS, IoT & Machine Learning space. Our team of 400+ elite software engineers solves hard technical problems while transforming customer ideas into successful products. Requirements Design and build scalable data infrastructure with efficiency, reliability, and consistency to meet rapidly growing data needs Build the applications required for optimal extraction, cleaning, transformation, and loading data from disparate data sources and formats using the latest big data technologies Building ETL/ELT pipelines and work with other data infrastructure components, like Data Lakes, Data Warehouses and BI/reporting/analytics tools Work with various cloud services like AWS, GCP, Azure to implement highly available, horizontally scalable data processing and storage systems and automate manual processes and workflows Implement processes and systems to monitor data quality, to ensure data is always accurate, reliable, and available for the stakeholders and other business processes that depend on it Work closely with different business units and engineering teams to develop a long-term data platform architecture strategy and thus foster data-driven decision-making practices across the organization Help establish and maintain a high level of operational excellence in data engineering Evaluate, integrate, and build tools to accelerate Data Engineering, Data Science, Business Intelligence, Reporting, and Analytics as needed Focus on building test-driven development by writing unit/integration tests Contribute to design documents and engineering wiki You will enjoy this role if you... 
Like building elegant well-architected software products with enterprise customers Want to learn to leverage public cloud services & cutting-edge big data technologies, like Spark, Airflow, Hadoop, Snowflake, and Redshift Work collaboratively as part of a close-knit team of geeks, architects, and leads Desired Skills & Experience: 1+ years of data engineering or equivalent knowledge and ability 1+ years of software engineering or equivalent knowledge and ability Strong proficiency in at least one of the following programming languages: Python, Scala, or Java Experience designing and maintaining at least one type of database (Object Store, Columnar, In-memory, Relational, Tabular, Key-Value Store, Triple-store, Tuple-store, Graph, and other related database types) Good understanding of star/snowflake schema designs Extensive experience working with big data technologies like Spark, Hadoop, Hive Experience building ETL/ELT pipelines and working on other data infrastructure components like BI/reporting/analytics tools Experience working with workflow orchestration tools like Apache Airflow, Oozie, Azkaban, NiFi, Airbyte, etc. Experience building production-grade data backup/restore strategies and disaster recovery solutions Hands-on experience with implementing batch and stream data processing applications using technologies like AWS DMS, Apache Flink, Apache Spark, AWS Kinesis, Kafka, etc.
Knowledge of best practices in developing and deploying applications that are highly available and scalable Experience with or knowledge of Agile Software Development methodologies Excellent problem-solving and troubleshooting skills Process-oriented with excellent documentation skills Bonus points if you: Have hands-on experience using one or multiple cloud service providers like AWS, GCP, Azure and have worked with specific products like EMR, Glue, DataProc, DataBricks, DataStudio, etc. Have hands-on experience working with either Redshift, Snowflake, BigQuery, Azure Synapse, or Athena and understand the inner workings of these cloud storage systems Have experience building DataLakes, scalable data warehouses, and DataMarts Have familiarity with tools like Jupyter Notebooks, Pandas, NumPy, SciPy, scikit-learn, Seaborn, SparkML, etc. Have experience building and deploying Machine Learning models to production at scale Possess excellent cross-functional collaboration and communication skills Our Culture: We have an autonomous and empowered work culture encouraging individuals to take ownership and grow quickly Flat hierarchy with fast decision making and a startup-oriented “get things done” culture A strong, fun & positive environment with regular celebrations of our success. We pride ourselves on creating an inclusive, diverse & authentic environment At Velotio, we embrace diversity. Inclusion is a priority for us, and we are eager to foster an environment where everyone feels valued. We welcome applications regardless of ethnicity or cultural background, age, gender, nationality, religion, disability or sexual orientation.
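The extract-clean-transform-load flow this posting describes can be sketched as three composable stages. This is a toy, in-memory illustration only (the stage names and record schema are invented); real pipelines would swap the source and sink for Airflow-orchestrated connectors to actual stores.

```python
def extract(rows):
    """Source stage: yield raw records (stand-in for a DB or API reader)."""
    yield from rows

def transform(records):
    """Clean stage: drop incomplete rows and coerce field types."""
    for r in records:
        if r.get("id") is not None and r.get("value") is not None:
            yield {"id": int(r["id"]), "value": float(r["value"])}

def load(records, sink):
    """Load stage: append to an in-memory sink (stand-in for a warehouse write)."""
    for r in records:
        sink.append(r)
    return sink

def run_pipeline(rows):
    # Generators keep the pipeline streaming: each record flows through all
    # stages without materializing the full dataset in memory.
    return load(transform(extract(rows)), [])
```

The generator chaining mirrors how batch frameworks compose stages lazily, which is what makes the same shape scale from a unit test to Spark-sized inputs.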
Posted 2 months ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At EVERSANA, we are proud to be certified as a Great Place to Work across the globe. We’re fueled by our vision to create a healthier world. How? Our global team of more than 7,000 employees is committed to creating and delivering next-generation commercialization services to the life sciences industry. We are grounded in our cultural beliefs and serve more than 650 clients ranging from innovative biotech start-ups to established pharmaceutical companies. Our products, services and solutions help bring innovative therapies to market and support the patients who depend on them. Our jobs, skills and talents are unique, but together we make an impact every day. Join us! Across our growing organization, we embrace diversity in backgrounds and experiences. Improving patient lives around the world is a priority, and we need people from all backgrounds and swaths of life to help build the future of the healthcare and the life sciences industry. We believe our people make all the difference in cultivating an inclusive culture that embraces our cultural beliefs. We are deliberate and self-reflective about the kind of team and culture we are building. We look for team members that are not only strong in their own aptitudes but also who care deeply about EVERSANA, our people, clients and most importantly, the patients we serve. We are EVERSANA. Job Description Learn to develop and apply data mining and machine learning techniques and models to extract analytic insights from healthcare and non-healthcare data sets Support the review, generation, and delivery of analytic products in support of project work, RFP responses and other business needs Assist in the development of visualizations and presentations for client deliverables Develop software solutions and applications using Python, SQL, and/or R Interface with EVERSANA analysts, data scientists, clinicians and program managers and with EVERSANA customers. 
Qualifications Education - BS/MS/PhD degree in Engineering, Computer Science, Mathematics, Bioinformatics, Physics Practical knowledge of statistics, statistical inference, and machine learning algorithms Works independently and is a self-starter MS PowerPoint presentation development skills Client interface and presentation skills Knowledge of one or more of the following programming and analysis tools: Python (pandas, scikit-learn, NumPy, SciPy, and database access packages) SQL (MySQL) AWS (Redshift) Microsoft Excel R programming language Visualization tools Linux (Red Hat, CentOS); Python and software development in a Linux environment Additional Information OUR CULTURAL BELIEFS: Patient Minded I act with the patient’s best interest in mind. Client Delight I own every client experience and its impact on results. Take Action I am empowered and empower others to act now. Grow Talent I own my development and invest in the development of others. Win Together I passionately connect with anyone, anywhere, anytime to achieve results. Communication Matters I speak up to create transparent, thoughtful and timely dialogue. Embrace Diversity I create an environment of awareness and respect. Always Innovate I am bold and creative in everything I do. EVERSANA is committed to providing competitive salaries and benefits for all employees. The anticipated base salary range for this position is $47,000 to $75,000 and is not applicable to locations outside of the U.S. The base salary range represents the low and high end of the salary range for this position. Compensation will be determined based on relevant experience, other job-related qualifications/skills, and geographic location (to account for comparative cost of living). EVERSANA reserves the right to modify this base salary range at any time. Our team is aware of recent fraudulent job offers in the market, misrepresenting EVERSANA.
Recruitment fraud is a sophisticated scam commonly perpetrated through online services using fake websites, unsolicited e-mails, or even text messages claiming to be a legitimate company. Some of these scams request personal information and even payment for training or job application fees. Please know EVERSANA would never require personal information nor payment of any kind during the employment process. We respect the personal rights of all candidates looking to explore careers at EVERSANA. From EVERSANA’s inception, Diversity, Equity & Inclusion have always been key to our success. We are an Equal Opportunity Employer, and our employees are people with different strengths, experiences, and backgrounds who share a passion for improving the lives of patients and leading innovation within the healthcare industry. Diversity not only includes race and gender identity, but also age, disability status, veteran status, sexual orientation, religion, and many other parts of one’s identity. All of our employees’ points of view are key to our success, and inclusion is everyone's responsibility. Follow us on LinkedIn | Twitter
Posted 2 months ago
5.0 - 8.0 years
0 Lacs
Delhi, India
On-site
Job Description: We are seeking a highly motivated and enthusiastic Senior Data Scientist with 5-8 years of experience to join our dynamic team. The ideal candidate will have a strong background in AI/ML analytics and a passion for leveraging data to drive business insights and innovation. Key Responsibilities Develop and implement machine learning models and algorithms. Work closely with project stakeholders to understand requirements and translate them into deliverables. Utilize statistical and machine learning techniques to analyze and interpret complex data sets. Stay updated with the latest advancements in AI/ML technologies and methodologies. Collaborate with cross-functional teams to support various AI/ML initiatives. Qualifications Bachelor’s degree in Computer Science, Data Science, or a related field. Strong understanding of machine learning, deep learning and Generative AI concepts. Preferred Skills Experience in machine learning techniques such as Regression, Classification, Predictive modeling, Clustering, and Deep Learning stacks using Python Experience with cloud infrastructure for AI/ML on AWS (SageMaker, QuickSight, Athena, Glue). Expertise in building enterprise-grade, secure data ingestion pipelines for unstructured data (ETL/ELT), including indexing, search, and advanced retrieval patterns. Proficiency in Python, TypeScript, NodeJS, ReactJS (and equivalent) and frameworks (e.g., pandas, NumPy, scikit-learn, OpenCV, SciPy), Glue crawler, ETL Experience with data visualization tools (e.g., Matplotlib, Seaborn, QuickSight). Knowledge of deep learning frameworks (e.g., TensorFlow, Keras, PyTorch). Experience with version control systems (e.g., Git, CodeCommit). Strong knowledge and experience in Generative AI/LLM-based development. Strong experience working with key LLM model APIs (e.g., AWS Bedrock, Azure OpenAI/OpenAI) and LLM frameworks (e.g., LangChain, LlamaIndex).
Knowledge of effective text chunking techniques for optimal processing and indexing of large documents or datasets. Proficiency in generating and working with text embeddings, with an understanding of embedding spaces and their applications in semantic search and information retrieval. Experience with RAG concepts and fundamentals (vector DBs, AWS OpenSearch, semantic search, etc.). Expertise in implementing RAG systems that combine knowledge bases with Generative AI models. Knowledge of training and fine-tuning foundation models (Anthropic Claude, Mistral, etc.), including multimodal inputs and outputs.
Good to Have Skills: Knowledge and experience in building knowledge graphs in production. Understanding of multi-agent systems and their applications in complex problem-solving scenarios.
Equal Opportunity Employer: Pentair is an Equal Opportunity Employer. With our expanding global presence, cross-cultural insight and competence are essential for our ongoing success. We believe that a diverse workforce contributes different perspectives and creative ideas that enable us to continue to improve every day.
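The chunking-and-retrieval workflow this posting asks about can be sketched in a few lines. The snippet below is a minimal, dependency-free illustration: a sliding-window chunker with overlap, plus a toy bag-of-words vector standing in for a real embedding model (a production RAG system would call an embedding API and a vector DB instead; all names here are illustrative).

```python
import math
from collections import Counter

def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping character chunks (sliding window)."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text):
    """Toy bag-of-words 'embedding'; a real system would call a model API."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Index a small corpus, then retrieve the chunk closest to the query.
corpus = "Pentair builds water treatment systems. " * 5 + "RAG combines retrieval with generation."
chunks = chunk_text(corpus, chunk_size=80, overlap=20)
query_vec = embed("retrieval augmented generation")
best = max(chunks, key=lambda c: cosine(query_vec, embed(c)))
print(best)
```

The overlap between consecutive chunks is what keeps a sentence that straddles a chunk boundary retrievable from at least one chunk; tuning `chunk_size` and `overlap` is the core of the "effective text chunking" the posting mentions.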
Posted 2 months ago
2.0 - 4.0 years
0 Lacs
Delhi, India
On-site
Job Description: We are seeking a highly motivated and enthusiastic Senior Data Scientist with 2-4 years of experience to join our dynamic team. The ideal candidate will have a strong background in AI/ML analytics and a passion for leveraging data to drive business insights and innovation.
Key Responsibilities: Develop and implement machine learning models and algorithms. Work closely with project stakeholders to understand requirements and translate them into deliverables. Utilize statistical and machine learning techniques to analyze and interpret complex data sets. Stay updated with the latest advancements in AI/ML technologies and methodologies. Collaborate with cross-functional teams to support various AI/ML initiatives.
Qualifications: Bachelor's degree in Computer Science, Data Science, or a related field. Strong understanding of machine learning, deep learning, and Generative AI concepts.
Preferred Skills: Experience in machine learning techniques such as regression, classification, predictive modeling, clustering, and deep learning using Python. Experience with cloud infrastructure for AI/ML on AWS (SageMaker, QuickSight, Athena, Glue). Expertise in building enterprise-grade, secure data ingestion pipelines for unstructured data (ETL/ELT), including indexing, search, and advanced retrieval patterns. Proficiency in Python, TypeScript, NodeJS, ReactJS (or equivalent) and related frameworks (e.g., pandas, NumPy, scikit-learn, OpenCV, SciPy), plus Glue crawlers and ETL. Experience with data visualization tools (e.g., Matplotlib, Seaborn, QuickSight). Knowledge of deep learning frameworks (e.g., TensorFlow, Keras, PyTorch). Experience with version control systems (e.g., Git, CodeCommit). Strong knowledge and experience in Generative AI / LLM-based development. Strong experience working with key LLM APIs (e.g., AWS Bedrock, Azure OpenAI / OpenAI) and LLM frameworks (e.g., LangChain, LlamaIndex).
Knowledge of effective text chunking techniques for optimal processing and indexing of large documents or datasets. Proficiency in generating and working with text embeddings, with an understanding of embedding spaces and their applications in semantic search and information retrieval. Experience with RAG concepts and fundamentals (vector DBs, AWS OpenSearch, semantic search, etc.). Expertise in implementing RAG systems that combine knowledge bases with Generative AI models. Knowledge of training and fine-tuning foundation models (Anthropic Claude, Mistral, etc.), including multimodal inputs and outputs.
Good to Have Skills: Knowledge and experience in building knowledge graphs in production. Understanding of multi-agent systems and their applications in complex problem-solving scenarios.
Equal Opportunity Employer: Pentair is an Equal Opportunity Employer. With our expanding global presence, cross-cultural insight and competence are essential for our ongoing success. We believe that a diverse workforce contributes different perspectives and creative ideas that enable us to continue to improve every day.
Posted 2 months ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Responsibilities: Develop & Maintain Test Automation Frameworks: Design and implement test automation frameworks using Python, ensuring robustness, scalability, and maintainability. Test Tool Development: Build custom tools and utilities for enhancing the testing process, leveraging Python and other relevant technologies. SDET Expertise: Work closely with the development team to write efficient, reusable test code that integrates seamlessly into the CI/CD pipeline. Test Planning & Execution: Develop and execute comprehensive test plans, including functional, regression, performance, and integration tests. Collaboration: Work in close collaboration with development, product, and operations teams to understand software requirements and design appropriate testing strategies. Automation Optimization: Continuously improve existing test scripts, tools, and processes to increase efficiency, reduce maintenance, and optimize test execution time. Analytical & Problem-Solving: Use strong analytical skills to identify areas of improvement, troubleshoot issues, and provide solutions. Continuous Improvement: Contribute to process improvements, best practices, and the adoption of new technologies that enhance testing capabilities. Documentation: Maintain clear and thorough documentation of testing processes, tools, and frameworks. Metrics and Reporting: Generate reports on testing metrics, providing insights into the quality of the product and areas for improvement. Qualifications: 5+ years of experience in software testing, including significant experience as an SDET or in a similar role. Strong experience with Python: Hands-on experience developing and maintaining test automation frameworks, writing test scripts, and creating custom tools using Python. Proficient in Automation Tools: Experience with popular automation tools and frameworks like Selenium, PyTest, Robot Framework, or similar. 
Familiarity with Python Libraries: Strong experience with Python libraries such as Pandas, NumPy, Matplotlib, or SciPy for data manipulation, analysis, and visualization, especially for reporting and logging purposes in automated test cases. Strong Analytical Skills: Proven ability to analyze complex systems, diagnose issues, and identify root causes. Experience with CI/CD Pipelines: Familiarity with CI/CD tools such as Jenkins, GitHub, or similar. Knowledge of Testing Methodologies: In-depth understanding of testing methodologies, test levels, and test strategies (unit, integration, functional, performance, etc.). Database Testing & SQL: Experience with testing database-driven applications and writing SQL queries to validate data. Version Control: Familiarity with version control tools like Git. Problem-Solving Mindset: Strong troubleshooting and problem-solving abilities in complex systems and tools. Communication Skills: Strong verbal and written communication skills, with the ability to explain technical concepts clearly. Preferred Skills (Nice to Have): AI/ML Knowledge: Understanding of artificial intelligence (AI) and machine learning (ML) concepts, frameworks, and libraries like TensorFlow, PyTorch, or Scikit-learn. GenAI Expertise: Familiarity with Generative AI (GenAI) technologies, such as GPT or similar models, and their application in automating or enhancing testing and development processes. OCR Technology: Experience with Optical Character Recognition (OCR) technology, tools, and frameworks like Tesseract or Google Vision API, particularly in testing document processing or image-based data systems. Cloud Testing: Experience with testing in cloud-native environments (AWS, Azure, Google Cloud). Containerization: Familiarity with containerization technologies like Docker. API Testing: Experience with API testing tools (e.g., Postman, REST Assured). Performance Testing: Knowledge of performance testing tools like JMeter or LoadRunner. 
Agile Methodologies: Exposure to Agile methodologies and working in Agile environments. Test-Driven Development (TDD) or Behavior-Driven Development (BDD) experience.
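The "custom tools and utilities" this SDET role describes often means small framework helpers, not whole products. A common example is a retry wrapper for flaky UI/API test steps; the sketch below is a minimal stdlib-only illustration (the decorator name and its parameters are illustrative, not from Selenium, PyTest, or any specific framework).

```python
import functools
import time

def retry(times=3, delay=0.0, exceptions=(AssertionError,)):
    """Re-run a flaky test step up to `times` times before failing.

    Typical custom utility in a test automation framework: transient
    failures (slow page loads, eventual consistency) are retried, and
    only the final failure propagates to the test report.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            last_exc = None
            for _ in range(times):
                try:
                    return fn(*args, **kwargs)
                except exceptions as exc:
                    last_exc = exc
                    time.sleep(delay)
            raise last_exc
        return wrapper
    return decorator

calls = {"n": 0}

@retry(times=3)
def flaky_check():
    # Simulates a step that fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise AssertionError("transient failure")
    return "passed"

print(flaky_check())  # succeeds on the third attempt
```

In a real framework the same pattern is usually wired into the runner (e.g., a rerun-failures plugin) rather than hand-rolled, but writing it once clarifies what the framework is doing.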
Posted 2 months ago
1.0 - 5.0 years
8 - 12 Lacs
Mumbai
Work from Office
Skills: Python, TensorFlow, PyTorch, Natural Language Processing (NLP), Computer Vision, AWS SageMaker, Machine Learning Model Deployment, Scikit-learn. Sr AI/ML Developer | Experience: 8-10 Years | Location: Thane / Vikhroli, Mumbai
About The Role: We are seeking an experienced AI/ML Developer with 8-10 years of hands-on experience in building and deploying machine learning models at scale. The ideal candidate will have a strong background in Python, PySpark, Hadoop, and Hive, along with a deep understanding of machine learning model building, analysis, and optimization. As part of our innovative AI/ML team, you will contribute to cutting-edge projects and collaborate with cross-functional teams to deliver impactful solutions.
Key Responsibilities: Model Development: Design, build, and deploy machine learning models, utilizing advanced techniques to ensure optimal performance. Data Processing: Work with large-scale data processing frameworks such as PySpark and Hadoop to efficiently handle big data. Model Analysis and Optimization: Analyze model performance and fine-tune models to improve accuracy, scalability, and speed. Collaboration: Work closely with data scientists, analysts, and engineers to understand business requirements and integrate AI/ML solutions. Version Control: Utilize Git for version control to ensure proper management and documentation of model code and workflows. Project Management: Participate in sprint planning, track progress, and report on key milestones using JIRA. Notebook Workflows: Use Jupyter notebooks for interactive development and presentation of model outputs, insights, and results. TensorFlow: Implement and deploy deep learning models using TensorFlow, optimizing them for real-world applications.
Key Skills: Programming: Strong proficiency in Python, with experience in data manipulation and libraries like NumPy, Pandas, and SciPy. Big Data Technologies: Hands-on experience with PySpark, Hadoop, and Hive for managing large datasets.
Model Development: Expertise in machine learning model building, training, validation, and deployment using frameworks like TensorFlow, Scikit-learn, etc. Deep Learning: Familiarity with TensorFlow for building and optimizing deep learning models. Version Control and Collaboration: Proficiency in Git for source control and JIRA for project tracking. Problem-Solving: Strong analytical skills to troubleshoot, debug, and optimize models and workflows.
Experience And Qualifications: Experience: 8-10 years in AI/ML development with significant exposure to machine learning and deep learning techniques. Education: Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Knowledge: Deep understanding of AI/ML algorithms, model evaluation techniques, and data manipulation.
Preferred Qualifications: Hands-on experience with cloud platforms like AWS, GCP, or Azure. Familiarity with containerization tools like Docker and Kubernetes. Experience in deploying models into production environments.
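The build-train-validate-deploy loop this role centers on follows one shape regardless of framework: split the data, fit on the training portion, score on the held-out portion. As a minimal stdlib-only stand-in for a scikit-learn workflow (closed-form 1-D least squares instead of a real estimator; function names are illustrative):

```python
import random

def train_test_split(xs, ys, test_frac=0.25, seed=42):
    """Shuffle and split paired data into train and held-out sets."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * (1 - test_frac))
    train, test = idx[:cut], idx[cut:]
    return ([xs[i] for i in train], [ys[i] for i in train],
            [xs[i] for i in test], [ys[i] for i in test])

def fit_linear(xs, ys):
    """Closed-form least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def mse(model, xs, ys):
    """Mean squared error of the fitted line on held-out data."""
    a, b = model
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Noise-free toy data on the line y = 2x + 1: fit on train, score on test.
xs = [float(i) for i in range(20)]
ys = [2.0 * x + 1.0 for x in xs]
x_tr, y_tr, x_te, y_te = train_test_split(xs, ys)
model = fit_linear(x_tr, y_tr)
print(round(model[0], 3), round(mse(model, x_te, y_te), 6))
```

With scikit-learn, `train_test_split` and an estimator's `fit`/`predict` replace these helpers; the held-out evaluation step is what the posting means by model "validation".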
Posted 2 months ago
1.0 - 5.0 years
8 - 12 Lacs
Thane
Work from Office
Skills: Python, TensorFlow, PyTorch, Natural Language Processing (NLP), Computer Vision, AWS SageMaker, Machine Learning Model Deployment, Scikit-learn. Sr AI/ML Developer | Experience: 8-10 Years | Location: Thane / Vikhroli, Mumbai
About The Role: We are seeking an experienced AI/ML Developer with 8-10 years of hands-on experience in building and deploying machine learning models at scale. The ideal candidate will have a strong background in Python, PySpark, Hadoop, and Hive, along with a deep understanding of machine learning model building, analysis, and optimization. As part of our innovative AI/ML team, you will contribute to cutting-edge projects and collaborate with cross-functional teams to deliver impactful solutions.
Key Responsibilities: Model Development: Design, build, and deploy machine learning models, utilizing advanced techniques to ensure optimal performance. Data Processing: Work with large-scale data processing frameworks such as PySpark and Hadoop to efficiently handle big data. Model Analysis and Optimization: Analyze model performance and fine-tune models to improve accuracy, scalability, and speed. Collaboration: Work closely with data scientists, analysts, and engineers to understand business requirements and integrate AI/ML solutions. Version Control: Utilize Git for version control to ensure proper management and documentation of model code and workflows. Project Management: Participate in sprint planning, track progress, and report on key milestones using JIRA. Notebook Workflows: Use Jupyter notebooks for interactive development and presentation of model outputs, insights, and results. TensorFlow: Implement and deploy deep learning models using TensorFlow, optimizing them for real-world applications.
Key Skills: Programming: Strong proficiency in Python, with experience in data manipulation and libraries like NumPy, Pandas, and SciPy. Big Data Technologies: Hands-on experience with PySpark, Hadoop, and Hive for managing large datasets.
Model Development: Expertise in machine learning model building, training, validation, and deployment using frameworks like TensorFlow, Scikit-learn, etc. Deep Learning: Familiarity with TensorFlow for building and optimizing deep learning models. Version Control and Collaboration: Proficiency in Git for source control and JIRA for project tracking. Problem-Solving: Strong analytical skills to troubleshoot, debug, and optimize models and workflows.
Experience And Qualifications: Experience: 8-10 years in AI/ML development with significant exposure to machine learning and deep learning techniques. Education: Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Knowledge: Deep understanding of AI/ML algorithms, model evaluation techniques, and data manipulation.
Preferred Qualifications: Hands-on experience with cloud platforms like AWS, GCP, or Azure. Familiarity with containerization tools like Docker and Kubernetes. Experience in deploying models into production environments.
Posted 2 months ago
1.0 - 5.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Role: Data Scientist | Location: Bangalore | Timings: Full Time (as per company timings) | Notice Period: Immediate Joiners Only | Experience: 5 Years
We are looking for a highly motivated and skilled Data Scientist to join our growing team. The ideal candidate should possess a robust background in data science, machine learning, and statistical analysis, with a passion for uncovering insights from complex datasets. This role demands hands-on experience in Python and various ML libraries, strong business acumen, and effective communication skills for translating data insights into strategic decisions.
Key Responsibilities: Develop, implement, and optimize machine learning models for predictive analytics and business decision-making. Work with both structured and unstructured data to extract valuable insights and patterns. Leverage Python and standard ML libraries (NumPy, Pandas, SciPy, Scikit-Learn, TensorFlow, PyTorch, Keras, Matplotlib) for data modeling and analysis. Design and build data pipelines for streamlined data processing and integration. Conduct Exploratory Data Analysis (EDA) to identify trends, anomalies, and business opportunities. Partner with cross-functional teams to embed data-driven strategies into core business operations. Create compelling data stories through visualization techniques to convey findings to non-technical stakeholders. Stay abreast of the latest ML/AI innovations and industry best practices.
Required Skills & Qualifications: 5 years of proven experience as a Data Scientist working in machine learning. Proficient in Python and key data science libraries. Experience with ML frameworks such as TensorFlow, Keras, or PyTorch. Strong understanding of SQL and relational databases. Solid grounding in statistical analysis, hypothesis testing, and feature engineering. Familiarity with data visualization tools like Matplotlib, Seaborn, or Plotly. Demonstrated ability to work with large datasets and solve complex analytical problems.
Excellent communication and data storytelling skills. Knowledge of Marketing Mix Modeling is a plus.
Preferred Skills: Hands-on experience with cloud platforms like AWS, Azure, or GCP. Exposure to big data technologies such as Hadoop, Spark, or Databricks. Familiarity with NLP, computer vision, or deep learning. Understanding of A/B testing and experimental design methodologies.
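The hypothesis-testing and A/B-testing skills this posting lists boil down to one workhorse calculation: comparing two conversion rates with a two-proportion z-test. A minimal stdlib-only sketch (the function names and the sample counts are illustrative; SciPy or statsmodels would normally do this):

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates,
    using the pooled standard error, as in a simple A/B test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

def p_value_two_sided(z):
    """Two-sided p-value from the normal CDF via math.erf (no SciPy)."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Illustrative experiment: variant B converts 156/2400 vs A's 120/2400.
z = two_proportion_ztest(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(round(z, 3), round(p_value_two_sided(z), 4))
```

Experimental design then comes down to choosing `n_a`/`n_b` in advance so the test has adequate power, rather than peeking at the p-value as data arrives.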
Posted 2 months ago