Jobs
Interviews

406 Plotly Jobs - Page 2

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 years

1 - 3 Lacs

Cochin

On-site

Responsibilities:
- Design, develop, and implement machine learning models, including deep learning, reinforcement learning, and predictive modelling.
- Leverage frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for AI/ML development.
- Work on NLP projects utilizing tools like NLTK, LangChain, and large language models (GPT and LLaMA).
- Perform data preprocessing, cleaning, and transformation for machine learning pipelines.
- Conduct data exploration and statistical analysis to derive actionable insights.
- Utilize tools like Power BI, Tableau, Matplotlib, Plotly, and Seaborn to create interactive dashboards and visualizations.
- Deliver engaging online and offline training sessions on AI, ML, deep learning, and related tools.
- Create course materials, hands-on projects, quizzes, and assignments.
- Guide students on capstone projects and assessments.

Job Type: Full-time
Pay: ₹15,000.00 - ₹25,000.00 per month
Education: Bachelor's (Preferred)
Experience: total work: 2 years (Preferred); artificial intelligence: 2 years (Preferred)
Work Location: In person

Posted 5 days ago

Apply

5.0 years

5 - 10 Lacs

Gurgaon

On-site

Manager EXL/M/1435552 | Services | Gurgaon
Posted On: 28 Jul 2025 | End Date: 11 Sep 2025 | Required Experience: 5 - 10 Years

Basic Section:
Number of Positions: 1
Band: C1 (Manager)
Cost Code: D013514
Campus/Non-Campus: Non Campus
Employment Type: Permanent
Requisition Type: New
Max CTC: 1,500,000 - 2,500,000
Complexity Level: Not Applicable
Work Type: Hybrid (working partly from home and partly from office)
Organisational Group: Analytics | Sub Group: Analytics - UK & Europe
Organization: Services | LOB: Analytics - UK & Europe | SBU: Analytics
Country: India | City: Gurgaon | Center: EXL - Gurgaon Center 38
Skills: Java, HTML
Minimum Qualification: B.COM
Certification: No data available

Job Description: Senior Full Stack Developer
Position: Senior Full Stack Developer
Location: Gurugram
Relevant Experience Required: 8+ years
Employment Type: Full-time

About the Role
We are looking for a Senior Full Stack Developer who can build end-to-end web applications with strong expertise in both front-end and back-end development. The role involves working with Django, Node.js, React, and modern database systems (SQL, NoSQL, and vector databases), while leveraging real-time data streaming, AI-powered integrations, and cloud-native deployments. The ideal candidate is a hands-on technologist with a passion for modern UI/UX, scalability, and performance optimization.

Key Responsibilities

Front-End Development:
- Build responsive and user-friendly interfaces using HTML5, CSS3, JavaScript, and React.
- Implement modern UI frameworks such as Next.js, Tailwind CSS, Bootstrap, or Material-UI.
- Create interactive charts and dashboards with D3.js, Recharts, Highcharts, or Plotly.
- Ensure cross-browser compatibility and optimize for performance and accessibility.
- Collaborate with designers to translate wireframes and prototypes into functional components.

Back-End Development:
- Develop RESTful & GraphQL APIs with Django/DRF and Node.js/Express.
- Design and implement microservices & event-driven architectures.
- Optimize server performance and ensure secure API integrations.

Database & Data Management:
- Work with structured (PostgreSQL, MySQL) and unstructured databases (MongoDB, Cassandra, DynamoDB).
- Integrate and manage vector databases (Pinecone, Milvus, Weaviate, Chroma) for AI-powered search and recommendations.
- Implement sharding, clustering, caching, and replication strategies for scalability.
- Manage both transactional and analytical workloads efficiently.

Real-Time Processing & Visualization:
- Implement real-time data streaming with Apache Kafka, Pulsar, or Redis Streams.
- Build live features (e.g., notifications, chat, analytics) using WebSockets & Server-Sent Events (SSE).
- Visualize large-scale data in real time for dashboards and BI applications.

DevOps & Deployment:
- Deploy applications on cloud platforms (AWS, Azure, GCP).
- Use Docker, Kubernetes, Helm, and Terraform for scalable deployments.
- Maintain CI/CD pipelines with GitHub Actions, Jenkins, or GitLab CI.
- Monitor, log, and ensure high availability with Prometheus, Grafana, and the ELK/EFK stack.

Good to Have: AI & Advanced Capabilities
- Integrate state-of-the-art AI/ML models for personalization, recommendations, and semantic search.
- Implement Retrieval-Augmented Generation (RAG) pipelines with embeddings.
- Work on multimodal data processing (text, image, and video).

Preferred Skills & Qualifications

Core Stack:
- Front-End: HTML5, CSS3, JavaScript, TypeScript, React, Next.js, Tailwind CSS/Bootstrap/Material-UI
- Back-End: Python (Django/DRF), Node.js/Express
- Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB, vector databases (Pinecone, Milvus, Weaviate, Chroma)
- APIs: REST, GraphQL, gRPC

State-of-the-Art & Advanced Tools:
- Streaming: Apache Kafka, Apache Pulsar, Redis Streams
- Visualization: D3.js, Highcharts, Plotly, Deck.gl
- Deployment: Docker, Kubernetes, Helm, Terraform, ArgoCD
- Cloud: AWS Lambda, Azure Functions, Google Cloud Run
- Monitoring: Prometheus, Grafana, OpenTelemetry

Workflow Type: Back Office
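The vector-database requirement in the listing above boils down to nearest-neighbour search over embeddings. A dependency-free sketch of cosine-similarity search follows; real deployments would use one of the named stores (Pinecone, Milvus, Weaviate, Chroma), and the two-dimensional "embeddings" here are made up for illustration.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query, index, k=2):
    # Rank stored (doc_id, vector) pairs by similarity to the query vector.
    ranked = sorted(index, key=lambda item: cosine(query, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Tiny made-up embedding index (real vectors would have hundreds of dimensions).
index = [
    ("doc_a", [1.0, 0.0]),
    ("doc_b", [0.9, 0.1]),
    ("doc_c", [0.0, 1.0]),
]
```

A production vector database adds approximate-nearest-neighbour indexing (e.g., HNSW) so this search stays fast over millions of vectors, but the ranking principle is the same.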

Posted 5 days ago

Apply

4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
Analyst, Inclusive Innovation & Analytics, Center for Inclusive Growth

The Center for Inclusive Growth is the social impact hub at Mastercard. The organization seeks to ensure that the benefits of an expanding economy accrue to all segments of society. Through actionable research, impact data science, programmatic grants, stakeholder engagement and global partnerships, the Center advances equitable and sustainable economic growth and financial inclusion around the world. The Center’s work is at the heart of Mastercard’s objective to be a force for good in the world.

Reporting to the Vice President, Inclusive Innovation & Analytics, the Analyst will 1) create and/or scale data, data science, and AI solutions, methodologies, products, and tools to advance inclusive growth and the field of impact data science, 2) work on the execution and implementation of key priorities to advance external and internal data-for-social strategies, and 3) manage operations to ensure operational excellence across the Inclusive Innovation & Analytics team.

Key Responsibilities

Data Analysis & Insight Generation:
- Design, develop, and scale data science and AI solutions, tools, and methodologies to support inclusive growth and impact data science.
- Analyze structured and unstructured datasets to uncover trends, patterns, and actionable insights related to economic inclusion, public policy, and social equity.
- Translate analytical findings into compelling visualizations and dashboards that inform policy, program design, and strategic decision-making.
- Create dashboards, reports, and visualizations that communicate findings to both technical and non-technical audiences.
- Provide data-driven support for convenings involving philanthropy, government, private sector, and civil society partners.

Data Integration & Operationalization:
- Assist in building and maintaining data pipelines for ingesting and processing diverse data sources (e.g., open data, text, survey data).
- Ensure data quality, consistency, and compliance with privacy and ethical standards.
- Collaborate with data engineers and AI developers to support backend infrastructure and model deployment.

Team Operations:
- Manage team operations, meeting agendas, project management, and strategic follow-ups to ensure alignment with organizational goals.
- Lead internal reporting processes, including the preparation of dashboards, performance metrics, and impact reports.
- Support team budgeting, financial tracking, and process optimization.
- Support grantees and grants management as needed.
- Develop briefs, talking points, and presentation materials for leadership and external engagements.
- Translate strategic objectives into actionable data initiatives and track progress against milestones.
- Coordinate key activities and priorities in the portfolio, working across teams at the Center and the business as applicable to facilitate collaboration and information sharing.
- Support the revamp of the Measurement, Evaluation, and Learning frameworks and workstreams at the Center.
- Provide administrative support as needed.
- Manage ad-hoc projects and event organization.

Qualifications:
- Bachelor’s degree in Data Science, Statistics, Computer Science, Public Policy, or a related field.
- 2-4 years of experience in data analysis, preferably in a mission-driven or interdisciplinary setting.
- Strong proficiency in Python and SQL; experience with data visualization tools (e.g., Tableau, Power BI, Looker, Plotly, Seaborn, D3.js).
- Familiarity with unstructured data processing and robust machine learning concepts.
- Excellent communication skills and ability to work across technical and non-technical teams.

Technical Skills & Tools

Data Wrangling & Processing:
- Data cleaning, transformation, and normalization techniques
- Pandas, NumPy, Dask, Polars
- Regular expressions, JSON/XML parsing, web scraping (e.g., BeautifulSoup, Scrapy)

Machine Learning & Modeling:
- Scikit-learn, XGBoost, LightGBM
- Proficiency in supervised/unsupervised learning, clustering, classification, regression
- Familiarity with LLM workflows and tools like Hugging Face Transformers and LangChain (a plus)

Visualization & Reporting:
- Power BI, Tableau, Looker
- Python libraries: Matplotlib, Seaborn, Plotly, Altair
- Dashboarding tools: Streamlit, Dash
- Storytelling with data and stakeholder-ready reporting

Cloud & Collaboration Tools:
- Google Cloud Platform (BigQuery, Vertex AI), Microsoft Azure
- Git/GitHub, Jupyter Notebooks, VS Code
- Experience with APIs and data integration tools (e.g., Airflow, dbt)

Ideal Candidate
You are a curious and collaborative analyst who believes in the power of data to drive social change. You’re excited to work with cutting-edge tools while staying grounded in the real-world needs of communities and stakeholders.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard’s security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.
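The "data cleaning, transformation, and normalization" bullet above can be sketched with pandas, which the listing names. The toy survey extract below is invented for illustration, with the usual problems: an exact duplicate row and missing values.

```python
import pandas as pd

# Toy survey extract, invented for illustration only.
raw = pd.DataFrame({
    "region": ["North", "North", "South", None],
    "income": [52000, 52000, None, 41000],
})

cleaned = (
    raw.drop_duplicates()          # remove exact duplicate rows
       .dropna(subset=["region"])  # drop rows with no region at all
       # impute remaining missing incomes with the column median
       .assign(income=lambda d: d["income"].fillna(d["income"].median()))
)
```

Real pipelines would also validate dtypes and ranges (e.g., with Great Expectations) before the data reaches a model or dashboard.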

Posted 5 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
Associate Analyst, R Programmer-2

Overview
The Mastercard Economics Institute (MEI) is an economics lab powering scale at Mastercard by owning economic thought leadership in support of Mastercard’s efforts to build a more inclusive and sustainable digital economy. The Economics Institute was launched in 2020 to analyze economic trends through the lens of the consumer, delivering tailored and actionable insights on economic issues for customers, partners and policymakers. The Institute is composed of a team of economists and data scientists who utilize and synthesize the anonymized and aggregated data from the Mastercard network together with public data to bring powerful insights to life, in the form of 1:1 presentations, global thought leadership, media participation, and commercial work through the company’s product suites.

About the Role
We are looking for an R programmer to join Mastercard’s Economics Institute, reporting to the team lead for Economics Technology. An individual who will:
- create clear, compelling data visualisations that communicate economic insights to diverse audiences
- develop reusable R functions and packages to support analysis and automation
- create and format analytical content using R Markdown and/or Quarto
- design and build scalable Shiny apps
- develop interactive visualisations using JavaScript charting libraries (e.g. Plotly, Highcharts, D3.js) or front-end frameworks (e.g. React, Angular, Vue.js)
- work with databases and data platforms (e.g. SQL, Hadoop)
- write clear, well-documented code that others can understand and maintain
- collaborate using Git for version control

All About You
- proficient in R and the RStudio IDE
- proficient in R packages like dplyr for data cleaning, transformation, and aggregation
- familiarity with dependency management and documentation in R (e.g. roxygen2)
- familiar with version control concepts and tools (e.g. Git, GitHub, Bitbucket) for collaborative development
- experience writing SQL and working with relational databases
- creative and passionate about data, coding, and technology
- strong collaborator who can also work independently
- organized and able to prioritise work across multiple projects
- comfortable working with engineers, product owners, data scientists, economists

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard’s security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.

Posted 5 days ago

Apply

2.0 - 4.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Main Responsibilities:
- Participate in the design and development of end-to-end data science analytics solutions using associated technologies.
- Independently develop problem-solution steps and coding functionalities, and implement them through analysis of input and desired output.
- Perform data preprocessing, harmonization, and feature engineering to prepare modeling datasets.

Job Requirements:
- Bachelor’s or Master’s degree in Computer Science, Data Science, Data Analytics, or a similar field
- 2-4 years of working experience
- Advanced knowledge of Python programming and experience in data science using Python (familiar with the libraries Pandas, Scikit-learn, Streamlit, SpaCy, Gensim, NLTK, re, Plotly, Matplotlib, Hugging Face transformers, OpenAI, LangChain)
- Professional experience in the development of web applications such as WebApps and REST APIs with JavaScript frameworks and with Python
- Experience with PowerApps and Power Automate
- Experience with data visualization tools such as Power BI
- Experience in machine learning, especially supervised machine learning and classification (create, train, and apply classifiers and evaluate results)
- Experience with regular expressions and natural language processing (creating preprocessing pipelines)
- Familiar with and comfortable using the latest technological tools, including ChatGPT
- Working experience with LLMs (incorporating LLMs in WebApps to solve business tasks) and creating prompts for various tasks utilizing LangChain

Primary location: BASF Innovation Campus at Turbhe, Navi Mumbai
Job: Information Technology/Service
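The "regular expressions and natural language processing (creating preprocessing pipelines)" requirement can be sketched with the standard-library `re` module alone. This is a deliberately minimal pipeline; a real one would add tokenization, stop-word removal, and lemmatization with the NLP libraries the listing names (SpaCy, NLTK).

```python
import re

def preprocess(text):
    """Minimal text-cleaning pipeline: lowercase, strip URLs,
    drop punctuation/symbols, and collapse whitespace."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)   # remove URLs
    text = re.sub(r"[^a-z0-9\s]", " ", text)    # remove punctuation and symbols
    text = re.sub(r"\s+", " ", text).strip()    # collapse runs of whitespace
    return text
```

Each step is a single `re.sub` pass, so the pipeline is easy to extend or reorder as requirements change.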

Posted 5 days ago

Apply

8.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Description: Senior Full Stack Developer
Position: Senior Full Stack Developer
Location: Gurugram
Relevant Experience Required: 8+ years
Employment Type: Full-time

About the Role
We are looking for a Senior Full Stack Developer who can build end-to-end web applications with strong expertise in both front-end and back-end development. The role involves working with Django, Node.js, React, and modern database systems (SQL, NoSQL, and vector databases), while leveraging real-time data streaming, AI-powered integrations, and cloud-native deployments. The ideal candidate is a hands-on technologist with a passion for modern UI/UX, scalability, and performance optimization.

Key Responsibilities

Front-End Development:
- Build responsive and user-friendly interfaces using HTML5, CSS3, JavaScript, and React.
- Implement modern UI frameworks such as Next.js, Tailwind CSS, Bootstrap, or Material-UI.
- Create interactive charts and dashboards with D3.js, Recharts, Highcharts, or Plotly.
- Ensure cross-browser compatibility and optimize for performance and accessibility.
- Collaborate with designers to translate wireframes and prototypes into functional components.

Back-End Development:
- Develop RESTful & GraphQL APIs with Django/DRF and Node.js/Express.
- Design and implement microservices & event-driven architectures.
- Optimize server performance and ensure secure API integrations.

Database & Data Management:
- Work with structured (PostgreSQL, MySQL) and unstructured databases (MongoDB, Cassandra, DynamoDB).
- Integrate and manage vector databases (Pinecone, Milvus, Weaviate, Chroma) for AI-powered search and recommendations.
- Implement sharding, clustering, caching, and replication strategies for scalability.
- Manage both transactional and analytical workloads efficiently.

Real-Time Processing & Visualization:
- Implement real-time data streaming with Apache Kafka, Pulsar, or Redis Streams.
- Build live features (e.g., notifications, chat, analytics) using WebSockets & Server-Sent Events (SSE).
- Visualize large-scale data in real time for dashboards and BI applications.

DevOps & Deployment:
- Deploy applications on cloud platforms (AWS, Azure, GCP).
- Use Docker, Kubernetes, Helm, and Terraform for scalable deployments.
- Maintain CI/CD pipelines with GitHub Actions, Jenkins, or GitLab CI.
- Monitor, log, and ensure high availability with Prometheus, Grafana, and the ELK/EFK stack.

Good to Have: AI & Advanced Capabilities
- Integrate state-of-the-art AI/ML models for personalization, recommendations, and semantic search.
- Implement Retrieval-Augmented Generation (RAG) pipelines with embeddings.
- Work on multimodal data processing (text, image, and video).

Preferred Skills & Qualifications

Core Stack:
- Front-End: HTML5, CSS3, JavaScript, TypeScript, React, Next.js, Tailwind CSS/Bootstrap/Material-UI
- Back-End: Python (Django/DRF), Node.js/Express
- Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB, vector databases (Pinecone, Milvus, Weaviate, Chroma)
- APIs: REST, GraphQL, gRPC

State-of-the-Art & Advanced Tools:
- Streaming: Apache Kafka, Apache Pulsar, Redis Streams
- Visualization: D3.js, Highcharts, Plotly, Deck.gl
- Deployment: Docker, Kubernetes, Helm, Terraform, ArgoCD
- Cloud: AWS Lambda, Azure Functions, Google Cloud Run
- Monitoring: Prometheus, Grafana, OpenTelemetry

Posted 5 days ago

Apply

6.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Description: Senior MLOps Engineer
Position: Senior MLOps Engineer
Location: Gurugram
Relevant Experience Required: 6+ years
Employment Type: Full-time

About the Role
We are seeking a Senior MLOps Engineer with deep expertise in machine learning operations, data engineering, and cloud-native deployments. This role requires building and maintaining scalable ML pipelines, ensuring robust data integration and orchestration, and enabling real-time and batch AI systems in production. The ideal candidate will be skilled in state-of-the-art MLOps tools, data clustering, big data frameworks, and DevOps best practices, ensuring high reliability, performance, and security for enterprise AI workloads.

Key Responsibilities

MLOps & Machine Learning Deployment:
- Design, implement, and maintain end-to-end ML pipelines from experimentation to production.
- Automate model training, evaluation, versioning, deployment, and monitoring using MLOps frameworks.
- Implement CI/CD pipelines for ML models (GitHub Actions, GitLab CI, Jenkins, ArgoCD).
- Monitor ML systems in production for drift detection, bias, performance degradation, and anomaly detection.
- Integrate feature stores (Feast, Tecton, Vertex AI Feature Store) for standardized model inputs.

Data Engineering & Integration:
- Design and implement data ingestion pipelines for structured, semi-structured, and unstructured data.
- Handle batch and streaming pipelines with Apache Kafka, Apache Spark, Apache Flink, Airflow, or Dagster.
- Build ETL/ELT pipelines for data preprocessing, cleaning, and transformation.
- Implement data clustering, partitioning, and sharding strategies for high availability and scalability.
- Work with data warehouses (Snowflake, BigQuery, Redshift) and data lakes (Delta Lake, Lakehouse architectures).
- Ensure data lineage, governance, and compliance with modern tools (DataHub, Amundsen, Great Expectations).

Cloud & Infrastructure:
- Deploy ML workloads on AWS, Azure, or GCP using Kubernetes (K8s) and serverless computing (AWS Lambda, GCP Cloud Run).
- Manage containerized ML environments with Docker, Helm, Kubeflow, MLflow, Metaflow.
- Optimize for cost, latency, and scalability across distributed environments.
- Implement infrastructure as code (IaC) with Terraform or Pulumi.

Real-Time ML & Advanced Capabilities:
- Build low-latency, real-time inference pipelines using gRPC, Triton Inference Server, or Ray Serve.
- Work on vector database integrations (Pinecone, Milvus, Weaviate, Chroma) for AI-powered semantic search.
- Enable retrieval-augmented generation (RAG) pipelines for LLMs.
- Optimize ML serving with GPU/TPU acceleration and ONNX/TensorRT model optimization.

Security, Monitoring & Observability:
- Implement robust access control, encryption, and compliance with SOC 2/GDPR/ISO 27001.
- Monitor system health with Prometheus, Grafana, ELK/EFK, and OpenTelemetry.
- Ensure zero-downtime deployments with blue-green/canary release strategies.
- Manage audit trails and explainability for ML models.

Preferred Skills & Qualifications

Core Technical Skills:
- Programming: Python (Pandas, PySpark, FastAPI), SQL, Bash; familiarity with Go or Scala a plus.
- MLOps Frameworks: MLflow, Kubeflow, Metaflow, TFX, BentoML, DVC.
- Data Engineering Tools: Apache Spark, Flink, Kafka, Airflow, Dagster, dbt.
- Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB.
- Vector Databases: Pinecone, Weaviate, Milvus, Chroma.
- Visualization: Plotly Dash, Superset, Grafana.

Tech Stack:
- Orchestration: Kubernetes, Helm, Argo Workflows, Prefect.
- Infrastructure as Code: Terraform, Pulumi, Ansible.
- Cloud Platforms: AWS (SageMaker, S3, EKS), GCP (Vertex AI, BigQuery, GKE), Azure (ML Studio, AKS).
- Model Optimization: ONNX, TensorRT, Hugging Face Optimum.
- Streaming & Real-Time ML: Kafka, Flink, Ray, Redis Streams.
- Monitoring & Logging: Prometheus, Grafana, ELK, OpenTelemetry.
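One of the simplest checks behind the "drift detection" responsibility above is comparing a feature's live distribution against its training baseline. A dependency-free sketch using mean shift measured in baseline standard deviations follows; production systems would use proper statistical tests (e.g., PSI or Kolmogorov-Smirnov) and the monitoring stack the listing names. All values below are invented.

```python
import statistics

def drift_score(baseline, live):
    # Shift of the live mean from the baseline mean,
    # expressed in baseline standard deviations (a z-score-like measure).
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(live) - mu) / sigma

def has_drifted(baseline, live, threshold=3.0):
    # Flag drift when the live mean moves more than `threshold` std-devs.
    return drift_score(baseline, live) > threshold

baseline = [10.0, 11.0, 9.0, 10.5, 9.5]   # training-time feature values (toy)
stable   = [10.2, 9.8, 10.1]              # live batch, similar distribution
shifted  = [25.0, 26.0, 24.5]             # live batch after an upstream change
```

In practice such a score is emitted as a metric (e.g., to Prometheus) per feature per batch, and alerting rules fire when the threshold is crossed.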

Posted 5 days ago

Apply

3.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Locations: Bengaluru | Gurgaon

Who We Are
Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we help clients with total transformation: inspiring complex change, enabling organizations to grow, building competitive advantage, and driving bottom-line impact. To succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures, and business purpose. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, generating results that allow our clients to thrive.

We Are BCG X
We’re a diverse team of more than 3,000 tech experts united by a drive to make a difference. Working across industries and disciplines, we combine our experience and expertise to tackle the biggest challenges faced by society today. We go beyond what was once thought possible, creating new and innovative solutions to the world’s most complex problems. Leveraging BCG’s global network and partnerships with leading organizations, BCG X provides a stable ecosystem for talent to build game-changing businesses, products, and services from the ground up, all while growing their career. Together, we strive to create solutions that will positively impact the lives of millions.

What You'll Do
Are you passionate about developing socially and ethically responsible AI systems? Are you excited by the prospect of supporting projects that apply cutting-edge AI and GenAI models to solve real-world problems?

Imagine working with the BCG Responsible AI team, where you can develop and apply innovative tools for testing and evaluating GenAI products. Your daily work would involve designing and implementing testing and evaluation frameworks to help improve product quality and ensure AI systems are safe, secure, and equitable. Join a team dedicated to exploring creative solutions and pioneering advancements in Responsible AI. Your work will influence the development of GenAI applications, focusing on transparency, accountability, and trustworthiness. On the team, you'll collaborate with brilliant minds and make a tangible impact on shaping the future of AI, ensuring it meets the highest standards for responsibility and effectiveness. You will have the opportunity to support a wide variety of projects, spanning use cases, technologies, industries, and clients. Embrace the challenge of working at the intersection of cutting-edge technology, ethical innovation, and real-world applications.

The Responsible AI Applied Scientist plays a critical role in the BCG Responsible AI team’s safety testing and red teaming efforts, working in conjunction with our AI and GenAI product teams to ensure alignment with our Responsible AI policy, principles, and standards, and to support the design, continuous improvement, and execution of the overall Responsible AI program at BCG.

The Responsible AI Applied Scientist will work with a diverse set of stakeholders to:
- Develop tools and techniques to scale and accelerate AI risk assessment and measurement across BCG and our clients
- Collaborate with product teams to influence quality and risk measurement and mitigations in AI/GenAI products through manual efforts and scalable, quantitative approaches
- Research new and emerging threats and measurement/evaluation approaches to ensure our approaches stay on the cutting edge
- Work with small technical teams executing risk assessment and measurement on AI/GenAI products
- Train and mentor technical practitioners on measurement and evaluation approaches for GenAI products
- Remain up to date on emerging frameworks, standards, technical approaches, and related issues by participating in workshops, reading professional publications, maintaining personal networks, and participating in professional organizations

What You'll Bring
- A PhD in the social sciences and at least 3 years of professional data science or quantitative research experience; prior professional services experience is a plus
- Ability to think critically about ethical, social, and business risks posed by AI systems
- Passion for building things and comfort working with modern data science development tools
- Experience designing and analyzing experiments using advanced statistical methods
- Experience and enthusiasm for working with generative AI technologies
- Broad conceptual understanding of ML and AI paradigms (e.g., tree-based and gradient-boosted models, deep learning)
- Strong time management and organizational skills with the ability to prioritize and execute projects independently
- Team player mindset with an ability to work on diverse, cross-functional teams
- Strong written, verbal, and visual communication skills
- Ability to explain sophisticated data science concepts to non-technical audiences and translate analytical results into business implications
#BCGXjob

Technical Skills

Must-Have Experience:
- Design and analysis of experiments
- Statistical modeling, including hierarchical linear models
- GenAI, including prompt engineering and programmatically interacting with foundation models through APIs (e.g., OpenAI, Anthropic, Hugging Face)
- Python and the open-source data science ecosystem (e.g., Jupyter, pandas, scikit-learn, statsmodels, plotly)
- Version control with git

Nice-to-Have Experience:
- Quantitative social science research
- Familiarity with software engineering practices (e.g., unit testing, CI/CD)
- Exposure to cloud platforms (AWS, Azure, GCP) or SQL databases
- Knowledge of AI risk management frameworks (e.g., NIST AI RMF)
- Developer tools including IDEs (e.g., VS Code, PyCharm) and environment management (e.g., pyenv, conda, poetry, Docker)

Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity/expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer.

Posted 6 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Frontend & Deployment Engineer - User Interface and Edge Expert

1. Role Objective:
Responsible for creating intuitive user interfaces and managing edge deployments for AI applications. Must excel in developing user-friendly products that work seamlessly across various naval environments and devices. The engineer will be expected to design and build intuitive, responsive, and secure interfaces for naval AI applications; package and deploy applications on warship consoles, edge devices, or naval desktops/tablets; and ensure usability in network-constrained, offline, or rugged field environments.

2. Key Responsibilities:
2.1. Develop UI interfaces for dashboards, alerts, geospatial visualization, and analytics.
2.2. Integrate with AI outputs from the backend (text, audio, image, event streams).
2.3. Build Electron-based desktop apps and Flutter-based mobile clients for offline use.
2.4. Ensure local caching, secure data handling, and session management.
2.5. Implement UI analytics to track user interaction and usage heatmaps (if permitted).
2.6. Work with end users (officers/sailors/operators) for field testing and UX feedback.
2.7. Ensure compliance with design constraints: minimal memory footprint, low power, rapid boot.

3. Educational Qualifications
Essential Requirements:
3.1. B.Tech/M.Tech in Computer Science, Information Technology, or a related field.
3.2. Strong foundation in user interface design and human-computer interaction.
3.3. Minimum 70% marks or 7.5 CGPA in relevant disciplines.
Desired Professional Certifications:
3.4. Google UX Design Professional Certificate.
3.5. Adobe Certified Expert in UX Design.
3.6. Human-Computer Interaction certification.
3.7. Mobile app development certifications (iOS/Android).

Core Skills & Tools:

4. Frontend Development:
4.1. Languages: JavaScript (ES6+), TypeScript, HTML5, CSS3, WebAssembly
4.2. Frameworks: React, Vue.js, Angular, Svelte, Next.js, Nuxt.js
4.3. Mobile Development: React Native, Flutter, Swift (iOS), Kotlin (Android)
4.4. UI Libraries: Material-UI, Ant Design, Chakra UI, Tailwind CSS
4.5. State Management: Redux, MobX, Vuex, Context API, Zustand

5. Visualization & Interactive Interfaces:
5.1. Data Visualization: D3.js, Chart.js, Plotly, Observable, Three.js
5.2. Maps & Geospatial: Leaflet, Mapbox, OpenLayers, Google Maps API
5.3. Real-time Updates: WebSocket, Server-Sent Events, Socket.io
5.4. Progressive Web Apps: Service Workers, offline capabilities, push notifications

6. Edge Deployment:
6.1. Edge Computing: NVIDIA Jetson, Intel NUC, Raspberry Pi
6.2. Container Edge: K3s, MicroK8s, Docker on edge devices
6.3. Mobile Deployment: iOS App Store, Google Play Store deployment
6.4. Desktop Applications: Electron, Tauri, Progressive Web Apps
6.5. Embedded Systems: Basic embedded programming, hardware interfacing

7. Experience Requirements:
7.1. Production experience with modern frontend frameworks
7.2. Mobile app development and deployment experience
7.3. Understanding of responsive design and accessibility standards
7.4. Experience with version control and collaborative development
7.5. Led frontend architecture decisions for complex applications
7.6. Experience with micro-frontend architectures
7.7. Performance optimization for web and mobile applications
7.8. Experience mentoring junior developers and conducting code reviews

8. Cross-Compatibility Requirements:
8.1. Consume APIs exposed by the backend with proper schema alignment.
8.2. Handle AI outputs (JSON, image, video) from inference modules designed by the AI Engineer.
8.3. Provide test UI hooks to the backend team for automation/integration testing.
8.4. Work with the AI Engineer to display inference confidence and alerts.

Posted 6 days ago

Apply

0.0 - 1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description - Jr. Data Scientist
Experience: 0-1 year | Employment Type: Full-time

Overview: We are looking for a motivated Data Scientist with foundational data science expertise. This position is ideal for recent graduates or early-career professionals eager to work with real-world data, applying both standard and advanced preprocessing and modeling techniques in a collaborative environment.

PLEASE NOTE:
Mandatory: Email your CV to careers@solvusai.com with the subject line: “Job ID 202507-DS01: ”
This is a full-time role with a hybrid work model
Students currently pursuing a degree should not apply
A strong foundation and clear understanding of data science concepts is essential

Key Responsibilities
1. Data Ingestion & Preparation: Extract and manipulate data using SQL and Python (pandas); import and clean both structured and unstructured datasets
2. Data Preprocessing & Feature Engineering: Handle missing values, outliers, and duplicates using statistical and ML techniques; apply noise reduction, data integration, and transformation (e.g., scaling, encoding); perform dimensionality reduction (e.g., PCA) and ensure quality through data validation
3. Model Building & Evaluation: Develop machine learning models (e.g., Linear Regression, Random Forest, XGBoost, ARIMA, LSTM) for varied problem types; tune hyperparameters using cross-validation and assess models using standard metrics (accuracy, RMSE, F1-score, etc.)
4. Visualization & Insight Generation: Build dashboards and visualizations using tools like Matplotlib, Seaborn, Plotly, or Tableau; conduct statistical analysis to derive actionable business insights and present findings clearly
5. Team Collaboration: Work closely with cross-functional teams (data engineers, analysts, business units) to align deliverables with organizational goals; participate in agile discussions and contribute to iterative development

Required Skills
Bachelor's or Master's in Data Science, Computer Science, Statistics, or a related field
Proficiency in Python or R
Strong SQL skills for data extraction/manipulation from relational databases
Experience handling CSV/Excel data ingestion; advanced data cleaning techniques
Understanding and implementation of various machine learning models (e.g., Linear Regression, Decision Tree, Random Forest, XGBoost, ARIMA, LSTM, SVM, K-Means) and their practical applications
Ability to evaluate model performance using appropriate metrics: accuracy, precision, recall, F1-score, RMSE, MAE, ROC-AUC, etc.
Experience with feature scaling and categorical variable encoding
Data visualization using Matplotlib, Seaborn, Plotly, or Tableau
Analytical thinking, problem-solving, teamwork, and clear communication

Ready to make an impact by transforming data into meaningful insights? Apply now!
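To illustrate two of the preprocessing steps this posting lists (missing-value handling and feature scaling), here is a minimal sketch in plain Python. The function names and data are illustrative only; a real pipeline would use pandas and scikit-learn:

```python
# Hedged sketch: median imputation and min-max scaling using only the
# standard library. Names and data are illustrative, not from the posting.
from statistics import median

def impute_median(values):
    """Replace None entries with the median of the observed values."""
    observed = [v for v in values if v is not None]
    med = median(observed)
    return [med if v is None else v for v in values]

def min_max_scale(values):
    """Scale values to the [0, 1] range (feature scaling)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

readings = [10.0, None, 30.0, 20.0]
clean = impute_median(readings)   # None -> 20.0 (median of 10, 30, 20)
scaled = min_max_scale(clean)     # [0.0, 0.5, 1.0, 0.5]
```

In practice `sklearn.impute.SimpleImputer` and `sklearn.preprocessing.MinMaxScaler` cover the same ground with fit/transform semantics suitable for train/test splits.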

Posted 6 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description: This internship offers a unique opportunity to contribute to the development of cutting-edge tools in aviation safety and data analysis, leveraging the power of Machine Learning. The mission for this internship is to: Develop an interactive tool for mapping of Flight Data Recorder parameters based on Systems and subsystems. This will require the selected candidate to: Become familiarized with the system-wide functionality of Airbus aircraft; codify the potentially-affected parameters for a given incident to map the fault propagation tree; conduct research on, and ultimately implement, the most appropriate Machine Learning algorithm(s) to be leveraged for this particular time-series application; and build data visualization and user-interface tools. By the end of this internship, you'll have delivered: A robust tool which will provide an exhaustive list of FDR parameters along with a practical representation of how, and which, systems are impacted with respect to a particular fault. A first-order proof of concept of the applicability of Machine Learning techniques in this space as demonstrated by a limited-scope use case. Required Skills: Strong programming skills in Python; knowledge of key machine learning libraries (Scikit-learn, TensorFlow) and scientific computing libraries (e.g., SciPy) is an asset. Experience with Machine Learning (ML) for time-series applications, including exposure to supervised and unsupervised learning algorithms, and an understanding of data preprocessing, feature engineering, and data visualization methods. Experience with data visualization libraries (e.g., Matplotlib, Plotly, D3.js). Familiarity with UI/UX design principles for intuitive interfaces would be a plus. This internship is ideal for students pursuing degrees in Computer Science, Aerospace Engineering, Data Science, or a related field, with a keen interest in aviation data, artificial intelligence, and software development.
This job requires an awareness of any potential compliance risks and a commitment to act with integrity, as the foundation for the Company’s success, reputation and sustainable growth. Company: Airbus India Private Limited Employment Type: Internship Experience Level: Student Job Family: Testing By submitting your CV or application you are consenting to Airbus using and storing information about you for monitoring purposes relating to your application or future employment. This information will only be used by Airbus. Airbus is committed to achieving workforce diversity and creating an inclusive working environment. We welcome all applications irrespective of social and cultural background, age, gender, disability, sexual orientation or religious belief. Airbus is, and always has been, committed to equal opportunities for all. As such, we will never ask for any type of monetary exchange in the frame of a recruitment process. Any impersonation of Airbus to do so should be reported to emsom@airbus.com. At Airbus, we support you to work, connect and collaborate more easily and flexibly. Wherever possible, we foster flexible working arrangements to stimulate innovative thinking.
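The fault-propagation mapping this internship describes can be sketched as a graph traversal: given a dependency map between systems, list every FDR parameter a fault could affect. The system names, dependencies, and parameter names below are invented for illustration and are not Airbus data:

```python
# Hedged sketch of the fault-propagation idea: breadth-first walk over a
# hypothetical system-dependency graph to collect potentially-affected
# FDR parameters. All names here are illustrative, not real aircraft data.
from collections import deque

DEPENDS_ON = {                 # child systems affected by a parent's fault
    "hydraulics": ["landing_gear", "brakes"],
    "brakes": ["antiskid"],
    "landing_gear": [],
    "antiskid": [],
}
PARAMETERS = {                 # FDR parameters recorded per system
    "hydraulics": ["HYD_PRESS_1"],
    "landing_gear": ["GEAR_POS"],
    "brakes": ["BRAKE_TEMP"],
    "antiskid": ["ANTISKID_ACTIVE"],
}

def affected_parameters(fault_system):
    """Collect parameters of every system reachable from the faulty one."""
    seen, queue, params = set(), deque([fault_system]), []
    while queue:
        system = queue.popleft()
        if system in seen:
            continue
        seen.add(system)
        params.extend(PARAMETERS.get(system, []))
        queue.extend(DEPENDS_ON.get(system, []))
    return params

affected_parameters("hydraulics")
# ['HYD_PRESS_1', 'GEAR_POS', 'BRAKE_TEMP', 'ANTISKID_ACTIVE']
```

A real tool would learn or validate these edges from recorded incident data rather than hand-coding them.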

Posted 1 week ago

Apply

3.0 years

0 Lacs

Kochi, Kerala, India

On-site

Job Title: Applied AI Engineer – LLMs, LangChain, Agentic Systems
📍 Location: Kochi (first 6 months) → Bangalore (long-term)
🕑 Experience: 2–3 years (must be hands-on)
📡 Mode: Full-time, on-site (hybrid possible after Kochi phase)
🧭 Reports To: AI Innovation Leader

About the Role: We’re hiring a hands-on Applied AI Engineer to join a pioneering AI Innovation initiative at Geojit Financial Services, one of India’s most respected and long-standing financial services firms. As part of Geojit’s Vision 2030 strategy, we’re embedding AI across all layers of the business — from transforming customer experience to automating intelligence in operations, wealth, and capital market platforms. This role offers the unique opportunity to work directly with leadership, build real-world AI systems, and be part of the core team shaping the future of intelligent finance in India.

What You’ll Work On: As part of the AI Innovation Center of Excellence (CoE), your responsibilities include:

🚀 AI Solution Development: Build and deploy LLM-powered agents for real business use cases across customer service, research automation, internal knowledge retrieval, and document intelligence. Design agentic AI workflows using LangChain — integrating memory, tools, retrievers, custom functions, and chaining logic. Evaluate and integrate Text-to-SQL models that translate natural queries into live database queries. Leverage AWS Bedrock and SageMaker pipelines for experimentation, deployment, and orchestration.

📊 Data Engineering & EDA: Work with structured and semi-structured data (CSV, SQL, JSON, APIs). Perform Exploratory Data Analysis (EDA) using Plotly, Dash, or Streamlit for internal tools and decision support. Assist in creating reusable tools, dashboards, or data layers to support ongoing experimentation.

🛠️ Architecture & MLOps: Collaborate with the AI Lead and Infra teams to design scalable MLOps setups. Create or contribute to prompt optimization frameworks, RAG pipelines, evaluation frameworks, and agent monitoring tools. Integrate APIs, vector stores, document retrievers, and internal knowledge bases.

Required Skills: ✅ 2–3 years hands-on experience with:
LLMs and GenAI APIs – OpenAI, Cohere, Anthropic, etc.
LangChain (must-have) – agents, tools, memory, multi-hop chains, and integration experience
Agentic AI – experience designing task-based autonomous or semi-autonomous workflows
Text-to-SQL models – understanding of evaluation techniques and production concerns
AWS ecosystem – especially SageMaker, Bedrock, Lambda, API Gateway
Data wrangling & visualization – Pandas, SQL, Plotly/Dash/Streamlit

Nice to have:
Fintech domain exposure
Participation in AI/ML hackathons or open-source contributions
Familiarity with vector search, embeddings (FAISS, Weaviate, etc.)
Frontend integration knowledge for internal tools (basic Streamlit/Dash/Flask)

About the Project & Culture: You’ll be working in a startup-style team embedded inside a legacy enterprise — delivering quick iterations, visible impact, and cross-functional collaboration with leadership. The goal is to show business value in less than 100 days, and scale AI across: Capital Markets, Wealth & Investment Advisory, Mutual Funds & Insurance Distribution, Internal Productivity Systems.

About Geojit: Geojit Financial Services Ltd is a pioneer in India’s capital markets with a 38-year legacy, 1M+ clients, and presence across 500+ offices. With backing from BNP Paribas and KSIDC, Geojit has been a leader in digital innovation: India’s first online trading platform (2000); early movers in mobile trading, Smartfolios, Funds Genie, and Portfolio Management. Now entering a new AI-led chapter under the leadership of Jayakrishnan Sasidharan (ex-Adobe, Wipro, TCS). They are committed to delivering personalized, intelligent, and frictionless experiences to millions of investors — powered by AI, data, and design.

Why Join Us?
Work on real GenAI deployments with measurable ROI
Direct access to decision-makers, not just product managers
Be part of India’s next Fintech transformation story
Hands-on exposure to enterprise-scale AI architecture
Build something that millions will use — not just a lab prototype

📩 How to Apply: Send your resume via WhatsApp (preferred) or email:
📱 WhatsApp: +91 9008000020
📧 Email: sureshaisales@gmail.com
📝 Subject: “Applied AI Role – Geojit”
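One Text-to-SQL evaluation technique this posting asks about is execution accuracy: run the model's generated SQL and a reference query against the same database and compare result sets. A minimal sketch using the standard-library sqlite3 module (schema and queries are made up for illustration):

```python
# Hedged sketch of execution-accuracy evaluation for Text-to-SQL output.
# The table, data, and queries are illustrative, not from the posting.
import sqlite3

def execution_match(conn, predicted_sql, gold_sql):
    """True if both queries return the same rows (order-insensitive)."""
    try:
        pred = conn.execute(predicted_sql).fetchall()
    except sqlite3.Error:
        return False                      # invalid SQL counts as a miss
    gold = conn.execute(gold_sql).fetchall()
    return sorted(pred) == sorted(gold)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (symbol TEXT, qty INTEGER)")
conn.executemany("INSERT INTO trades VALUES (?, ?)",
                 [("INFY", 10), ("TCS", 5), ("INFY", 7)])

# A semantically equivalent but differently written query still matches.
execution_match(conn,
                "SELECT symbol, SUM(qty) FROM trades GROUP BY symbol",
                "SELECT symbol, SUM(qty) AS total FROM trades GROUP BY symbol")
```

Production evaluations add guards this sketch omits: read-only connections, query timeouts, and handling of NULL/float comparison edge cases.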

Posted 1 week ago

Apply

0.0 - 4.0 years

0 Lacs

karnataka

On-site

This internship offers you a unique opportunity to contribute to the development of cutting-edge tools in aviation safety and data analysis, leveraging the power of Machine Learning. Your mission during this internship is to develop an interactive tool for mapping Flight Data Recorders (FDR) parameters based on Systems and subsystems. This will require you to become familiarized with the system-wide functionality of Airbus aircraft, codify potentially-affected parameters for a given incident to map the fault propagation tree, conduct research on, and ultimately implement, the most appropriate Machine Learning algorithm(s) for this time-series application, and build data visualization and user-interface tools. By the end of this internship, you will have delivered a robust tool that provides an exhaustive list of FDR parameters along with a practical representation of the impact on systems related to a particular fault. Additionally, you will have created a first-order proof of concept demonstrating the applicability of Machine Learning techniques in this space through a limited-scope use case. Required Skills: - Strong programming skills in Python. Knowledge of key machine learning libraries (Scikit-learn, Tensorflow) and scientific computing libraries (e.g., SciPy) is an asset. - Experience with Machine Learning (ML) for time-series applications, including exposure to supervised and unsupervised learning algorithms, an understanding of data preprocessing, feature engineering, and data visualization methods. - Experience with data visualization libraries (e.g., Matplotlib, Plotly, D3.js). - Familiarity with UI/UX design principles for intuitive interfaces would be a plus. This internship is ideal for students pursuing degrees in Computer Science, Aerospace Engineering, Data Science, or related fields, with a keen interest in aviation data, artificial intelligence, and software development. 
This job requires an awareness of potential compliance risks and a commitment to act with integrity as the foundation for the Company's success, reputation, and sustainable growth. Company: Airbus India Private Limited Employment Type: Internship Experience Level: Student Job Family: Testing By submitting your CV or application, you are consenting to Airbus using and storing information about you for monitoring purposes related to your application or future employment. This information will only be used by Airbus. Airbus is committed to equal opportunities for all and will never ask for any monetary exchange during the recruitment process. Any impersonation of Airbus for such requests should be reported to emsom@airbus.com. At Airbus, we support you to work, connect, and collaborate more easily and flexibly. We foster flexible working arrangements wherever possible to stimulate innovative thinking.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Join Amgen’s Mission of Serving Patients: At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. Data Engineer – R&D – Multi-Omics. What You Will Do: Let’s do this. Let’s change the world. In this vital role you will be responsible for the development and maintenance of software in support of target/biomarker discovery at Amgen.
Design, develop, and implement data pipelines, ETL/ELT processes, and data integration solutions
Contribute to data pipeline projects from inception to deployment; manage scope, timelines, and risks
Contribute to data models for biopharma scientific data, data dictionaries, and other documentation to ensure data accuracy and consistency
Optimize large datasets for query performance
Collaborate with global cross-functional teams, including research scientists, to understand data requirements and design solutions that meet business needs
Implement data security and privacy measures to protect sensitive data
Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
Collaborate with Data Architects, Business SMEs, Software Engineers, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions
Identify and resolve data-related challenges
Adhere to best practices for coding, testing, and designing reusable code/components
Explore new tools and technologies that will help to improve ETL platform performance
Participate in sprint planning meetings and provide estimations on technical implementation
Maintain documentation of processes, systems, and solutions

What We Expect Of You: We are all different, yet we all use our unique contributions to serve patients. The role requires proficiency in scientific software development (e.g., Python, R, R Shiny, Plotly Dash) and some knowledge of CI/CD processes and cloud computing technologies (e.g., AWS, Google Cloud). Basic Qualifications: Master’s/Bachelor’s degree and 5 to 9 years of Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or related field experience. Preferred Qualifications: 5+ years of experience in designing and supporting biopharma scientific research data analytics software platforms.
Functional Skills:
Must-Have Skills:
Proficiency with SQL and Python for data engineering, test automation frameworks (pytest), and scripting tasks
Hands-on experience with big data technologies and platforms, such as Databricks (or equivalent), Apache Spark (PySpark, Spark SQL), workflow orchestration, and performance tuning on big data processing
Excellent problem-solving skills and the ability to work with large, complex datasets
Good-to-Have Skills:
Experience with git, CI/CD, and the software development lifecycle
Experience with SQL and relational databases (e.g., PostgreSQL, MySQL, Oracle) or Databricks
Experience with cloud computing platforms and infrastructure (AWS preferred)
Experience using and adopting an Agile framework
A passion for tackling complex challenges in drug discovery with technology and data
Basic understanding of data modeling, data warehousing, and data integration concepts
Experience with data visualization tools (e.g., Dash, Plotly, Spotfire)
Experience with diagramming and collaboration tools such as Miro, Lucidchart, or similar tools for process mapping and brainstorming
Experience writing and maintaining technical documentation in Confluence
Professional Certifications: Databricks Certified Data Engineer Professional preferred
Soft Skills:
Excellent critical-thinking and problem-solving skills
Strong communication and collaboration skills
High degree of initiative and self-motivation
Demonstrated presentation skills
Ability to manage multiple priorities successfully
Team-oriented with a focus on achieving team goals

What You Can Expect Of Us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
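As a small illustration of the ETL-style cleaning the Amgen posting describes, here is a hedged sketch in plain Python: standardize field names and drop incomplete records. The field names are invented; a production pipeline at this scale would run in Databricks/PySpark rather than plain Python:

```python
# Hedged sketch of a record-cleaning ETL step. Field names ("Sample ID",
# "Assay") are illustrative, not from the posting.
def clean_records(rows, required=("sample_id", "assay")):
    """Lower-case/underscore the keys, then keep only complete rows."""
    cleaned = []
    for row in rows:
        normalized = {k.strip().lower().replace(" ", "_"): v
                      for k, v in row.items()}
        if all(normalized.get(field) not in (None, "") for field in required):
            cleaned.append(normalized)
    return cleaned

raw = [
    {"Sample ID": "S1", "Assay": "ELISA"},
    {"Sample ID": "",   "Assay": "ELISA"},   # dropped: missing sample_id
    {"Sample ID": "S2", "Assay": "qPCR"},
]
clean_records(raw)
# [{'sample_id': 'S1', 'assay': 'ELISA'}, {'sample_id': 'S2', 'assay': 'qPCR'}]
```

Steps like this are also natural pytest targets, which matches the posting's emphasis on test automation alongside pipeline code.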

Posted 1 week ago

Apply

3.0 years

0 Lacs

Kochi, Kerala, India

On-site

Job Title: AI/ML Engineer – Industrial Applications
Location: Kochi
Company: Goose Industrial Solutions Pvt. Ltd.
Experience: 1–3 years
Job Type: Full-Time

About the Role: We are looking for a passionate and hands-on AI/ML Engineer to join our Industrial AI Application Hub in Kochi. The ideal candidate will work at the intersection of automation, machine learning, and industrial data to build real-world use cases and AI-powered products for the manufacturing, food & beverage, and process industries.

🛠️ Key Responsibilities:
Develop, train, and deploy ML models using industrial data (SCADA, sensor logs, PLC time series, etc.)
Collaborate with automation and instrumentation teams to understand plant processes and design relevant use cases
Create predictive maintenance models (for pumps, valves, drives, etc.)
Work on computer vision use cases (e.g., product inspection, leakage detection)
Design and develop dashboards and integrate ML insights into Goose's automation systems
Conduct PoCs and pilot projects at customer sites
Document and present use cases and findings internally and externally

🧩 Key Skills Required:
Strong knowledge of Python, NumPy, Pandas, Scikit-learn, TensorFlow/PyTorch
Experience in time-series analysis, anomaly detection, or predictive modeling
Familiarity with OpenCV or other computer vision tools
Understanding of edge computing and cloud integration (optional but preferred)
Basic exposure to industrial protocols (Modbus, OPC UA, MQTT) is a plus
Experience with SQL, NoSQL, and data visualization tools like Plotly, Dash, or Power BI

🎓 Qualifications:
B.E./B.Tech or M.Tech in Computer Science, Electronics, Electrical, or related fields
1–3 years of relevant industry or academic experience in AI/ML
Industrial/IoT/SCADA data experience will be an added advantage
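The sensor-data anomaly detection this posting mentions can be illustrated with a rolling z-score check: flag any reading that deviates far from the mean of a trailing window. The window size, threshold, and vibration data below are illustrative assumptions, not production values:

```python
# Hedged sketch of rolling z-score anomaly detection on a sensor series.
# Window, threshold, and data are illustrative only.
from statistics import mean, stdev

def anomalies(series, window=5, k=3.0):
    """Return indices whose value deviates > k sigma from the prior window."""
    flagged = []
    for i in range(window, len(series)):
        prior = series[i - window:i]
        mu, sigma = mean(prior), stdev(prior)
        if sigma > 0 and abs(series[i] - mu) > k * sigma:
            flagged.append(i)
    return flagged

pump_vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 1.0, 1.1, 9.0, 1.0]
anomalies(pump_vibration)   # [7] — the 9.0 spike
```

Real predictive-maintenance models layer learned baselines (e.g., isolation forests or LSTM forecasters) on top of simple statistical screens like this.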

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

chennai, tamil nadu

On-site

The role of Data Scientist – Clinical Data Extraction & AI Integration in our healthcare technology team requires an experienced individual with 3-6 years of experience. As a Data Scientist in this role, you will be primarily focused on medical document processing and data extraction systems. You will have the opportunity to work with advanced AI technologies to create solutions that enhance the extraction of crucial information from clinical documents, thereby improving healthcare data workflows and patient care outcomes. Your key responsibilities will include designing and implementing statistical models for medical data quality assessment, and developing predictive algorithms for encounter classification and validation. You will also be responsible for building machine learning pipelines for document pattern recognition, creating data-driven insights from clinical document structures, and implementing feature engineering for medical terminology extraction. Furthermore, you will apply natural language processing (NLP) techniques to clinical text, develop statistical validation frameworks for extracted medical data, and build anomaly detection systems for medical document processing. Additionally, you will create predictive models for discharge date estimation and encounter duration, and implement clustering algorithms for provider and encounter classification. In terms of AI & LLM Integration, you will be expected to integrate and optimize Large Language Models via AWS Bedrock and API services, design and refine AI prompts for clinical content extraction with high accuracy, and implement fallback logic and error handling for AI-powered extraction systems. You will also develop pattern matching algorithms for medical terminology and create validation layers for AI-extracted medical information. Having expertise in the healthcare domain is crucial for this role.
You will work closely with medical document structures, implement healthcare-specific validation rules, handle medical terminology extraction, and conduct clinical context analysis. Ensuring HIPAA compliance and adhering to data security best practices will also be part of your responsibilities. Proficiency with Python 3.8+, R, SQL, and JSON, along with familiarity with data science tools like pandas, numpy, scipy, scikit-learn, spaCy, and NLTK, is required. Experience with ML frameworks including TensorFlow, PyTorch, transformers, and Hugging Face, and visualization tools like matplotlib, seaborn, plotly, Tableau, and Power BI is desirable. Knowledge of AI platforms such as AWS Bedrock, Anthropic Claude, and OpenAI APIs, and experience with cloud services like AWS (SageMaker, S3, Lambda, Bedrock) will be advantageous. Familiarity with research tools like Jupyter notebooks, Git, Docker, and MLflow is also beneficial for this role.
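The pattern-matching-plus-validation approach this posting describes can be sketched with a regex extraction step followed by a validation layer that rejects malformed candidates. The note text, regex, and date format are illustrative assumptions:

```python
# Hedged sketch: extract candidate discharge dates from clinical free text
# with a regex, then validate before accepting. Text and patterns are
# illustrative, not real clinical data.
import re
from datetime import datetime

DATE_RE = re.compile(r"\b(\d{2}/\d{2}/\d{4})\b")

def extract_discharge_date(note):
    """Return the first parseable date after 'discharged', else None."""
    idx = note.lower().find("discharged")
    if idx == -1:
        return None
    for candidate in DATE_RE.findall(note[idx:]):
        try:                    # validation layer: must be a real calendar date
            return datetime.strptime(candidate, "%m/%d/%Y").date().isoformat()
        except ValueError:
            continue
    return None

note = "Discharged 02/30/2024 per note; corrected discharge date 03/14/2024."
extract_discharge_date(note)   # '2024-03-14' — the impossible 02/30 is rejected
```

In an LLM-backed pipeline, the same validation layer would sit downstream of model output, with the regex path serving as the fallback logic the posting mentions.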

Posted 1 week ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Location: Noida. Job Role: Sr. AI/ML Data Scientist. Overview: We are seeking a highly skilled and experienced AI/ML Expert to spearhead the design, development, and deployment of advanced artificial intelligence and machine learning models. This role requires a strategic thinker with a deep understanding of ML algorithms, model optimization, and production-level AI systems. You will guide cross-functional teams, mentor junior data scientists, and help shape the AI roadmap to drive innovation and business impact. This role involves statistical analysis, data modeling, and interpreting large sets of data. We are looking for an AI/ML expert with experience creating AI models for data intelligence companies, who specializes in developing and deploying artificial intelligence and machine learning solutions tailored for data-driven businesses. This expert should possess a strong background in data analysis, statistics, programming, and machine learning algorithms, enabling them to design innovative AI models that can extract valuable insights and patterns from vast amounts of data. About The Role: You will contribute to the design and development of a cutting-edge application powered by large language models (LLMs). This tool will provide market analysis and generate high-quality, data-driven periodic insights. You will play a critical role in building a scalable and intelligent system that integrates structured data, NLP capabilities, and domain-specific knowledge to produce analyst-grade content. Key Responsibilities: Design and develop LLM-based systems for automated market analysis. Build data pipelines to ingest, clean, and structure data from multiple sources (e.g., market feeds, news articles, technical reports, internal datasets). Fine-tune or prompt-engineer LLMs (e.g., GPT-4.5, Llama, Mistral) to generate concise, insightful reports. Collaborate closely with domain experts to integrate industry-specific context and validation into model outputs.
Implement robust evaluation metrics and monitoring systems to ensure quality, relevance, and accuracy of generated insights. Develop and maintain APIs and/or user interfaces to enable analysts or clients to interact with the LLM system. Stay up to date with advancements in the GenAI ecosystem and recommend relevant improvements or integrations. Participate in code reviews, experimentation pipelines, and collaborative research.

Required:
Strong fundamentals in machine learning, deep learning, and natural language processing (NLP).
Proficiency in Python, with hands-on experience using libraries such as NumPy, Pandas, and Matplotlib/Seaborn for data analysis and visualization.
Experience developing applications using LLMs (both closed- and open-source models).
Familiarity with frameworks like Hugging Face Transformers, LangChain, LlamaIndex, etc.
Experience building ML models (e.g., Random Forest, XGBoost, LightGBM, SVMs), along with familiarity in training and validating models.
Practical understanding of deep learning frameworks: TensorFlow or PyTorch.
Knowledge of prompt engineering, Retrieval-Augmented Generation (RAG), and LLM evaluation strategies.
Experience working with REST APIs, data ingestion pipelines, and automation workflows.
Strong analytical thinking, problem-solving skills, and the ability to convert complex technical work into business-relevant insights.

Preferred:
Familiarity with the chemical or energy industry, or prior experience in market research/analyst workflows.
Exposure to frameworks such as OpenAI Agentic SDK, CrewAI, AutoGen, SmolAgent, etc.
Experience deploying ML/LLM solutions to production environments (Docker, CI/CD).
Hands-on experience with vector databases such as FAISS, Weaviate, Pinecone, or ChromaDB.
Experience with dashboarding tools and visualization libraries (e.g., Streamlit, Plotly, Dash, or Tableau).
Exposure to cloud platforms (AWS, GCP, or Azure), including usage of GPU instances and model hosting services.
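The retrieval step of the RAG pipelines this role mentions can be illustrated with bag-of-words cosine similarity: score documents against a query and keep the best match as LLM context. The documents and query are invented for illustration; a production system would use learned embeddings and a vector store such as FAISS or Weaviate:

```python
# Hedged sketch of RAG-style retrieval with bag-of-words cosine similarity.
# Documents and query are illustrative only.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def top_document(query, documents):
    """Return the document most similar to the query."""
    q = Counter(query.lower().split())
    return max(documents, key=lambda d: cosine(q, Counter(d.lower().split())))

docs = [
    "Polyethylene prices rose on tight supply in Asia",
    "Crude oil inventories fell last week",
]
top_document("polyethylene supply prices", docs)
# 'Polyethylene prices rose on tight supply in Asia'
```

The retrieved text would then be placed in the LLM prompt alongside the analyst's question, which is the "augmented generation" half of RAG.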
About ChemAnalyst: ChemAnalyst is a digital platform which keeps a real-time eye on chemicals and petrochemicals market fluctuations, enabling its customers to make wise business decisions. With over 450 chemical products traded globally, we bring detailed market information and pricing data to your fingertips. Our real-time pricing and commentary updates enable users to stay acquainted with new commercial opportunities. Each day, we flash the major happenings around the globe in our news section. Our market analysis section takes it a step further, offering an in-depth evaluation of over 15 parameters including capacity, production, supply, demand gap, and company share, among others. Our team of experts analyses the factors influencing the market and forecasts the market data for up to the next 10 years. We are a trusted source of information for our international clients, ensuring user-friendly and customized deliveries on time. (ref:hirist.tech)

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

karnataka

On-site

This internship offers a unique opportunity to contribute to the development of cutting-edge tools in aviation safety and data analysis, leveraging the power of Machine Learning. Your mission for this internship is to develop an interactive tool for mapping Flight Data Recorder parameters based on Systems and subsystems. To excel in this internship, you will need to become familiarized with the system-wide functionality of Airbus aircraft. Your tasks will include codifying the potentially-affected parameters for a given incident to map the fault propagation tree, conducting research on, and ultimately implementing, the most appropriate Machine Learning algorithm(s) for this particular time-series application, as well as building data visualization and user-interface tools. By the end of this internship, you will have delivered a robust tool that provides an exhaustive list of FDR parameters along with a practical representation of how, and which, systems are impacted with respect to a particular fault. Additionally, you will have provided a first-order proof of concept of the applicability of Machine Learning techniques in this space as demonstrated by a limited-scope use case. Required Skills: Strong programming skills in Python; knowledge of key machine learning libraries (Scikit-learn, TensorFlow) and scientific computing libraries (e.g., SciPy) is an asset. Experience with Machine Learning (ML) for time-series applications, including exposure to supervised and unsupervised learning algorithms, and an understanding of data preprocessing, feature engineering, and data visualization methods. Experience with data visualization libraries (e.g., Matplotlib, Plotly, D3.js). Familiarity with UI/UX design principles for intuitive interfaces would be a plus.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

The role requires you to create intuitive user interfaces and manage edge deployments for AI applications, ensuring seamless functionality across various naval environments and devices. Your responsibilities will include designing and building responsive and secure interfaces, packaging and deploying applications on different platforms, and ensuring usability in network-constrained or offline environments.

You will develop UI interfaces for dashboards, alerts, geo-spatial visualization, and analytics; integrate with AI outputs from the backend; and build Electron-based desktop apps and Flutter-based mobile clients for offline use. Additionally, you will need to ensure local caching, secure data handling, and session management, and implement UI analytics for user interaction tracking. Working closely with end users for field testing and feedback, you will be responsible for compliance with design constraints such as minimal memory footprint, low power consumption, and rapid boot.

The ideal candidate should have a B.Tech/M.Tech in Computer Science or a related field with a strong foundation in user interface design and human-computer interaction. Desired certifications include the Google UX Design Professional Certificate, Adobe Certified Expert in UX Design, and mobile app development certifications.

Core skills and tools required for this role include proficiency in JavaScript, TypeScript, HTML5, CSS3, React, Vue.js, Angular, React Native, Flutter, D3.js, Leaflet, WebSocket, Docker, and Electron. Experience with modern frontend frameworks, mobile app development, responsive design, version control, and collaborative development is essential. Strong communication skills, the ability to mentor junior developers, and experience with micro-frontend architectures are also desired qualifications.

If you are interested in this position, please apply by sending your resume to nk@bluparrot.in and JK@Bluparrot.in.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title And Summary
Analyst, Inclusive Innovation & Analytics, Center for Inclusive Growth

The Center for Inclusive Growth is the social impact hub at Mastercard. The organization seeks to ensure that the benefits of an expanding economy accrue to all segments of society. Through actionable research, impact data science, programmatic grants, stakeholder engagement and global partnerships, the Center advances equitable and sustainable economic growth and financial inclusion around the world. The Center’s work is at the heart of Mastercard’s objective to be a force for good in the world.
Reporting to the Vice President, Inclusive Innovation & Analytics, the Analyst will 1) create and/or scale data, data science, and AI solutions, methodologies, products, and tools to advance inclusive growth and the field of impact data science, 2) work on the execution and implementation of key priorities to advance external and internal data for social strategies, and 3) manage operations to ensure operational excellence across the Inclusive Innovation & Analytics team.

Key Responsibilities

Data Analysis & Insight Generation
- Design, develop, and scale data science and AI solutions, tools, and methodologies to support inclusive growth and impact data science.
- Analyze structured and unstructured datasets to uncover trends, patterns, and actionable insights related to economic inclusion, public policy, and social equity.
- Translate analytical findings into compelling visualizations and dashboards that inform policy, program design, and strategic decision-making.
- Create dashboards, reports, and visualizations that communicate findings to both technical and non-technical audiences.
- Provide data-driven support for convenings involving philanthropy, government, private sector, and civil society partners.

Data Integration & Operationalization
- Assist in building and maintaining data pipelines for ingesting and processing diverse data sources (e.g., open data, text, survey data).
- Ensure data quality, consistency, and compliance with privacy and ethical standards.
- Collaborate with data engineers and AI developers to support backend infrastructure and model deployment.

Team Operations
- Manage team operations, meeting agendas, project management, and strategic follow-ups to ensure alignment with organizational goals.
- Lead internal reporting processes, including the preparation of dashboards, performance metrics, and impact reports.
- Support team budgeting, financial tracking, and process optimization.
- Support grantees and grants management as needed.
- Develop briefs, talking points, and presentation materials for leadership and external engagements.
- Translate strategic objectives into actionable data initiatives and track progress against milestones.
- Coordinate key activities and priorities in the portfolio, working across teams at the Center and the business as applicable to facilitate collaboration and information sharing.
- Support the revamp of the Measurement, Evaluation, and Learning frameworks and workstreams at the Center.
- Provide administrative support as needed.
- Manage ad-hoc projects and events organization.

Qualifications
- Bachelor’s degree in Data Science, Statistics, Computer Science, Public Policy, or a related field.
- 2–4 years of experience in data analysis, preferably in a mission-driven or interdisciplinary setting.
- Strong proficiency in Python and SQL; experience with data visualization tools (e.g., Tableau, Power BI, Looker, Plotly, Seaborn, D3.js).
- Familiarity with unstructured data processing and robust machine learning concepts.
- Excellent communication skills and ability to work across technical and non-technical teams.

Technical Skills & Tools

Data Wrangling & Processing
- Data cleaning, transformation, and normalization techniques
- Pandas, NumPy, Dask, Polars
- Regular expressions, JSON/XML parsing, web scraping (e.g., BeautifulSoup, Scrapy)

Machine Learning & Modeling
- Scikit-learn, XGBoost, LightGBM
- Proficiency in supervised/unsupervised learning, clustering, classification, regression
- Familiarity with LLM workflows and tools like Hugging Face Transformers, LangChain (a plus)

Visualization & Reporting
- Power BI, Tableau, Looker
- Python libraries: Matplotlib, Seaborn, Plotly, Altair
- Dashboarding tools: Streamlit, Dash
- Storytelling with data and stakeholder-ready reporting

Cloud & Collaboration Tools
- Google Cloud Platform (BigQuery, Vertex AI), Microsoft Azure
- Git/GitHub, Jupyter Notebooks, VS Code
- Experience with APIs and data integration tools (e.g., Airflow, dbt)

Ideal Candidate
You are a curious and collaborative analyst who believes in the power of data to drive social change. You’re excited to work with cutting-edge tools while staying grounded in the real-world needs of communities and stakeholders.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard’s security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.

R-253034
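As a minimal illustration of the data cleaning and normalization techniques listed above (stripping currency formatting, handling unparseable values, normalizing text fields), here is a sketch using only the standard library; the records, field names, and helper function are hypothetical, not part of the role's actual codebase.

```python
import json
import re

# Made-up raw survey records with inconsistent formatting
raw = '[{"income": "₹1,200", "region": " Karnataka "}, {"income": "N/A", "region": "haryana"}]'

def clean_record(rec):
    # Drop currency symbols and thousands separators; unparseable -> None
    digits = re.sub(r"[^\d.]", "", rec["income"])
    income = float(digits) if digits else None
    # Trim whitespace and normalize casing on the text field
    region = rec["region"].strip().title()
    return {"income": income, "region": region}

records = [clean_record(r) for r in json.loads(raw)]
print(records)
# [{'income': 1200.0, 'region': 'Karnataka'}, {'income': None, 'region': 'Haryana'}]
```

In practice the same transformations would typically be vectorized with Pandas (`str.replace`, `to_numeric(errors="coerce")`) over full columns rather than applied record by record.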

Posted 1 week ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title And Summary
Analyst, Inclusive Innovation & Analytics, Center for Inclusive Growth

The Center for Inclusive Growth is the social impact hub at Mastercard. The organization seeks to ensure that the benefits of an expanding economy accrue to all segments of society. Through actionable research, impact data science, programmatic grants, stakeholder engagement and global partnerships, the Center advances equitable and sustainable economic growth and financial inclusion around the world. The Center’s work is at the heart of Mastercard’s objective to be a force for good in the world.
Reporting to the Vice President, Inclusive Innovation & Analytics, the Analyst will 1) create and/or scale data, data science, and AI solutions, methodologies, products, and tools to advance inclusive growth and the field of impact data science, 2) work on the execution and implementation of key priorities to advance external and internal data for social strategies, and 3) manage operations to ensure operational excellence across the Inclusive Innovation & Analytics team.

Key Responsibilities

Data Analysis & Insight Generation
- Design, develop, and scale data science and AI solutions, tools, and methodologies to support inclusive growth and impact data science.
- Analyze structured and unstructured datasets to uncover trends, patterns, and actionable insights related to economic inclusion, public policy, and social equity.
- Translate analytical findings into compelling visualizations and dashboards that inform policy, program design, and strategic decision-making.
- Create dashboards, reports, and visualizations that communicate findings to both technical and non-technical audiences.
- Provide data-driven support for convenings involving philanthropy, government, private sector, and civil society partners.

Data Integration & Operationalization
- Assist in building and maintaining data pipelines for ingesting and processing diverse data sources (e.g., open data, text, survey data).
- Ensure data quality, consistency, and compliance with privacy and ethical standards.
- Collaborate with data engineers and AI developers to support backend infrastructure and model deployment.

Team Operations
- Manage team operations, meeting agendas, project management, and strategic follow-ups to ensure alignment with organizational goals.
- Lead internal reporting processes, including the preparation of dashboards, performance metrics, and impact reports.
- Support team budgeting, financial tracking, and process optimization.
- Support grantees and grants management as needed.
- Develop briefs, talking points, and presentation materials for leadership and external engagements.
- Translate strategic objectives into actionable data initiatives and track progress against milestones.
- Coordinate key activities and priorities in the portfolio, working across teams at the Center and the business as applicable to facilitate collaboration and information sharing.
- Support the revamp of the Measurement, Evaluation, and Learning frameworks and workstreams at the Center.
- Provide administrative support as needed.
- Manage ad-hoc projects and events organization.

Qualifications
- Bachelor’s degree in Data Science, Statistics, Computer Science, Public Policy, or a related field.
- 2–4 years of experience in data analysis, preferably in a mission-driven or interdisciplinary setting.
- Strong proficiency in Python and SQL; experience with data visualization tools (e.g., Tableau, Power BI, Looker, Plotly, Seaborn, D3.js).
- Familiarity with unstructured data processing and robust machine learning concepts.
- Excellent communication skills and ability to work across technical and non-technical teams.

Technical Skills & Tools

Data Wrangling & Processing
- Data cleaning, transformation, and normalization techniques
- Pandas, NumPy, Dask, Polars
- Regular expressions, JSON/XML parsing, web scraping (e.g., BeautifulSoup, Scrapy)

Machine Learning & Modeling
- Scikit-learn, XGBoost, LightGBM
- Proficiency in supervised/unsupervised learning, clustering, classification, regression
- Familiarity with LLM workflows and tools like Hugging Face Transformers, LangChain (a plus)

Visualization & Reporting
- Power BI, Tableau, Looker
- Python libraries: Matplotlib, Seaborn, Plotly, Altair
- Dashboarding tools: Streamlit, Dash
- Storytelling with data and stakeholder-ready reporting

Cloud & Collaboration Tools
- Google Cloud Platform (BigQuery, Vertex AI), Microsoft Azure
- Git/GitHub, Jupyter Notebooks, VS Code
- Experience with APIs and data integration tools (e.g., Airflow, dbt)

Ideal Candidate
You are a curious and collaborative analyst who believes in the power of data to drive social change. You’re excited to work with cutting-edge tools while staying grounded in the real-world needs of communities and stakeholders.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard’s security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.

R-253034

Posted 1 week ago

Apply

170.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary
Our Data Scientists use their domain expertise to identify where AI can speed up operations, reduce costs, increase productivity, and personalise the client experience, and then build and deploy AI models. We’re looking for brilliant minds who are intellectually curious, analytical, and willing to learn, with a numerate background in AI, mathematics, engineering, the sciences, or related disciplines.

About Our Architecture, Technology Solutions And App Development Team
Want to be part of a team that focuses on what we can do instead of what’s already been done? This is your opportunity. From blockchain to AWS, deep learning, and AI, you will work with the tools and platforms that you need to be successful. You’ll have the freedom to choose the different ways you want to grow, whether that’s by gaining diverse skills on a new project or deepening your expertise on innovative tech. Join us and you’ll find diverse paths, projects and people that inspire you to think outside the box and change the way people everywhere experience our products and services.

About Our Technology And Operations Team
Our Technology & Operations (T&O) team is the powerhouse for the Bank. We aim to go further, faster, to ensure we are agile and ready for tomorrow, today. Our diverse network enables us to innovate and build banking solutions that support communities to prosper. We are a place where talented people are encouraged to grow, learn, and thrive, to drive their own career journeys, to reach their full potential. When you work with us, you’re protecting the reputation and legacy of a 170-year organisation and building on it. We are driven by progress and continuously evolving.
Key Responsibilities
- Partner with stakeholders, including our sales team, relationship managers, traders, and supervision, to understand where AI can optimise our business.
- Define AI business requirements with tangible, measurable and commercial success criteria.
- Mine large sets of data to uncover trends, insights, and opportunities from multiple internal and external data sources.
- Enhance data collection methods through augmentation, normalisation, and outlier removal.
- Recommend appropriate AI solutions and train AI models.
- Deliver self-service dashboards to report on business use cases and success criteria.
- Use data visualisation for predicted outputs and insights, and to translate complex analytics into business language.
- Embed a framework of continuous monitoring for deployed models and solutions, for continuous enhancement.

Skills And Experience
- Predictive models, machine learning algorithms and large language models.
- Data visualisation tools, including Matplotlib, Plotly, Power BI and Tableau.
- Natural Language Processing (NLP).
- Python programming.
- Financial mathematics.
- Time series analysis.
- Data governance and management.
- AI regulatory compliance.
- AI techniques including quantum machine learning.

Qualifications
- Experience and education in AI, Mathematics, Engineering, the Sciences, or a related discipline.

About Standard Chartered
We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us.
Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion.

Together we:
- Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do
- Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well
- Are better together: we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term

What We Offer
In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing.
- Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations.
- Time off including annual leave, parental/maternity (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, which combine to a minimum of 30 days.
- Flexible working options based around home and office locations, with flexible working patterns.
- Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, a global Employee Assistance Programme, sick leave, mental health first-aiders and a range of self-help toolkits.
- A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning.
- Being part of an inclusive and values-driven organisation, one that embraces and celebrates our unique diversity across our teams, business functions and geographies, where everyone feels respected and can realise their full potential.

Posted 1 week ago

Apply

6.0 years

5 - 15 Lacs

India

On-site

Role: Lead Python/AI Developer
Experience: 6/6+ Years
Location: Ahmedabad (Gujarat)

Roles and Responsibilities:
- Helping the Python/AI team in building Python/AI solution architectures leveraging source technologies.
- Driving technical discussions with clients along with Project Managers.
- Creating an effort-estimation matrix of solutions/deliverables for the delivery team.
- Implementing AI solutions and architectures, including data pre-processing, feature engineering, model deployment, compatibility with downstream tasks, and edge/error handling.
- Collaborating with cross-functional teams, such as machine learning engineers, software engineers, and product managers, to identify business needs and provide technical guidance.
- Mentoring and coaching junior Python/AI/ML engineers.
- Sharing knowledge through technical presentations.
- Implementing new Python/AI features with high-quality coding standards.

Must Have:
- B.Tech/B.E. in Computer Science, IT, Data Science, ML or a related field.
- Strong proficiency in the Python programming language.
- Strong verbal and written communication skills with analytics and problem-solving ability.
- Proficiency in debugging and exception handling.
- Professional experience in developing and operating AI systems in production.
- Hands-on, strong programming skills with experience in Python, in particular modern ML & NLP frameworks (scikit-learn, PyTorch, TensorFlow, Hugging Face, SpaCy, Facebook AI XLM/mBERT, etc.).
- Hands-on experience with AWS services such as EC2, S3, Lambda, and AWS SageMaker.
- Experience with a collaborative development workflow: version control (we use GitHub), code reviews, DevOps (including automated testing), CI/CD.
- Comfort with essential tools & libraries: Git, Docker, GitHub, Postman, NumPy, SciPy, Matplotlib, Seaborn or Plotly, Pandas.
- Prior experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB).
- Experience working in Agile methodology.

Good To Have:
- A Master’s degree or Ph.D. in Computer Science, Machine Learning, or a related quantitative field.
- Python frameworks (Django/Flask/FastAPI) & API integration.
- AI/ML/DL/MLOps certification from AWS.
- Experience with the OpenAI API.
- Proficiency in the Japanese language.

Job Types: Full-time, Permanent
Pay: ₹500,000.00 - ₹1,500,000.00 per year
Benefits: Provident Fund
Work Location: In person
Expected Start Date: 14/08/2025
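The debugging and exception-handling proficiency this posting asks for often takes the form of defensive wrappers around flaky external calls, such as a model-serving endpoint. The sketch below is a minimal, hypothetical example: the endpoint, function names, and retry settings are all invented for illustration.

```python
import time
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("inference")

def with_retries(fn, attempts=3, backoff=0.1):
    """Call fn(); on a transient failure, retry with exponential backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except (TimeoutError, ConnectionError) as exc:
            log.warning("attempt %d failed: %s", attempt, exc)
            if attempt == attempts:
                raise  # exhausted retries: surface the error to the caller
            time.sleep(backoff * 2 ** (attempt - 1))

# Simulated flaky model endpoint: fails twice, then succeeds
calls = {"n": 0}
def flaky_model_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("model endpoint busy")
    return {"label": "positive"}

print(with_retries(flaky_model_call))  # {'label': 'positive'} after two retries
```

Catching only transient error types (rather than a bare `except`) keeps genuine bugs visible while smoothing over infrastructure hiccups.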

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Key Responsibilities
• Build Gen AI-enabled solutions using online and offline LLMs, SLMs and TLMs tailored to domain-specific problems.
• Deploy agentic AI workflows and use cases using frameworks like LangGraph, Crew AI etc.
• Apply NLP, predictive modelling and optimization techniques to develop scalable machine learning solutions.
• Integrate enterprise knowledge bases using Vector Databases and Retrieval Augmented Generation (RAG).
• Apply advanced analytics to address complex challenges in Healthcare, BFSI and Manufacturing domains.
• Deliver embedded analytics within business systems to drive real-time operational insights.

Required Skills & Experience
• 3–5 years of experience in applied Data Science or AI roles.
• Experience working in any one of the following domains: BFSI, Healthcare/Health Sciences, Manufacturing or Utilities.
• Proficiency in Python, with hands-on experience in libraries such as scikit-learn and TensorFlow.
• Practical experience with Gen AI (LLMs, RAG, vector databases), NLP and building scalable ML solutions.
• Experience with time series forecasting, A/B testing, Bayesian methods and hypothesis testing.
• Strong skills in working with structured and unstructured data, including advanced feature engineering.
• Familiarity with analytics maturity models and the development of Analytics Centres of Excellence (CoEs).
• Exposure to cloud-based ML platforms like Azure ML, AWS SageMaker or Google Vertex AI.
• Data visualization using Matplotlib, Seaborn, Plotly; experience with Power BI is a plus.

What We Look for (Values & Behaviours)
• AI-First Thinking – Passion for leveraging AI to solve business problems.
• Data-Driven Mindset – Ability to extract meaningful insights from complex data.
• Collaboration & Agility – Comfortable working in cross-functional teams with a fast-paced mindset.
• Problem-Solving – Think beyond the obvious to unlock AI-driven opportunities.
• Business Impact – Focus on measurable outcomes and real-world adoption of AI.
• Continuous Learning – Stay updated with the latest AI trends, research and best practices.
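The RAG integration mentioned in the responsibilities above reduces, at its core, to ranking stored document embeddings by similarity to a query embedding and passing the top matches to an LLM as context. The toy sketch below uses hand-made 3-dimensional vectors and made-up documents; real systems use model-generated embeddings and a vector database rather than an in-memory list.

```python
from math import sqrt

# Toy "vector store": (embedding, document chunk) pairs, invented for illustration
store = [
    ([0.9, 0.1, 0.0], "Claims over 50k require manual review."),
    ([0.1, 0.8, 0.1], "Factory sensors report telemetry every 5 seconds."),
    ([0.0, 0.2, 0.9], "Patients must fast 8 hours before the test."),
]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def retrieve(query_vec, k=1):
    """Rank stored chunks by similarity; the top-k become the LLM prompt context."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[0]), reverse=True)
    return [text for _, text in ranked[:k]]

print(retrieve([0.85, 0.15, 0.0]))  # ['Claims over 50k require manual review.']
```

The retrieved chunks would then be interpolated into the LLM prompt, grounding the generated answer in enterprise knowledge instead of the model's parametric memory.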

Posted 1 week ago

Apply

6.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Role: Lead Python/AI Developer
Experience: 6/6+ Years
Location: Ahmedabad (Gujarat)

Roles and Responsibilities:
• Helping the Python/AI team in building Python/AI solution architectures leveraging source technologies.
• Driving technical discussions with clients along with Project Managers.
• Creating an effort-estimation matrix of solutions/deliverables for the delivery team.
• Implementing AI solutions and architectures, including data pre-processing, feature engineering, model deployment, compatibility with downstream tasks, and edge/error handling.
• Collaborating with cross-functional teams, such as machine learning engineers, software engineers, and product managers, to identify business needs and provide technical guidance.
• Mentoring and coaching junior Python/AI/ML engineers.
• Sharing knowledge through technical presentations.
• Implementing new Python/AI features with high-quality coding standards.

Must Have:
• B.Tech/B.E. in Computer Science, IT, Data Science, ML or a related field.
• Strong proficiency in the Python programming language.
• Strong verbal and written communication skills with analytics and problem-solving ability.
• Proficiency in debugging and exception handling.
• Professional experience in developing and operating AI systems in production.
• Hands-on, strong programming skills with experience in Python, in particular modern ML & NLP frameworks (scikit-learn, PyTorch, TensorFlow, Hugging Face, SpaCy, Facebook AI XLM/mBERT, etc.).
• Hands-on experience with AWS services such as EC2, S3, Lambda, and AWS SageMaker.
• Experience with a collaborative development workflow: version control (we use GitHub), code reviews, DevOps (including automated testing), CI/CD.
• Comfort with essential tools & libraries: Git, Docker, GitHub, Postman, NumPy, SciPy, Matplotlib, Seaborn or Plotly, Pandas.
• Prior experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB).
• Experience working in Agile methodology.

Good To Have:
• A Master’s degree or Ph.D. in Computer Science, Machine Learning, or a related quantitative field.
• Python frameworks (Django/Flask/FastAPI) & API integration.
• AI/ML/DL/MLOps certification from AWS.
• Experience with the OpenAI API.
• Proficiency in the Japanese language.

Posted 1 week ago

Apply
cta

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies