2243 NumPy Jobs - Page 47

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

3.0 years

0 Lacs

India

Remote

Halo believes in innovation by inclusion to solve digital problems. As an international agency of over 200 people specializing in interactive media strategy and development, we embrace equity and empowerment in a serious way. Our interdisciplinary teams of unique designers, developers and entrepreneurial minds with a variety of backgrounds, viewpoints, and skills connect to solve business challenges of every shape and size. We empathize to form deep, meaningful relationships with our clients so they can do the same with their audience. Working at Halo feels like belonging. Learn more about our philosophy, benefits, and team at https://halopowered.com/

As an AI Architect, you will lead the design of scalable, secure, and modern technology solutions, leveraging artificial intelligence, cloud platforms, and microservices, while ensuring alignment with AI governance principles, agile delivery, and platform modernization strategies. As a Data Scientist, you'll be part of a multidisciplinary team applying advanced analytics, machine learning, and generative AI to solve real-world problems across our consulting, health, wealth, and career businesses. You will collaborate closely with engineering, product, and business stakeholders to develop scalable models, design intelligent pipelines, and influence data-driven decision-making across the enterprise.

Requirements
- Design, develop, and deploy robust machine learning models and data pipelines that support AI-enabled applications
- Apply exploratory data analysis (EDA) and feature engineering techniques to extract insights and improve model performance
- Collaborate with cross-functional teams to translate business problems into analytical use cases
- Contribute to the full machine learning lifecycle: from data preparation and model experimentation to deployment and monitoring
- Work with structured and unstructured data, including text, to develop NLP and generative AI solutions
- Define and enforce best practices in model validation, reproducibility, documentation, and versioning
- Partner with engineering to integrate models into production systems using CI/CD pipelines and cloud-native services
- Stay current with industry trends, emerging techniques (e.g., RAG, LLMs, embeddings), and relevant tools

Required Skills & Qualifications
- 3+ years of experience in Data Science, Machine Learning, or Applied AI roles
- Proficiency in Python (preferred) and a strong grasp of pandas, NumPy, and scikit-learn
- Skilled in data querying, manipulation, and pipeline development using SQL and modern ETL frameworks
- Experience working with Databricks, including notebooks, MLflow, Delta Lake, and job orchestration
- Experience with Git-based workflows and Agile methodologies
- Strong analytical thinking, problem-solving skills, and communication abilities
- Exposure to Generative AI, LLMs, prompt engineering, or vector-based search
- Hands-on experience with cloud platforms (AWS, Azure, or GCP) and deploying models in scalable environments
- Knowledge of data versioning, model registry, and ML lifecycle tools (e.g., MLflow, DVC, SageMaker, Databricks, or Vertex AI)
- Experience working with visualization tools like Tableau, Power BI, or Qlik
- Degree in Computer Science, Data Science, Applied Mathematics, or a related field

Benefits
- 100% remote work
- Salary in USD
- Work on challenging projects for the U.S.
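
A minimal sketch of the pandas/NumPy/scikit-learn workflow this posting describes (light feature engineering plus a model in one reproducible pipeline). The file name, column names, and model choice are illustrative assumptions, not details from the listing:

```python
import pandas as pd
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical dataset and target column; placeholders only.
df = pd.read_csv("customers.csv")
y = df["churned"]
X = df.drop(columns=["churned"])

numeric = X.select_dtypes(include=np.number).columns
categorical = X.select_dtypes(exclude=np.number).columns

# Preprocessing and model bundled together for reproducibility.
pipe = Pipeline([
    ("prep", ColumnTransformer([
        ("num", StandardScaler(), numeric),
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ])),
    ("model", GradientBoostingClassifier(random_state=0)),
])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)
pipe.fit(X_train, y_train)
print(classification_report(y_test, pipe.predict(X_test)))
```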

Posted 2 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – AI and Data – Statistical Modeler – Senior

As part of our EY GDS AI and Data team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Healthcare, Retail, Manufacturing and Auto, Supply Chain, and Finance.

Technical Skills
- Statistical programming languages: Python, R
- Libraries and frameworks: Pandas, NumPy, scikit-learn, StatsModels, Tidyverse, caret
- Data manipulation tools: SQL, Excel
- Data visualization tools: Matplotlib, Seaborn, ggplot2
- Machine learning techniques: supervised and unsupervised learning, model evaluation (cross-validation, ROC curves)
- 5-7 years of experience in building statistical forecast models for the pharma industry
- Deep understanding of patient flows and treatment journeys across both Onc and Non-Onc TAs

What We Look For
A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working At EY Offers
At EY, we’re dedicated to helping our clients, from startups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that’s right for you

About EY
As a global leader in assurance, tax, transaction and advisory services, we’re using the finance products, expertise and systems we’ve developed to build a better working world. That starts with a culture that believes in giving you the training, opportunities and creative freedom to make things better. Whenever you join, however long you stay, the exceptional EY experience lasts a lifetime.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
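
The model-evaluation skills listed above (cross-validation, ROC curves) can be sketched with scikit-learn; the data below is synthetic, standing in for real forecasting inputs, and a recent scikit-learn is assumed for RocCurveDisplay:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import RocCurveDisplay
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split

# Synthetic, imbalanced stand-in data.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8, 0.2], random_state=42)

model = LogisticRegression(max_iter=1000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"5-fold ROC AUC: {scores.mean():.3f} +/- {scores.std():.3f}")

# ROC curve on a held-out split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42, stratify=y)
RocCurveDisplay.from_estimator(model.fit(X_tr, y_tr), X_te, y_te)
plt.show()
```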

Posted 2 weeks ago

Apply

3.0 - 6.0 years

5 - 8 Lacs

Pune

Work from Office

Tool Development:
- Design, develop, and test tool prototypes to meet project specifications.
- Write and maintain high-quality code in Python, CAPL, Java, and TypeScript.
- Implement and integrate frameworks and middleware such as Python Pandas, NumPy, React, JavaFX, JavaScript frameworks (e.g., Vue), Vector CANoe, and CodeMeter.

Collaboration and Integration:
- Work closely with team members to ensure seamless integration and functionality of tools.
- Participate in code reviews and provide constructive feedback to peers.
- Collaborate with other departments to understand their tool requirements and provide effective solutions.
- Work within a small team of 3 members.
- Collaborate with roles such as Tool Developer and Tool Tester.

CI/CD and DevOps Practices:
- Implement and maintain CI/CD pipelines using tools like Git, HTTP, Jenkins, SVN, GitLab, Kubernetes, Docker, AWS, S3, and Artifactory.
- Ensure the tools are integrated into the CI/CD pipeline for automated testing and deployment.
- Monitor and optimize the performance of tools in the development and production environments.

Project Management:
- Assign tasks and responsibilities according to different V-Modell phases.
- Track project progress and ensure timely delivery of tool prototypes.
- Document development processes, tool functionalities, and user guides.

Continuous Improvement:
- Stay updated with the latest industry trends and technologies.
- Continuously improve tools based on user feedback and technological advancements.
- Participate in training and professional development to enhance skills and knowledge.

Quality Assurance:
- Conduct thorough testing of tools to ensure they meet quality standards.
- Identify and fix bugs and issues in a timely manner.
- Ensure tools are reliable, efficient, and user-friendly.

Support and Maintenance:
- Provide ongoing support and maintenance for developed tools.
- Troubleshoot and resolve issues reported by users.
- Update tools to accommodate new requirements and technologies.

Programming Languages:
- Python: Extensive experience in writing and maintaining Python code, with a strong understanding of libraries such as Pandas and NumPy for data manipulation and analysis.
- CAPL: Proficient in using CAPL (CAN Access Programming Language) for developing and testing automotive communication protocols.
- Java: Solid experience in Java programming, including the use of JavaFX for building rich client applications.
- TypeScript: Skilled in writing and maintaining TypeScript code, with a good grasp of its features and benefits over JavaScript.

Frameworks and Middleware:
- Python Pandas and NumPy: In-depth knowledge of these libraries for data analysis and manipulation.
- React: Experience in building user interfaces using React, with an understanding of its component-based architecture.
- JavaFX: Proficient in using JavaFX for creating desktop applications with rich graphical interfaces.
- JavaScript frameworks (e.g., Vue): Familiarity with various JavaScript frameworks, particularly Vue, for building interactive web applications.
- Vector CANoe: Experience with Vector CANoe for simulation, testing, and analysis of automotive networks.
- CodeMeter: Knowledge of CodeMeter for software license management and protection.

Infrastructure (CI/CD, DevOps):
- Git: Proficient in using Git for version control, including branching, merging, and resolving conflicts.
- HTTP: Understanding of HTTP protocols and their application in web development and API integration.
- Jenkins: Experience in setting up and maintaining CI/CD pipelines using Jenkins.
- SVN: Familiarity with SVN (Subversion) for version control.
- GitLab: Knowledge of GitLab for source code management and CI/CD.
- Kubernetes: Experience in deploying and managing containerized applications using Kubernetes.
- Docker: Proficient in using Docker for containerization and orchestration of applications.
- AWS: Understanding of AWS services and their application in cloud computing.
- S3: Experience with Amazon S3 for scalable storage solutions.
- Artifactory: Knowledge of Artifactory for managing binary artifacts and dependencies.

Domain Expertise:
- Tool Development: Specialized knowledge in developing tools for various applications, with a focus on creating efficient and user-friendly solutions.
- V-Modell Phases: Experience in working through different phases of the V-Modell, including requirements analysis, design, implementation, testing, and maintenance.

Project Management:
- Task Assignment: Ability to assign tasks and responsibilities according to project phases and team capabilities.
- Progress Tracking: Experience in tracking project progress and ensuring timely delivery of milestones.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Hyderabad

Work from Office

About Us: Soul AI is a pioneering company founded by IIT Bombay and IIM Ahmedabad alumni, with a strong founding team from IITs, NITs, and BITS. We specialize in delivering high-quality human-curated data, AI-first scaled operations services, and more. Based in SF and Hyderabad, we are a young, fast-moving team on a mission to build AI for Good, driving innovation and positive societal impact. We are seeking Python AI Trainers with a minimum of 1 year of experience to join us in Hyderabad and contribute to impactful AI training projects.

Key Responsibilities:
- Develop, review, and refine Python code snippets, explanations, and logic-based tasks for AI training
- Train and fine-tune LLMs using RLHF techniques
- Ensure code quality, correctness, and clarity in Python-related datasets
- Evaluate and improve AI-generated Python outputs
- Debug and correct errors in AI-driven programming responses
- Collaborate with teams to enhance AI's understanding of Python concepts and best practices

Required Qualifications:
- 1+ year of experience in Python-related fields (software development, teaching, data science, AI training, etc.)
- Strong coding and analytical skills
- Proficiency in core Python concepts (data types, functions, loops, OOP, file handling, etc.)
- Ability to critically analyze AI-generated code and provide constructive feedback
- Familiarity with libraries like NumPy and Pandas, or algorithmic thinking, is a plus

Why Join Us:
- Work in a competitive environment
- Hands-on experience working on real-world AI training tasks
- Competitive stipend (Rs 25-35k)
- Shape the future of AI with Soul AI!
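
For reference, the core Python concepts the posting names (data types, functions, loops, OOP, file handling) fit into a small self-contained exercise like the sketch below; the file format, class, and names are made up for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Submission:
    """A learner's graded submission (illustrative trainer exercise)."""
    learner: str
    scores: list = field(default_factory=list)

    def average(self) -> float:
        return sum(self.scores) / len(self.scores) if self.scores else 0.0

def load_scores(path: str) -> list:
    """File handling + loops: parse 'name,score1,score2,...' lines."""
    submissions = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            name, *raw = line.strip().split(",")
            submissions.append(Submission(name, [float(s) for s in raw]))
    return submissions

if __name__ == "__main__":
    for sub in load_scores("scores.csv"):  # hypothetical input file
        print(f"{sub.learner}: {sub.average():.1f}")
```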

Posted 2 weeks ago

Apply

1.0 - 4.0 years

14 - 19 Lacs

Mumbai

Work from Office

Overview
Development of Scalable and Efficient AI-Driven Solutions: Create AI solutions that address complex business challenges. Ensure solutions are scalable to handle large data volumes and efficient in performance.
Development and Maintenance of Index Data Platforms: Build and sustain platforms that generate and manage Index data. Integrate data from various sources, maintain data accuracy, and ensure reliability.
Development of Data Science-Based Solutions for Quality Checks of Data Points: Ensure the accuracy and reliability of delivered data through advanced data science techniques. Implement metrics, detect anomalies, and validate data points to maintain high data quality.

Responsibilities
1. Development of Scalable and Efficient AI-Driven Solutions
- Create AI/ML solutions that address complex business challenges.
- Understand business use cases and identify where and how AI can add value.
- Design, train, and validate machine learning models tailored to specific business needs.
- Design solutions leveraging generative AI and LLMs.
- Build cost-effective, scalable, and high-quality data solutions which will be used for critical Index products.
- Ensure solutions can handle large volumes of data and scale with business growth.
- Optimize models and algorithms for performance and resource utilization.
2. Development of AI and Data Science-Based Solutions for Quality Checks of Data Points Delivered Downstream
- Ensure the accuracy and reliability of data through advanced data science techniques.
- Define and implement metrics to assess data quality.
- Develop models to detect anomalies and inconsistencies in data.
- Create automated processes to validate data points against predefined standards.
- Continuously refine quality check methods based on feedback and new insights.

Qualifications
Artificial Intelligence (AI) and Machine Learning (ML): Understanding of machine learning algorithms, deep learning, neural networks, and reinforcement learning. Understanding of and experience with generative AI. Proficiency in working with large language models like Gemini, GPT, BERT, and their variants. Experience in fine-tuning and deploying these models for various applications. Familiarity with frameworks like TensorFlow, PyTorch, and Keras.
Programming Languages:
- Python: Proficiency in Python for data manipulation, model development, and automation. Experience with libraries such as NumPy, pandas, scikit-learn, and matplotlib.
- Java (good to have): Strong skills in Java for building scalable and efficient applications. Knowledge of Java frameworks like Spring.
- Oracle (good to have): Expertise in Oracle databases, including SQL and PL/SQL. Experience in database design, optimization, and management.
Data Engineering and Cloud Solutions: Proficiency in cloud platforms like Azure or Google Cloud. Leveraging cloud services for flexibility, scalability, and cost-efficiency. Experience with data warehousing solutions and other cloud-based data storage solutions. Experience in Kubernetes and service- or event-driven architectures.

What we offer you
Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing. Flexible working arrangements, advanced technology, and collaborative workspaces. A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results. A global network of talented colleagues, who inspire, support, and share their expertise to innovate and deliver for our clients.
Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro and tailored learning opportunities for ongoing skills development. Multi-directional career paths that offer professional growth and development through new challenges, internal mobility and expanded roles. We actively nurture an environment that builds a sense of inclusion belonging and connection, including eight Employee Resource Groups. All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women’s Leadership Forum. At MSCI we are passionate about what we do, and we are inspired by our purpose – to power better investment decisions. You’ll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards and perform beyond expectations for yourself, our clients, and our industry. MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process. MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries. To all recruitment agencies MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes. Note on recruitment scams We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try and elicit personal information from job seekers. Read our full note on careers.msci.com
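
A hedged sketch of the data-quality checks this posting describes (anomaly detection plus rule-based validation of data points), using the pandas and scikit-learn libraries listed in the qualifications; the columns, thresholds, and injected errors are illustrative assumptions:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical index data points; column names are placeholders.
rng = np.random.default_rng(0)
points = pd.DataFrame({
    "price": rng.normal(100, 5, 1000),
    "volume": rng.lognormal(10, 1, 1000),
})
points.loc[::250, "price"] *= 3  # inject a few bad values for the demo

# Model-based anomaly detection.
detector = IsolationForest(contamination=0.01, random_state=0)
points["anomaly"] = detector.fit_predict(points[["price", "volume"]]) == -1

# Simple rule-based validation alongside the model (assumed valid range).
points["out_of_range"] = ~points["price"].between(1, 500)

flagged = points[points["anomaly"] | points["out_of_range"]]
print(f"{len(flagged)} data points flagged for review")
```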

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

New Delhi, Delhi, India

Remote

Overview Esri is the world leader in geographic information systems (GIS) and developer of ArcGIS, the leading mapping and analytics software used in 75 percent of Fortune 500 companies. At the Esri R&D Center-New Delhi, we are applying cutting-edge AI and deep learning techniques to revolutionize geospatial analysis and derive insight from imagery and location data. We are passionate about applying data science and artificial intelligence to solve some of the world’s biggest challenges. Our team develops tools, APIs, and AI models for geospatial analysts and data scientists, enabling them to leverage the latest research in spatial data science, AI and geospatial deep learning. As a Data Scientist, you will develop deep learning models using libraries such as PyTorch and create APIs and tools for training and deploying them on satellite imagery. If you are passionate about deep learning applied to remote sensing and GIS, developing AI and deep learning models, and love maps or geospatial datasets/imagery, this is the place to be! Responsibilities Develop tools, APIs and pretrained models for geospatial AI Integrate ArcGIS with popular deep learning libraries such as PyTorch Develop APIs and model architectures for computer vision and deep learning applied to geospatial imagery Author and maintain geospatial data science samples using ArcGIS and machine learning/deep learning libraries Curate and pre/post-process data for deep learning models and transform it into geospatial information Perform comparative studies of various deep learning model architectures Requirements 2 to 6 years of experience with Python, in data science and deep learning Self-learner with coursework in and extensive knowledge of machine learning and deep learning Experience with Python machine learning and deep learning libraries such as PyTorch, Scikit-learn, NumPy, Pandas Expertise in one or more of the following areas: Traditional and deep learning-based computer vision techniques with the ability to develop deep learning models for computer vision tasks (image classification, object detection, semantic and instance segmentation, GANs, super-resolution, image inpainting, and more) Convolutional neural networks such as VGG, ResNet, Faster R-CNN, Mask R-CNN, and others Transformer models applied to computer vision Expertise in 3D deep learning with Point Clouds, meshes, or Voxels with the ability to develop 3D geospatial deep learning models, such as PointCNN, MeshCNN, and more Experience in data visualization in Jupyter Notebooks using matplotlib and other libraries Experience with hyperparameter-tuning and training models to a high level of accuracy Bachelor's in computer science, engineering, or related disciplines from IITs and other top-tier engineering colleges Existing work authorization for India Recommended Qualifications Experience applying deep learning to satellite or medical imagery or geospatial datasets Familiarity with ArcGIS suite of products and concepts of GIS About Esri At Esri, diversity is more than just a word on a map. When employees of different experiences, perspectives, backgrounds, and cultures come together, we are more innovative and ultimately a better place to work. We believe in having a diverse workforce that is unified under our mission of creating positive global change. We understand that diversity, equity, and inclusion is not a destination but an ongoing process. 
We are committed to the continuation of learning, growing, and changing our workplace so every employee can contribute to their life’s best work. Our commitment to these principles extends to the global communities we serve by creating positive change with GIS technology. For more information on Esri’s Racial Equity and Social Justice initiatives, please visit our website here. If you don’t meet all of the preferred qualifications for this position, we encourage you to still apply! Esri is an equal opportunity employer (EOE) and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law. If you need reasonable accommodation for any part of the employment process, please email askcareers@esri.com and let us know the nature of your request and your contact information. Please note that only those inquiries concerning a request for reasonable accommodation will be responded to from this e-mail address. Esri Privacy: Esri takes our responsibility to protect your privacy seriously. We are committed to respecting your privacy by providing transparency in how we acquire and use your information, giving you control of your information and preferences, and holding ourselves to the highest national and international standards, including CCPA and GDPR compliance. Requisition ID: 2025-2393
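
A minimal sketch of the PyTorch model work the posting describes, fine-tuning a torchvision backbone for an imagery classification task; the class count, the random tensors standing in for image tiles, and the ResNet choice are assumptions (a recent torchvision is assumed for the weights API):

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical setup: 5 land-cover classes; real work would use geospatial image loaders.
num_classes = 5
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, num_classes)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One illustrative training step on a random batch standing in for image tiles.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"batch loss: {loss.item():.4f}")
```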

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, and provide regular support and guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to maximize and build creative solutions.

Preferred Education
Master's Degree

Required Technical and Professional Expertise
- Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
- Big data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
- Data engineering skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
- Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
- Data processing frameworks: knowledge of data processing libraries such as Pandas and NumPy.
- SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
- Cloud platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred Technical and Professional Experience
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: detection and prevention tools for company products and platform, and customer-facing
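
The PySpark skills called out above typically look like this minimal batch-ETL sketch; the input path, schema, and aggregation are hypothetical, not from the posting:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Hypothetical input path and columns; adjust to the actual source.
orders = spark.read.option("header", True).csv("s3://bucket/orders.csv")

daily = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)                       # drop invalid rows
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"),
         F.count("*").alias("orders"))
)

# Write a curated, columnar output for downstream analysis.
daily.write.mode("overwrite").parquet("s3://bucket/curated/daily_revenue")
spark.stop()
```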

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Who are we? Prismforce (www.prismforce.com) is a Vertical SaaS product company solving critical challenges in the Talent Supply Chain faced by all Technology, R&D/ Engineering and IT Services companies globally. The product suite is powered by multiple artificial intelligence (AI/ ML) engines designed to accelerate business impact by digitizing core vertical workflows resulting in flexibility in operations, speed in decision making and much improved growth and profitability. Prismforce aspires to be the #1 industry cloud/ SaaS platform for tech services and tech talent organizations. What makes us different? We hold the first mover advantage in the Vertical SaaS space, specifically designed to tackle the Talent Supply Chain challenges of the IT services industry. Our innovative solution sets us apart, making us the go-to partner for IT services companies seeking to optimize their talent supply chain challenges. We are building distinctive products with futuristic features and clients have preferred us over traditional ERP and other point solutions. We are supported and mentored by an advisory board composed of ex CXOs from top Global IT companies and industry thought leaders. Our founding team comprises deep domain and tech and product experts who are Ex- McKinsey, Deloitte, Amazon, Infosys , TCS, Uber alums with a passion to reimagine this space with distinctive products. We've experienced rapid growth, expanding to a strong team of 150+ members across Mumbai/Pune/Bangalore & Pune locations. We are a Series A Sequoia funded startup with global IT companies as our clients in our second year of operations. Job Description Role: Data Analyst part of AIML Team Location: Mumbai/Bangalore/Pune/Kolkata Responsibilities Identify relevant data sources and combine multiple data streams to derive business insights. Automate data collection, cleaning, and reporting processes. Pre-process structured and unstructured datasets to make them analysis-ready. Work with large datasets to support business decision-making through insightful analytics. Build dashboards, reports, and visualizations to track KPIs and performance metrics. Collaborate with cross-functional teams including Product, Marketing, and Engineering to understand business needs and translate them into analytical solutions. Perform ad hoc and deep-dive analyses to support key strategic initiatives. Conduct A/B testing and cohort analysis to identify trends and optimize product performance. Assist in building scalable data pipelines and data models for regular reporting needs. Participate in problem-solving sessions and present data-driven recommendations to stakeholders. Requirements Bachelor's degree in a quantitative field such as Computer Science, Statistics, Mathematics, Engineering, or related disciplines. 2–4 years of hands-on experience in a data analytics/data analysis role. Proficiency in SQL for data extraction, manipulation, and transformation. Solid understanding of Python for data analysis and automation (Pandas, NumPy, etc.). Strong Excel skills including formulas, pivot tables, and data visualization. Ability to work with large datasets and derive actionable insights. Excellent analytical and problem-solving abilities. Strong communication skills to present insights clearly to both technical and non-technical stakeholders. Self-starter with the ability to work in a fast-paced and collaborative environment. 
Visualization tool knowledge is preferred – Tableau, Power BI, MicroStrategy, Looker. Good to have: experience with Mixpanel or other product analytics tools.

Required Skills
SQL, Python, Excel, Data Analysis, Data Cleaning, Data Visualization, Business Intelligence, Dashboarding, Reporting, Problem Solving, Stakeholder Management, Communication Skills, Product Analytics, A/B Testing, Cohort Analysis, Data Pipeline, Data Modeling, Mixpanel (good to have).
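
A small illustration of the cohort analysis mentioned in the requirements, done with pandas; the event-log columns (user_id, signup_time, event_time) are assumed for the example:

```python
import pandas as pd

# Hypothetical product-event log; column names are placeholders.
events = pd.read_csv("events.csv", parse_dates=["event_time", "signup_time"])

# Monthly signup cohort and age of each event in months since signup.
events["cohort"] = events["signup_time"].dt.to_period("M")
events["age_months"] = (
    (events["event_time"].dt.year - events["signup_time"].dt.year) * 12
    + (events["event_time"].dt.month - events["signup_time"].dt.month)
)

# Retention matrix: distinct active users per cohort by months since signup.
retention = (
    events.groupby(["cohort", "age_months"])["user_id"]
    .nunique()
    .unstack(fill_value=0)
)
retention = retention.div(retention[0], axis=0).round(2)  # normalise by cohort size
print(retention.head())
```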

Posted 2 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

We are looking for a passionate and curious Data Engineer (Entry-Level) to join our DataTech team. This is an excellent opportunity for fresh graduates or early-career professionals to build a strong foundation in data engineering by working on real-world data problems, building robust pipelines, and collaborating across teams.

Responsibilities
- Assist in developing scalable and optimized data pipelines for extraction, transformation, and loading (ETL).
- Write clean and efficient Python scripts and SQL queries for data processing and analysis.
- Work on integrating multiple data sources (databases, APIs, flat files, etc.).
- Support team members in ensuring data quality, accuracy, and consistency.
- Collaborate with analysts, data scientists, and engineers to deliver data solutions.
- Contribute to automation initiatives and help build internal data tools.
- Participate in code reviews, documentation, and team meetings to learn best practices in software and data engineering.
- Assist in building computer vision or other AI models.

Requirements
- B.E./B.Tech in Computer Science, Information Technology, or related field (Tier I college background is a plus).
- Strong foundation in Python (Pandas, NumPy, basic scripting) and SQL (writing basic to intermediate queries).
- Exposure to Git and Linux shell scripting, and to data visualization tools or dashboards (e.g., Tableau, Power BI - optional).
- Bonus if you've explored tools like Airflow, BigQuery, AWS S3/Redshift, or Firebase.
- Built projects or done internships related to data engineering, analytics, or backend systems.
- Strong logical thinking, curiosity to learn, and willingness to dive into data problems.

This job was posted by Payal Verma from Box8.
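
A beginner-level sketch of the ETL responsibilities described above, using pandas with SQLite as a stand-in warehouse; file names and columns are illustrative:

```python
import sqlite3
import pandas as pd

# Extract: hypothetical CSV source.
raw = pd.read_csv("orders.csv")

# Transform: basic deduplication, null handling, and typing.
clean = (
    raw.drop_duplicates()
       .dropna(subset=["order_id"])
       .assign(order_date=lambda d: pd.to_datetime(d["order_date"], errors="coerce"))
)

# Load: write to a local SQLite table standing in for a warehouse.
with sqlite3.connect("warehouse.db") as conn:
    clean.to_sql("orders", conn, if_exists="replace", index=False)
    count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]

print(f"loaded {count} rows")
```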

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

We are seeking an experienced Python Backend Engineer to join our team in building high-performance, scalable backend systems for algorithmic trading. The ideal candidate will have strong expertise in developing exchange integrations, optimizing order management systems, and ensuring low-latency execution.

Responsibilities
- Design and develop scalable backend systems for real-time trading applications.
- Build and optimize order management systems with smart order routing capabilities.
- Integrate multiple exchange APIs (REST, WebSockets, FIX protocol) for seamless connectivity.
- Develop high-performance execution engines with low-latency trade execution.
- Implement real-time monitoring, logging, and alerting systems to ensure reliability.
- Design fault-tolerant and distributed architectures for handling large-scale transactions.
- Work on message queues (RabbitMQ, Kafka) for efficient data processing.
- Ensure system security and compliance with financial industry standards.
- Collaborate with quant researchers and business teams to implement trading logic.

Requirements
- Strong proficiency in Python (4+ years) with a focus on backend development.
- Expertise in API development and integration using REST, WebSockets, and the FIX protocol.
- Experience with asynchronous programming (asyncio, aiohttp) for high-concurrency applications.
- Strong knowledge of database systems (MySQL, PostgreSQL, MongoDB, Redis, time-series databases).
- Proficiency in containerization and orchestration (Docker, Kubernetes, AWS).
- Experience with message queues (RabbitMQ, Kafka) for real-time data processing.
- Knowledge of monitoring tools (Prometheus, Grafana, ELK Stack) for system observability.
- Experience with scalable system design, microservices, and distributed architectures.
- Experience with real-time data processing and execution.
- Experience developing backtesting engines capable of processing millions of events per second.
- Understanding of rule-based trading engines supporting multiple indicators and event processing.
- Experience with data processing libraries: pandas, NumPy, SciPy, scikit-learn, and Polars.
- Knowledge of parallel computing frameworks (Dask) for high-performance computation.
- Familiarity with automated testing frameworks for trading strategies and system components.
- Experience with data visualization tools for trading strategy analysis and performance metrics.
- Knowledge of quantitative trading strategies and algorithmic trading infrastructure.
- Contributions to open-source backend or data engineering projects.

This job was posted by Shivangi Mathur from Unifynd.
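
The asynchronous programming requirement (asyncio, aiohttp) can be illustrated with a small concurrent REST fetcher; the endpoints are placeholders, not real exchange APIs:

```python
import asyncio
import aiohttp

# Hypothetical endpoints; real integrations would hit exchange REST/WebSocket APIs.
URLS = [
    "https://api.example.com/ticker/BTC",
    "https://api.example.com/ticker/ETH",
]

async def fetch(session: aiohttp.ClientSession, url: str) -> dict:
    async with session.get(url, timeout=aiohttp.ClientTimeout(total=5)) as resp:
        resp.raise_for_status()
        return await resp.json()

async def main() -> None:
    async with aiohttp.ClientSession() as session:
        # Fire all requests concurrently; keep failures instead of crashing.
        results = await asyncio.gather(*(fetch(session, u) for u in URLS),
                                       return_exceptions=True)
        for url, result in zip(URLS, results):
            print(url, "->", result)

if __name__ == "__main__":
    asyncio.run(main())
```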

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

We are seeking a skilled Data Analyst with expertise in Power BI and AI tools to join our dynamic team. In this role, you will be responsible for analyzing complex datasets, creating visually compelling reports and dashboards, and leveraging artificial intelligence to derive actionable insights. You will work closely with cross-functional teams to drive data-driven decision-making and improve operational efficiencies. Opportunity Maersk is a global leader in integrated logistics and have been industry pioneers for over a century. Through innovation and transformation, we are redefining the boundaries of possibility, continuously setting new standards for efficiency, sustainability, and excellence. At Maersk, we believe in the power of diversity, collaboration, and continuous learning and we work hard to ensure that the people in our organisation reflect and understand the customers we exist to serve. With over 100,000 employees across 130 countries, we work together to shape the future of global trade and logistics. Join us as we harness cutting-edge technologies and unlock opportunities on a global scale. Together, let's sail towards a brighter, more sustainable future with Maersk. We Offer A tailored onboarding and induction with access to a wide range of training schemes to help with your learning and development. Setting you up for success is important to us. An annual bonus based on company performance. Every colleague at Maersk has access to a fantastic range of wellbeing, mental health support and financial advice through our Employee Assistance Program. For eligible roles, we also offer short-term incentives to recognise, appreciate and reward your work for delivering outstanding results. Whilst the role is advertised as full-time, we would be happy to discuss possible flexible working options and what that might look like for you. Job Summary: We are seeking a numbers-driven Data Analyst with strong proficiency in Power BI and Excel , as well as a solid understanding of AI tools to support our data-driven operations. In this role, you will analyze complex datasets, generate accurate operational reports, and deliver actionable insights that inform strategic and tactical decision-making across the organization. Key Responsibilities: Operational Reporting & Data Analysis: Collect, clean, and analyze large datasets from various sources to deliver actionable insights. Accurately understand and report on operational metrics and business performance, highlighting trends and anomalies. Create interactive and automated Power BI dashboards that visualize key performance indicators (KPIs) and support daily operational needs. Develop Excel-based reports for data exploration, quick analyses, and business reviews. Power BI & Analytical Tools Implementation: Design and implement Power BI solutions using Power Query, Power Pivot, and DAX. Collaborate with stakeholders to tailor reports and dashboards to business needs. Drive the adoption and usage of Power BI across departments by creating user-friendly and insightful visualizations. AI Tools & Advanced Analytics: Apply machine learning techniques and AI tools (e.g., Scikit-learn, Azure AI) to extract patterns and trends from datasets. Develop predictive models to support forecasting, customer behavior analysis, or performance optimization. Leverage AI capabilities to enhance data interpretation and automate insight generation. 
Cross-Functional Collaboration: Work with stakeholders from operations, marketing, finance, and other teams to translate business questions into analytical solutions. Provide strategic and operational support by delivering data in a format that is easy to understand and act on.
Data Governance & Quality Control: Ensure data integrity and consistency through rigorous validation, cleansing, and quality checks. Maintain compliance with internal data governance and security standards.
Continuous Improvement & Learning: Continuously refine reporting systems and tools to improve accuracy, usability, and performance. Stay up to date with developments in Power BI, Excel, AI tools, and industry best practices.

Required Skills and Qualifications:
- Proven experience in data analysis and operational reporting.
- Proficient in Power BI (Power Query, DAX, Power Pivot) and Microsoft Excel (advanced formulas, pivot tables, lookups, etc.).
- Strong analytical mindset with attention to detail and accuracy in reporting.
- Skilled in working with large datasets and deriving meaningful insights.
- Experience in SQL for data extraction and transformation.
- Basic to intermediate knowledge of Python or R for data analysis (e.g., Pandas, NumPy, Seaborn).
- Effective communicator able to present data clearly to non-technical audiences.
- Team player with the ability to work cross-functionally and manage multiple priorities.

Preferred Qualifications:
- Exposure to machine learning frameworks and predictive analytics.
- Understanding of data governance and data privacy principles.

Key Success Metrics:
- Timely and accurate delivery of operational and business-critical reports.
- Clear and actionable data insights that drive process improvements and better decision-making.
- Enhanced adoption of Power BI and Excel-based reporting tools across business units.

Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.

Posted 2 weeks ago

Apply

0 years

0 - 0 Lacs

Hyderābād

On-site

Join Our Team as a Python Trainer with Data Science Expertise! Are you a Python pro with a passion for Data Science? Do you love breaking down complex concepts into simple, engaging lessons that light up learners’ minds? We’re on the hunt for a dynamic Python Trainer with Data Science skills to empower the next wave of data-driven innovators! What You’ll Do: Deliver hands-on, interactive training sessions covering Python programming and essential Data Science techniques. Develop cutting-edge curriculum blending Python fundamentals with real-world Data Science applications — think machine learning, data visualization, and more! Inspire and mentor learners at all levels to confidently tackle data challenges. Keep training content fresh with the latest tools, libraries, and industry trends. Collaborate with a vibrant team dedicated to transforming education and technology. What You Bring: Deep expertise in Python and solid experience with Data Science (pandas, NumPy, scikit-learn, matplotlib, etc.). A talent for teaching complex concepts in an easy-to-understand, engaging way. Strong communication and storytelling skills to make data come alive. Passion for nurturing talent and driving learner success. Curiosity to stay ahead of evolving trends in Python and Data Science. Why You’ll Love Working With Us: Be at the forefront of tech education, shaping data leaders of tomorrow. Dynamic, innovative environment that encourages creativity and growth. Competitive pay, flexible work arrangements, and amazing career growth opportunities. Join a community passionate about tech, learning, and impact. Job Type: Full-time Pay: ₹10,154.70 - ₹35,735.95 per month Schedule: Day shift Language: English (Preferred) Work Location: In person

Posted 2 weeks ago

Apply

40.0 years

4 - 9 Lacs

Hyderābād

On-site

India - Hyderabad
JOB ID: R-213477 | LOCATION: India - Hyderabad | WORK LOCATION TYPE: On Site | DATE POSTED: May 06, 2025 | CATEGORY: Information Systems

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

ABOUT THE ROLE
Role Description: We are looking for a skilled MDM Testing Associate Analyst who will be responsible for ensuring the quality and integrity of Master Data Management (MDM) applications through rigorous testing processes. This role involves collaborating with cross-functional teams to define testing objectives, scope, and deliverables, and to ensure that master data is accurate, consistent, reliable, and compliant with Amgen’s standard operating procedures, policies, and guidelines. Your expertise will be instrumental in ensuring quality and adherence to required standards so that the engineering teams can build and deploy products that are compliant.

Roles & Responsibilities:
Test Planning: Develop and implement comprehensive testing strategies for MDM applications, including defining test objectives, scope, and deliverables. This includes creating detailed test plans, test cases, and test scripts.
Test Execution: Execute test cases, report defects, and ensure that all issues are resolved before deployment. This involves performing functional, integration, regression, and performance testing.
Data Analysis: Analyze data to identify trends, patterns, and insights that can be used to improve business processes and decision-making. This includes validating data accuracy, completeness, and consistency.
Collaboration: Work closely with the MDM, RefData and DQDG teams and other departments to ensure that the organization’s data needs are met. This includes coordinating with data stewards, data architects, and business analysts.
Documentation: Maintain detailed documentation of test cases, test results, and any issues encountered during testing. This includes creating test summary reports and defect logs.
Quality Assurance: Develop and implement data quality metrics to ensure the accuracy and consistency of master data. This includes conducting regular data audits and implementing data cleansing processes.
Compliance: Ensure that all master data is compliant with data privacy and protection regulations. This includes adhering to industry standards and best practices for data management.
Training and Support: Provide training and support to end-users to ensure proper use of MDM systems. This includes creating user manuals and conducting training sessions.
Stay current on new technologies, validation trends, and industry best practices to improve validation efficiencies. Collaborate and communicate effectively with the product teams.

Basic Qualifications and Experience:
Master’s degree with 1 - 3 years of experience in Business, Engineering, IT or related field, OR Bachelor’s degree with 2 - 5 years of experience in Business, Engineering, IT or related field, OR Diploma with 6 - 8 years of experience in Business, Engineering, IT or related field.

Functional Skills:
Must-Have Skills:
2+ years of experience in MDM implementations, primarily with testing (pharmaceutical, biotech, medical devices, etc.)
Extensive experience on ETL/ELT and MDM testing (Creating test plan, test scripts and execution of test scripts and bugs tracking/reporting in JIRA) Informatica MDM: Proficiency in Informatica MDM Hub console, configuration, IDD (Informatica Data Director), IDQ, and data modeling or Reltio MDM: Experience with Reltio components, including data modeling, integration, validation, cleansing, and unification. Advanced SQL: Ability to write and optimize complex SQL queries, including subqueries, joins, and window functions. Data Manipulation: Skills in data transformation techniques like pivoting and unpivoting. Stored Procedures and Triggers: Proficiency in creating and managing stored procedures and triggers for automation. Python: Strong skills in using Python for data analysis, including libraries like Pandas and NumPy etc. Automation: Experience in automating tasks using Python scripts. Machine Learning: Basic understanding of machine learning concepts and libraries like scikit-learn. Strong problem-solving and analytical skills Excellent communication and teamwork skills Good-to-Have Skills: ETL Processes: Knowledge of ETL processes for extracting, transforming, and loading data from various sources. Data Quality Management: Skills in data profiling and cleansing using tools like Informatica. Data Governance: Understanding of data governance frameworks and implementation. Data Stewardship: Ability to work with data stewards to enforce data policies and standards. Selenium: Experience with Selenium for automated testing of web applications. JIRA: Familiarity with JIRA for issue tracking and test case management. Postman: Skills in using Postman for API testing. Understanding of compliance and regulatory considerations in master data. In depth knowledge of GDPR and HIPPA guidelines. Professional Certifications : MDM certification (Informatica or Reltio) SQL Certified Agile or SAFe certified Soft Skills: Strong analytical abilities to assess and improve master data processes and solutions. Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders. Effective problem-solving skills to address data-related issues and implement scalable solutions. Ability to work effectively with global, virtual teams EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
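
A hedged pandas sketch of the kind of source-to-target validation (row counts and value checks) that MDM test scripts automate, using the Pandas skills the posting lists; the file names, join key, and compared attribute are assumptions for illustration:

```python
import pandas as pd

# Hypothetical extracts from a source system and the MDM hub.
source = pd.read_csv("source_customers.csv")
target = pd.read_csv("mdm_golden_records.csv")

# Count check.
print(f"source rows: {len(source)}, target rows: {len(target)}")

# Key and value checks on a shared identifier and one attribute.
merged = source.merge(target, on="customer_id", suffixes=("_src", "_tgt"),
                      how="outer", indicator=True)
missing = merged[merged["_merge"] != "both"]

mismatched = merged[
    (merged["_merge"] == "both")
    & (merged["email_src"].str.lower() != merged["email_tgt"].str.lower())
]

print(f"records missing on one side: {len(missing)}")
print(f"email mismatches: {len(mismatched)}")
```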

Posted 2 weeks ago

Apply

2.0 years

8 - 9 Lacs

Hyderābād

On-site

You’re ready to gain the skills and experience needed to grow within your role and advance your career — and we have the perfect software engineering opportunity for you. As a Software Engineer II at JPMorgan Chase within the Corporate Technology, you are part of an agile team that works to enhance, design, and deliver the software components of the firm’s state-of-the-art technology products in a secure, stable, and scalable way. As an emerging member of a software engineering team, you execute software solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role. Job responsibilities Executes standard software solutions, design, development, and technical troubleshooting Building pipelines in spark, tuning spark queries Writes secure and high-quality code using the syntax of at least one programming language with limited guidance Designs, develops, codes, and troubleshoots with consideration of upstream and downstream systems and technical implications Applies knowledge of tools within the Software Development Life Cycle toolchain to improve the value realized by automation Applies technical troubleshooting to break down solutions and solve technical problems of basic complexity Gathers, analyzes, and draws conclusions from large, diverse data sets to identify problems and contribute to decision-making in service of secure, stable application development Learns and applies system processes, methodologies, and skills for the development of secure, stable code and systems Stay up-to-date with the latest advancements in GenAI and LLM technologies and incorporate them into our data engineering practices. Required qualifications, capabilities, and skills Formal training or certification on software engineering concepts and 2+ years applied experience Hands-on practical experience in system design, application development, testing, and operational stability Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages Background with Machine Learning Frameworks and Big Data technologies such as Hadoop. Strong experience in programming languages such as Java or Python Python Machine Learning library and ecosystem experience ( Pandas and Numpy etc) Experience with Cloud technologies such as AWS or Azure. Experience working with databases such as Cassandra, MongoDB or Teradata Experience across the whole Software Development Life Cycle Exposure to agile methodologies such as CI/CD, Application Resiliency, and Security Experience with Generative AI and Large Language Models, and experience integrating these technologies into data workflows Preferred qualifications, capabilities, and skills Familiarity with modern front-end technologies Exposure to cloud technologies

Posted 2 weeks ago

Apply

3.0 years

10 - 35 Lacs

India

On-site

Job Summary We are seeking a results-driven Data Scientist to join our growing analytics and data science team. The ideal candidate will have hands-on experience working with Big Data platforms , designing and deploying AI/ML models , and leveraging Python and AWS services to solve complex business problems. Key Responsibilities Design, develop, and implement machine learning and deep learning models for predictive analytics and automation. Work with large-scale structured and unstructured datasets using Big Data technologies (e.g., Spark, Hadoop, Hive). Build scalable data pipelines and model deployment workflows in AWS (S3, Lambda, SageMaker, EMR, Glue, Redshift). Perform advanced statistical analysis, hypothesis testing, and A/B testing to support data-driven decisions. Develop, clean, and validate datasets using Python , Pandas, NumPy, and PySpark. Required Qualifications Bachelor's or Master’s degree in Computer Science, Data Science, Statistics, Engineering, or a related field. 3+ years of experience in a Data Science role, preferably in a cloud-first environment. Proficiency in Python and its data science libraries (e.g., scikit-learn, TensorFlow/PyTorch, pandas, NumPy). Hands-on experience with Big Data tools such as Spark, Hive, or Hadoop. Job Types: Full-time, Permanent Pay: ₹1,005,888.28 - ₹3,586,818.50 per year Benefits: Health insurance Schedule: Day shift Monday to Friday Morning shift Work Location: On the road

Posted 2 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Haryana

On-site

Date: 30 May 2025
Location: HR, IN, 122009
Company: Firstsource

Company Profile
Firstsource Solutions Limited, an RP-Sanjiv Goenka Group company (NSE: FSL, BSE: 532809, Reuters: FISO.BO, Bloomberg: FSOL:IN), is a specialized global business process services partner, providing transformational solutions and services spanning the customer lifecycle across Healthcare, Banking and Financial Services, Communications, Media and Technology, Retail, and other diverse industries. With an established presence in the US, the UK, India, Mexico, Australia, South Africa, and the Philippines, we make it happen for our clients, solving their biggest challenges with hyper-focused, domain-centered teams and cutting-edge tech, data, and analytics. Our real-world practitioners work collaboratively to deliver future-focused outcomes.

Process Mining – DE / DS

KEY PERFORMANCE INDICATORS: Assessment & Responsibilities
- Experience in data connection and extraction from different systems, both cloud-based and on-premises (e.g., Azure, Oracle, SAP, BMC, SNOW, Salesforce), and transformation and loading into the process mining tool.
- Support validation of data (counts and values between source systems and the process mining tool).
- Understanding of process insights by creating KPIs and actions, identifying process inefficiencies, and understanding root causes.
- Experience or basic knowledge working with ERP and other similar systems (e.g., Azure, Oracle, SAP, BMC) and understanding of basic data structures, reports, formats, etc. from multiple source systems.
- Develop workflows to monitor processes, detect anomalies, and turn those insights into real-time automated preventive or corrective actions using Action Engine, Action Flows, and other capabilities (desired).
- Understanding of data modelling, data structures, reports, formats, etc. from multiple source systems.
- Process mapping and identifying non-value-add steps to create lean and agile processes.
- Basic knowledge of ERP processes like HLS / FNA / Banking / E&U / HRO, etc.
- Work alongside both technical and non-technical stakeholders to understand business challenges, help design process mining initiatives, and prioritize requests.
- Critical thinking and creative problem-solving skills, as well as the ability to apply theoretical concepts and best practices to solve business problems.
- Basic understanding of IT application lifecycle and methods.
- Able to visualize and create dashboards on KPIs in line with client expectations.

Required Technical Skills
- 3 to 5 years of experience with analysis, ETL, data engineering, and similar platforms.
- Experience with ETL/ELT and BI tools (e.g., Tableau, Power BI).
- Experience in SQL/PQL scripting and knowledge of data mining; able to write complex queries to build transformations (e.g., joins, unions, window functions).
- Knowledge of process improvement techniques/tools and process mining/analytics.
- Good knowledge of Python scripting, including libraries such as NumPy, Pandas, Seaborn, Matplotlib, and scikit-learn.
- Experience in any process/task mining tool (e.g., Celonis, Apromore, Soroco).
- Understanding of enterprise systems (ERP, CRM, etc.) or CRM/order management systems (preferably SAP / Oracle / MS Dynamics / SNOW / Salesforce / Remedy).
- Experience in ML modelling via Python or R is a plus.

Soft Skills
- Strong communication and presentation skills; quick learner.
- Ability to learn new technologies and process improvement techniques.
- Ability to work in various time zones to support global rollouts.
Education University degree in Computer Science / Information Technology, or a Data Science diploma/certification. Disclaimer: Firstsource follows a fair, transparent, and merit-based hiring process. We never ask for money at any stage. Beware of fraudulent offers and always verify through our official channels or @firstsource.com email addresses.
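To make the SQL/PQL requirement above concrete: process mining transformations usually operate on an event log keyed by case, where window functions (LAG/LEAD partitioned by case) derive step durations. The sketch below is a minimal, hypothetical pandas equivalent; the event log and column names are assumptions for illustration, not part of the posting.

import pandas as pd

# Hypothetical event log; column names and values are illustrative assumptions.
events = pd.DataFrame({
    "case_id":   ["A", "A", "A", "B", "B"],
    "activity":  ["Create PO", "Approve PO", "Pay Invoice", "Create PO", "Pay Invoice"],
    "timestamp": pd.to_datetime([
        "2025-01-01 09:00", "2025-01-02 10:30", "2025-01-05 16:00",
        "2025-01-03 11:00", "2025-01-04 12:00",
    ]),
})

# Equivalent of SQL LEAD(timestamp) OVER (PARTITION BY case_id ORDER BY timestamp).
events = events.sort_values(["case_id", "timestamp"])
events["next_timestamp"] = events.groupby("case_id")["timestamp"].shift(-1)
events["step_duration_hours"] = (
    (events["next_timestamp"] - events["timestamp"]).dt.total_seconds() / 3600
)

# Simple KPI: average time spent after each activity before the next step begins.
kpi = events.dropna(subset=["step_duration_hours"]).groupby("activity")["step_duration_hours"].mean()
print(kpi)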

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Cochin

On-site

Educational Qualification: Bachelor of Technology in Computer Science or a related field Years of Experience: 8 - 12 Years Location: Kochi (Work From Office) Required Skills: 8+ years of software development experience, with at least 4+ years as a full-time data scientist, machine learning researcher, or machine learning engineer. Fluency in R and Python, and experience with common data science libraries such as Pandas, NumPy, and scikit-learn. Experience using neural networks and/or deep learning techniques and relevant Python platforms/APIs such as TensorFlow, PyTorch, and Keras in a production setting. Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks. Experience with NLP (especially sentiment analysis, entity recognition, and use of the spaCy library). Experience with SQL databases and SQL queries. Self-starting, with the interest and passion to contribute in a fast-paced, start-up-like environment. Experience working on or with cloud platforms (AWS, GCP, Azure). Roles and Responsibilities Build, deploy, monitor, and manage machine learning models. Identify valuable input features and implement a means for calculating them (feature engineering). Investigate, clean, analyze, and otherwise prepare new data sources for use in higher-order analysis or as input into model features. Data analysis and presentation using data visualization techniques, sometimes to broader audiences within or outside of Terawe. Transition models from research to production, working with Data Engineers. Fine-tune model performance in a production setting using adaptive/incremental machine learning techniques. Map customer and/or product requirements to the correct modeling, analysis, and/or visualization approaches. Recommend ways to improve data reliability and quality and implement them. Collaborate with the product development team to plan new features. Terawe is an IT solutions and services firm based in Bellevue, WA, USA, with offices in India, Australia, Canada and Ireland. We are focused on developing industry-wide solutions on cloud infrastructure, productivity, data platforms and artificial intelligence across multiple verticals such as Retail, Education, Finance, Manufacturing and Healthcare, and work closely with several enterprise customers globally. https://www.terawe.com/ Job Types: Full-time, Permanent Pay: ₹1,800,000.00 - ₹2,400,000.00 per month Work Location: In person
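As a pointer for the spaCy entity-recognition requirement above, a minimal sketch of that workflow is shown below. It assumes the small English pipeline en_core_web_sm has been installed separately (python -m spacy download en_core_web_sm) and is purely illustrative.

import spacy

# Assumes the small English model was downloaded beforehand.
nlp = spacy.load("en_core_web_sm")

doc = nlp("Terawe is an IT services firm based in Bellevue, WA with offices in India and Ireland.")

# Print the named entities the statistical pipeline detects, with their labels.
for ent in doc.ents:
    print(ent.text, ent.label_)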

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Experience: 4.00+ years Salary: USD 80,000.00/year (based on experience) Expected Notice Period: 15 Days Shift: (GMT+05:30) Asia/Kolkata (IST) Opportunity Type: Remote Placement Type: Full-Time Contract for 12 Months (40 hrs a week / 160 hrs a month) (*Note: This is a requirement for one of Uplers' clients, a USA-based Series A-funded technology startup.) What do you need for this opportunity? Must-have skills required: Generative Models, JAX, Reinforcement Learning, Scikit-learn, PyTorch, TensorFlow, AWS, Docker, NLP, Python. A USA-based Series A-funded technology startup is looking for a Senior Deep Learning Engineer. Job Summary: We are seeking a highly skilled and experienced Senior Deep Learning Engineer to join our team. This individual will lead the design, development, and deployment of cutting-edge deep learning models and systems. The ideal candidate is passionate about leveraging state-of-the-art machine learning techniques to solve complex real-world problems, thrives in a collaborative environment, and has a proven track record of delivering impactful AI solutions. Key Responsibilities: Model Development and Optimization: Design, train, and deploy advanced deep learning models for various applications such as computer vision, natural language processing, speech recognition, and recommendation systems. Optimize models for performance, scalability, and efficiency on various hardware platforms (e.g., GPUs, TPUs). Research and Innovation: Stay updated with the latest advancements in deep learning, AI, and related technologies. Develop novel architectures and techniques to push the boundaries of what’s possible in AI applications. System Design and Deployment: Architect and implement scalable and reliable machine learning pipelines for training and inference. Collaborate with software and DevOps engineers to deploy models into production environments. Collaboration and Leadership: Work closely with cross-functional teams, including data scientists, product managers, and software engineers, to define project goals and deliverables. Provide mentorship and technical guidance to junior team members and peers. Data Management: Collaborate with data engineering teams to preprocess, clean, and augment large datasets. Develop tools and processes for efficient data handling and annotation. Performance Evaluation: Define and monitor key performance metrics (KPIs) to evaluate model performance and impact. Conduct rigorous A/B testing and error analysis to continuously improve model outputs. Qualifications and Skills: Education: Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, or a related field. PhD preferred. Experience: 5+ years of experience in developing and deploying deep learning models. Proven track record of delivering AI-driven products or research with measurable impact. Technical Skills: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or JAX. Strong programming skills in Python, with experience in libraries like NumPy, Pandas, and Scikit-learn. Familiarity with distributed computing frameworks such as Spark or Dask. Hands-on experience with cloud platforms (AWS or GCP) and containerization tools (Docker, Kubernetes). Domain Expertise: Experience with at least one specialized domain, such as computer vision, NLP, or time-series analysis. Familiarity with reinforcement learning, generative models, or other advanced AI techniques is a plus. Soft Skills: Strong problem-solving skills and the ability to work independently.
Excellent communication and collaboration abilities. Commitment to fostering a culture of innovation and excellence. How to apply for this opportunity? Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload an updated resume. Step 3: Increase your chances of getting shortlisted and meet the client for the interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
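For the PyTorch skills listed above, a minimal training-loop sketch on synthetic data is shown below; the architecture, data, and hyperparameters are illustrative assumptions, not the client's actual system.

import torch
from torch import nn

# Synthetic binary-classification data; shapes and sizes are illustrative.
X = torch.randn(256, 16)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Standard gradient-descent loop: forward pass, loss, backward pass, update.
for epoch in range(20):
    optimizer.zero_grad()
    logits = model(X)
    loss = loss_fn(logits, y)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.4f}")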

Posted 2 weeks ago

Apply

0 years

0 Lacs

Puducherry, India

On-site

Responsibilities · Work plan development for the image processing project · Geometric modeling and numerical simulation of the project prototype · Handling design, analysis and development of the algorithm · Data exploration through scientific methods · Maintaining a record of field observations during the development phase · Image processing software documentation · Monitoring the image processing software performance periodically after every update · Enhancing the existing methodologies · Feature updates for the existing products Qualifications: B.Tech (or) B.E. - Electronics and Communication Engineering Candidate requirements · Experience in design and development of algorithms · Hands-on experience in conventional image processing and pattern recognition algorithms · Strong experience with the Python programming language · Strong working knowledge of MATLAB and OpenCV tools · Knowledge of linear algebra · Good analytical and problem-solving skills Relevant Tools and Libraries - OpenCV, PCL (Point Cloud Library), Open3D, ROS, MATLAB - Blender, Meshlab, CloudCompare (optional) - Python, C++, CUDA - scikit-image, NumPy, PIL Core Competencies 1. 3D Vision & Estimation - 3D Scanning / Reconstruction, Point Cloud Processing, SLAM, Photogrammetry, Pose Estimation, Angle Estimation, Sensor Fusion, Camera Calibration. 2. Image Processing - Edge Detection (Canny, Sobel, Laplacian), Pattern Recognition (Template Matching, HOG, SIFT, etc.), Image Segmentation, Morphological Operations, Image Registration, Object Tracking, Histogram Equalization / Contrast Enhancement.
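To illustrate the conventional image processing competencies listed above (Canny edge detection, contour-based segmentation), here is a minimal OpenCV/NumPy sketch on a synthetic image; the image, thresholds, and kernel size are arbitrary assumptions rather than project parameters.

import cv2
import numpy as np

# Synthetic test image: a filled white rectangle on a black background.
image = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(image, (50, 50), (150, 150), 255, -1)

# Smooth, then detect edges with Canny (thresholds are illustrative, not tuned).
blurred = cv2.GaussianBlur(image, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)

# Extract external contours from the edge map and report their areas.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for i, contour in enumerate(contours):
    print(f"contour {i}: area = {cv2.contourArea(contour):.1f}")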

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Mohali

On-site

Chicmic Studios Job Role: Data Scientist Experience Required: 3+ Years Skills Required: Data Science, Python, Pandas, Matplotlib Job Description: We are seeking a Data Scientist with strong expertise in data analysis, machine learning, and visualization. The ideal candidate should be proficient in Python, Pandas, and Matplotlib, with experience in building and optimizing data-driven models. Some experience in Natural Language Processing (NLP) and Named Entity Recognition (NER) models would be a plus. Roles & Duties: Analyze and process large datasets using Python and Pandas. Develop and optimize machine learning models for predictive analytics. Create data visualizations using Matplotlib and Seaborn to support decision-making. Perform data cleaning, feature engineering, and statistical analysis. Work with structured and unstructured data to extract meaningful insights. Implement and fine-tune NER models for specific use cases (if required). Collaborate with cross-functional teams to drive data-driven solutions. Required Skills & Qualifications: Strong proficiency in Python and data science libraries (Pandas, NumPy, Scikit-learn, etc.). Experience in data analysis, statistical modeling, and machine learning. Hands-on expertise in data visualization using Matplotlib and Seaborn. Understanding of SQL and database querying. Familiarity with NLP techniques and NER models is a plus. Strong problem-solving and analytical skills. Contact: 9875952836 Office Address: F273, Phase 8B Industrial Area, Mohali, Punjab. Job Type: Full-time Schedule: Day shift, Monday to Friday Work Location: In person
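A minimal sketch of the Pandas-plus-Seaborn workflow described above, using synthetic data in place of real business datasets; the column names and values are assumptions for illustration only.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Synthetic dataset standing in for real business data.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "region": rng.choice(["North", "South", "East", "West"], size=500),
    "revenue": rng.normal(loc=100, scale=25, size=500),
})

# Basic cleaning and a simple engineered feature.
df = df[df["revenue"] > 0]
df["high_value"] = df["revenue"] > df["revenue"].median()

# Summary statistics per region, then a quick visualization saved to disk.
print(df.groupby("region")["revenue"].describe())

sns.boxplot(data=df, x="region", y="revenue")
plt.title("Revenue distribution by region")
plt.tight_layout()
plt.savefig("revenue_by_region.png")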

Posted 2 weeks ago

Apply

3.0 - 5.0 years

0 - 0 Lacs

Mohali

On-site

Job Title: Data Scientist Experience: 3-5 Years Type: Full-time Timings: 12pm-9pm Priority: Immediate Joiners Preferred About Us: Primotech is a leading global IT solutions company specializing in AI/ML, Data Engineering, Full Stack Development, DevOps, Cloud Services, and more. We deliver cutting-edge software solutions to enterprises, startups, and SMBs, leveraging agile methodologies and deep technical expertise to drive innovation and business growth. What We Are Looking For: We are looking for a highly skilled Python developer with a minimum of 3 years of experience in backend development and solid exposure to AI and machine learning concepts. The ideal candidate should be proficient in modern Python web frameworks such as Django, Flask, or FastAPI, and comfortable working with data-focused libraries like NumPy, Pandas, Scikit-learn, and more. Key Responsibilities: * Develop and maintain RESTful APIs and backend services using Python frameworks like Flask, FastAPI, or Django (optional). * Write clean, modular, and scalable code following best practices. * Integrate AI/ML models into backend systems using libraries such as Scikit-learn, TensorFlow, or PyTorch. * Perform data manipulation and processing using NumPy, Pandas, and similar libraries. * Work with SQL and NoSQL databases like PostgreSQL, MySQL, MongoDB, and Firestore. * Implement caching (e.g., Redis), structured logging, and error handling for performance optimization. * Collaborate with frontend developers, DevOps, QA, and data science teams in Agile workflows. * Write and maintain unit tests using pytest or unittest; participate in code reviews. * Debug production issues and continuously improve application performance. * Use Git for version control and work with CI/CD pipelines. * (Preferred) Deploy applications on cloud platforms (AWS, GCP, or Azure). * Ensure secure API development with OAuth2, JWT, and other authentication mechanisms. Required Skills: * Strong proficiency in Python and OOP principles. * Experience with at least one major Python web framework (Django, Flask, FastAPI). * Practical experience with AI/ML development using libraries such as Scikit-learn, TensorFlow, or PyTorch. * Good knowledge of data analysis libraries like NumPy, Pandas, and Matplotlib. * Familiarity with ORMs such as SQLAlchemy or Django ORM. * Experience with Git, Docker, and Linux environments. * Understanding of CI/CD workflows, application security, and scalable system architecture. * Strong debugging and problem-solving skills. * Good communication and teamwork capabilities. Interested candidates may share their resume at disha.mehta@primotech.com. For more information, kindly visit our website https://www.primotech.com/ Job Type: Full-time Pay: ₹45,000.00 - ₹85,000.00 per month Benefits: Provident Fund Schedule: Day shift, Monday to Friday Supplemental Pay: Performance bonus Education: Bachelor's (Preferred) Experience: Python: 3 years (Required) Language: English (Required) Work Location: In person Expected Start Date: 09/06/2025
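To illustrate the combination of FastAPI backend work and scikit-learn model integration described above, here is a minimal, hypothetical serving sketch; the model, feature names, and route are assumptions for illustration, not Primotech's codebase.

from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

app = FastAPI()

# Train a toy model at startup; in practice the model would be loaded
# from a registry or a serialized artifact instead.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

class Features(BaseModel):
    sepal_length: float
    sepal_width: float
    petal_length: float
    petal_width: float

@app.post("/predict")
def predict(features: Features):
    # Assemble a single-row feature matrix and return the predicted class.
    row = [[features.sepal_length, features.sepal_width,
            features.petal_length, features.petal_width]]
    return {"predicted_class": int(model.predict(row)[0])}

Run locally with, for example, uvicorn main:app --reload, assuming the file is saved as main.py.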

Posted 2 weeks ago

Apply

2.0 - 3.0 years

0 Lacs

India

On-site

V-ACCEL AI DYNAMICS PVT. LTD. Job Title: AI/ML Engineer Experience: 2-3 Years Location: Tidel Park, Chennai Interview Mode: F2F Job Summary: We are looking for a dynamic AI/ML Engineer with 2-3 years of hands-on experience in machine learning, analytics, and AI automation. The ideal candidate has practical exposure to agentic AI systems, strong data analytics capabilities, and experience using n8n to build intelligent, event-driven workflows. You will play a critical role in developing AI solutions that power our next-gen SaaS products. This position offers a unique opportunity to work at the intersection of automation, machine learning, and intelligent agents within a high-impact, innovation-driven environment. Key Responsibilities ● Design, develop, and deploy machine learning models for classification, regression, clustering, and other predictive tasks. ● Build and maintain scalable data pipelines; conduct exploratory data analysis (EDA) to derive actionable business insights. ● Architect and implement agentic AI workflows, integrating vector databases and memory modules to enhance system intelligence. ● Develop event-driven automation using n8n, integrating third-party APIs, webhooks, and custom scripts. ● Collaborate with frontend and backend development teams to integrate ML models and AI functionalities into production-grade SaaS applications. Tech Stack & Tools Programming & ML Frameworks: ● Python (Core Language) ● scikit-learn, TensorFlow, PyTorch ● Pandas, NumPy, StatsModels Automation & Agentic AI: ● n8n (Workflow Automation) ● LangChain, AutoGen, CrewAI (Agent Frameworks) ● OpenAI, Gemini, Claude (LLM Providers) Vector & Relational Databases: ● Pinecone, Weaviate, Chroma ● PostgreSQL, MongoDB, MySQL Data & Visualization: ● SQL, Power BI, Tableau ● Matplotlib, Seaborn API & Cloud Integration: ● RESTful APIs, Webhooks, Postman ● AWS (S3, Lambda, EC2), GCP, Azure (Basic Familiarity) DevOps & Deployment (Optional but Preferred): ● Docker, GitHub Actions, Jenkins (Basic CI/CD) ● Streamlit, FastAPI (for ML API deployment) Preferred Qualifications ● Experience with Retrieval-Augmented Generation (RAG) techniques. ● Prior involvement in building SaaS platforms or AI-powered products. ● Familiarity with collaboration tools such as Jira, Notion, or similar. Why Join Us ● Be part of an innovation-led startup shaping the future of AI automation. ● Lead critical AI modules with real-world business impact. ● Thrive in a fast-paced, collaborative environment focused on continuous learning. Job Types: Full-time, Permanent Schedule: Day shift Experience: AI/ML: 3 years (Required) Work Location: In person
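The retrieval step behind the vector-database and RAG items above reduces to nearest-neighbour search over embeddings. The sketch below shows that idea with plain NumPy, using random vectors as a stand-in for a real embedding model and a vector store such as Pinecone or Chroma; every name and value here is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(42)

# Pretend document embeddings; a real system would get these from an embedding model.
documents = ["invoice policy", "refund workflow", "onboarding checklist"]
doc_vectors = rng.normal(size=(len(documents), 8))

def cosine_similarity(query: np.ndarray, matrix: np.ndarray) -> np.ndarray:
    # Normalize both sides, then take dot products to get cosine scores.
    q = query / np.linalg.norm(query)
    m = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    return m @ q

# Pretend query embedding; a real system would embed the user's question instead.
query_vector = rng.normal(size=8)

scores = cosine_similarity(query_vector, doc_vectors)
best = int(np.argmax(scores))
print(f"retrieved context: {documents[best]!r} (score {scores[best]:.3f})")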

Posted 2 weeks ago

Apply

4.0 years

2 - 7 Lacs

Chennai

On-site

Company Overview KLA is a global leader in diversified electronics for the semiconductor manufacturing ecosystem. Virtually every electronic device in the world is produced using our technologies. No laptop, smartphone, wearable device, voice-controlled gadget, flexible screen, VR device or smart car would have made it into your hands without us. KLA invents systems and solutions for the manufacturing of wafers and reticles, integrated circuits, packaging, printed circuit boards and flat panel displays. The innovative ideas and devices that are advancing humanity all begin with inspiration, research and development. KLA focuses more than average on innovation and we invest 15% of sales back into R&D. Our expert teams of physicists, engineers, data scientists and problem-solvers work together with the world’s leading technology providers to accelerate the delivery of tomorrow’s electronic devices. Life here is exciting and our teams thrive on tackling really hard problems. There is never a dull moment with us. Group/Division The Information Technology (IT) group at KLA is involved in every aspect of the global business. IT’s mission is to enable business growth and productivity by connecting people, process, and technology. It focuses not only on enhancing the technology that enables our business to thrive but also on how employees use and are empowered by technology. This integrated approach to customer service, creativity and technological excellence enables employee productivity, business analytics, and process excellence. Job Description/Preferred Qualifications We are seeking a Data Scientist/ML Engineer with experience in NLP, statistics, and Conversational AI. The ideal candidate is adept at statistical modelling, machine learning algorithms, and Conversational AI, with strong experience using a variety of data mining and machine learning methods, coding algorithms, and running simulations. The right candidate will have a passion for discovering solutions hidden in large data sets and working with business partners to improve business outcomes. Responsibilities for Data Scientist/ML Engineer: Building end-to-end ML systems (including traditional ML models and Conversational AI) and deploying them to operate at scale. Research and devise innovative statistical models/ML models for data analysis. Develop and maintain RAG (Retrieval-Augmented Generation) applications. Implement changes to algorithms to improve AI performance and retrieval techniques. Use predictive modeling to optimize the customer service experience and supply chain. Identify patterns in data using statistical modeling; develop custom data models and algorithms to apply to data sets. Work with business partners throughout the organization to find opportunities for using company data to drive solutions. Work with project managers to establish objectives for AI systems. Minimum Qualifications Bachelor’s degree (or equivalent) in computer science, statistics, applied mathematics, or a related discipline. Experience solving problems with Conversational AI in a production environment for a high-tech manufacturing industry. 4+ years of experience in Python and ML packages (NumPy, Keras, Seaborn, Scikit-learn). Excellent understanding of machine learning techniques and algorithms such as regression models, kNN, SVM, Naive Bayes, and decision trees. Expertise/familiarity with one or more deep learning frameworks such as TensorFlow or PyTorch.
We offer a competitive, family-friendly total rewards package. We design our programs to reflect our commitment to an inclusive environment, while ensuring we provide benefits that meet the diverse needs of our employees. KLA is proud to be an equal opportunity employer. Be aware of potentially fraudulent job postings or suspicious recruiting activity by persons currently posing as KLA employees. KLA never asks for any financial compensation to be considered for an interview, to become an employee, or for equipment. Further, KLA does not work with any recruiters or third parties who charge such fees either directly or on behalf of KLA. Please ensure that you have searched KLA’s Careers website for legitimate job postings. KLA follows a recruiting process that involves multiple interviews in person or on video conferencing with our hiring managers. If you are concerned that a communication, an interview, an offer of employment, or an employee is not legitimate, please send an email to talent.acquisition@kla.com to confirm that the person you are communicating with is an employee. We take your privacy very seriously and handle your information confidentially.
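The minimum qualifications in the posting above name classical algorithms (kNN, SVM, Naive Bayes, decision trees) alongside scikit-learn. A minimal, illustrative cross-validation comparison of those models on a bundled toy dataset might look like the following; the dataset and settings are assumptions, not KLA's actual workload.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Scale features where it matters (kNN, SVM); hyperparameters are illustrative defaults.
models = {
    "kNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
}

# 5-fold cross-validation accuracy for each classical model.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:13s} mean accuracy = {scores.mean():.3f}")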

Posted 2 weeks ago

Apply

2.0 years

2 - 7 Lacs

Chennai

On-site

Company Overview KLA is a global leader in diversified electronics for the semiconductor manufacturing ecosystem. Virtually every electronic device in the world is produced using our technologies. No laptop, smartphone, wearable device, voice-controlled gadget, flexible screen, VR device or smart car would have made it into your hands without us. KLA invents systems and solutions for the manufacturing of wafers and reticles, integrated circuits, packaging, printed circuit boards and flat panel displays. The innovative ideas and devices that are advancing humanity all begin with inspiration, research and development. KLA focuses more than average on innovation and we invest 15% of sales back into R&D. Our expert teams of physicists, engineers, data scientists and problem-solvers work together with the world’s leading technology providers to accelerate the delivery of tomorrow’s electronic devices. Life here is exciting and our teams thrive on tackling really hard problems. There is never a dull moment with us. Group/Division The Information Technology (IT) group at KLA is involved in every aspect of the global business. IT’s mission is to enable business growth and productivity by connecting people, process, and technology. It focuses not only on enhancing the technology that enables our business to thrive but also on how employees use and are empowered by technology. This integrated approach to customer service, creativity and technological excellence enables employee productivity, business analytics, and process excellence. Job Description/Preferred Qualifications We are seeking a Data Scientist/ML Engineer with experience in NLP, statistics, and Conversational AI. The ideal candidate is adept at statistical modelling, machine learning algorithms, and Conversational AI, with strong experience using a variety of data mining and machine learning methods, coding algorithms, and running simulations. The right candidate will have a passion for discovering solutions hidden in large data sets and working with business partners to improve business outcomes. Responsibilities for Data Scientist/ML Engineer: Building end-to-end ML systems (including traditional ML models and Conversational AI) and deploying them to operate at scale. Research and devise innovative statistical models/ML models for data analysis. Develop and maintain RAG (Retrieval-Augmented Generation) applications. Implement changes to algorithms to improve AI performance and retrieval techniques. Use predictive modeling to optimize the customer service experience and supply chain. Identify patterns in data using statistical modeling; develop custom data models and algorithms to apply to data sets. Work with business partners throughout the organization to find opportunities for using company data to drive solutions. Work with project managers to establish objectives for AI systems. Minimum Qualifications Bachelor’s degree (or equivalent) in computer science, statistics, applied mathematics, or a related discipline. Experience solving problems with Conversational AI in a production environment for a high-tech manufacturing industry. 2+ years of experience in Python and ML packages (NumPy, Keras, Seaborn, Scikit-learn). Excellent understanding of machine learning techniques and algorithms such as regression models, kNN, SVM, Naive Bayes, and decision trees. Expertise/familiarity with one or more deep learning frameworks such as TensorFlow or PyTorch. We offer a competitive, family-friendly total rewards package.
We design our programs to reflect our commitment to an inclusive environment, while ensuring we provide benefits that meet the diverse needs of our employees. KLA is proud to be an equal opportunity employer. Be aware of potentially fraudulent job postings or suspicious recruiting activity by persons currently posing as KLA employees. KLA never asks for any financial compensation to be considered for an interview, to become an employee, or for equipment. Further, KLA does not work with any recruiters or third parties who charge such fees either directly or on behalf of KLA. Please ensure that you have searched KLA’s Careers website for legitimate job postings. KLA follows a recruiting process that involves multiple interviews in person or on video conferencing with our hiring managers. If you are concerned that a communication, an interview, an offer of employment, or an employee is not legitimate, please send an email to talent.acquisition@kla.com to confirm that the person you are communicating with is an employee. We take your privacy very seriously and handle your information confidentially.

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Bhubaneswar, Odisha, India

Remote

Experience: 4.00+ years Salary: USD 80,000.00/year (based on experience) Expected Notice Period: 15 Days Shift: (GMT+05:30) Asia/Kolkata (IST) Opportunity Type: Remote Placement Type: Full-Time Contract for 12 Months (40 hrs a week / 160 hrs a month) (*Note: This is a requirement for one of Uplers' clients, a USA-based Series A-funded technology startup.) What do you need for this opportunity? Must-have skills required: Generative Models, JAX, Reinforcement Learning, Scikit-learn, PyTorch, TensorFlow, AWS, Docker, NLP, Python. A USA-based Series A-funded technology startup is looking for a Senior Deep Learning Engineer. Job Summary: We are seeking a highly skilled and experienced Senior Deep Learning Engineer to join our team. This individual will lead the design, development, and deployment of cutting-edge deep learning models and systems. The ideal candidate is passionate about leveraging state-of-the-art machine learning techniques to solve complex real-world problems, thrives in a collaborative environment, and has a proven track record of delivering impactful AI solutions. Key Responsibilities: Model Development and Optimization: Design, train, and deploy advanced deep learning models for various applications such as computer vision, natural language processing, speech recognition, and recommendation systems. Optimize models for performance, scalability, and efficiency on various hardware platforms (e.g., GPUs, TPUs). Research and Innovation: Stay updated with the latest advancements in deep learning, AI, and related technologies. Develop novel architectures and techniques to push the boundaries of what’s possible in AI applications. System Design and Deployment: Architect and implement scalable and reliable machine learning pipelines for training and inference. Collaborate with software and DevOps engineers to deploy models into production environments. Collaboration and Leadership: Work closely with cross-functional teams, including data scientists, product managers, and software engineers, to define project goals and deliverables. Provide mentorship and technical guidance to junior team members and peers. Data Management: Collaborate with data engineering teams to preprocess, clean, and augment large datasets. Develop tools and processes for efficient data handling and annotation. Performance Evaluation: Define and monitor key performance metrics (KPIs) to evaluate model performance and impact. Conduct rigorous A/B testing and error analysis to continuously improve model outputs. Qualifications and Skills: Education: Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, or a related field. PhD preferred. Experience: 5+ years of experience in developing and deploying deep learning models. Proven track record of delivering AI-driven products or research with measurable impact. Technical Skills: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or JAX. Strong programming skills in Python, with experience in libraries like NumPy, Pandas, and Scikit-learn. Familiarity with distributed computing frameworks such as Spark or Dask. Hands-on experience with cloud platforms (AWS or GCP) and containerization tools (Docker, Kubernetes). Domain Expertise: Experience with at least one specialized domain, such as computer vision, NLP, or time-series analysis. Familiarity with reinforcement learning, generative models, or other advanced AI techniques is a plus. Soft Skills: Strong problem-solving skills and the ability to work independently.
Excellent communication and collaboration abilities. Commitment to fostering a culture of innovation and excellence. How to apply for this opportunity? Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload an updated resume. Step 3: Increase your chances of getting shortlisted and meet the client for the interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 weeks ago

Apply
Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies