Home
Jobs

2323 Numpy Jobs - Page 14

Filter
Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a Job Alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

2.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Job Description About Goldman Sachs At Goldman Sachs, we connect people, capital and ideas to help solve problems for our clients. We are a leading global financial services firm providing investment banking, securities and investment management services to a substantial and diversified client base that includes corporations, financial institutions, governments and individuals. For us, it's all about bringing together people who are curious, collaborative and have the drive to make things possible for our clients and communities. Global Banking & Markets Our core value is building strong relationships with our institutional clients, which include financial service providers and fund managers. We help them buy and sell financial products on exchanges around the world, raise funding, and manage risk. This is a dynamic, entrepreneurial team with a passion for the markets, made up of individuals who thrive in fast-paced, changing environments and are energized by a bustling trading floor. Business Unit / Role Overview Loan Management primarily supports the Global Banking & Markets Division at Goldman Sachs. This team is responsible for overseeing and facilitating due diligence at both the deal and asset levels for financing, asset management, and sales/securitizations of various loan portfolios across the US, EMEA, and APAC regions. The US/EMEA/APAC Mortgage desk is involved in market-making for loan businesses and Asset-Backed Securities, with a focus on financing and advisory-related workstreams. This role places Loan Management at the core of the evolving banking and capital markets landscape, designing strategies for Goldman Sachs to be a long-term participant in these new capital flows. Loan Management is currently seeking an Analyst to support the US and EMEA loans business. The successful candidate will be responsible for trade closing analytics, quantitative analysis related to pre-trade collateral analytics and valuation, and ongoing post-trade asset management of loan portfolios. Additionally, the role involves ensuring the integrity, definition, structure, and unity of purpose of data within Goldman Sachs' system of record. Job Responsibilities Analyze client portfolios, including Residential and Commercial Real Estate Mortgages and various Consumer Loans (e.g., Student, Credit Card, Auto, Personal). This involves data extraction, quality checks, portfolio segmentation based on key collateral characteristics, and data stratification. Learn and apply Discounted Cash Flow / internal loan pricing models for potential financing or advisory purposes. Provide insightful commentary on collateral performance and key valuation drivers. Estimate credit losses using existing valuation models and rating agency models and summarize the output for desk and client presentations. Ensure the accuracy of data underlying all analytics, including timely verification of deal-related documents. Participate in transaction management activities for both new and existing positions. Interface with Loan Servicers, Controllers, Operations, Risk, and other relevant teams to ensure accurate data capture and flow to relevant systems. Coordinate with Technology and internal departments to develop new vendor data feeds and enhance internal databases. Manage the file load process for vendor data feeds. Manage and create data quality control checks for internal databases and resolve issues through analytic research. Automate repetitive tasks using industry-standard tools.
Assist the mortgage and consumer desk in obtaining market securitization insights. Manage P&L aspects of book portfolios for multiple mortgage desks. Fulfill ad hoc requests from stakeholders. Provide portfolio analytics periodically or on-demand. Basic Qualifications Strong academic background with 2-3 years of related experience in finance, business, math, statistics, or accounting, and a minimum GPA equivalent of 3.0. Highly motivated self-starter with strong mathematical, logical reasoning, and analytical skills. Attention to detail and the ability to prioritize workload and manage expectations until project completion. Demonstrated ability to be a strong team player, collaborating effectively within and across teams. Excellent communication skills, capable of conveying technical concepts clearly and concisely, and managing internal and external relationships. Proactive thinker who anticipates questions, plans for contingencies, finds alternative solutions, identifies clear objectives, and makes defensible judgments regarding workflow. Ability to see the big picture and effectively analyze complex issues. Preferred experience in mortgage banking, fixed income products, bonds/loans, or other financial industry sectors. US/EMEA/APAC experience is an added advantage. Up-to-date with emerging business, economic, and market trends. Proficient in Excel and SQL. Knowledge of coding languages such as Python (pandas/NumPy). Understanding of database objects and data structures. Experience working with large data sets is a plus. Experience in using tools like CAS, Tableau, PowerBI, or Alteryx is a plus. Ability to work under tight time constraints and extended hours as required. Strong project management and organizational skills. Candidates with certifications like CFA or FRM are preferred. About Goldman Sachs At Goldman Sachs, we commit our people, capital and ideas to help our clients, shareholders and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities and investment management firm. Headquartered in New York, we maintain offices around the world. We believe who you are makes you better at what you do. We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers. We're committed to finding reasonable accommodations for candidates with special needs or disabilities during our recruiting process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html © The Goldman Sachs Group, Inc., 2025. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer.
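For illustration only, a minimal sketch of the kind of discounted-cash-flow loan valuation the responsibilities above describe, written with NumPy; the balance, rates, and term are made-up assumptions, not Goldman Sachs pricing models:

```python
import numpy as np

# Illustrative DCF valuation of a level-payment amortizing loan.
# All inputs are hypothetical examples, not actual desk parameters.
balance = 250_000.0        # current principal
annual_coupon = 0.065      # borrower rate
annual_discount = 0.080    # yield used to discount cash flows
term_months = 240          # remaining term

r = annual_coupon / 12
d = annual_discount / 12

# Level monthly payment for a fully amortizing loan.
payment = balance * r / (1 - (1 + r) ** -term_months)

months = np.arange(1, term_months + 1)
cash_flows = np.full(term_months, payment)

# Present value of the scheduled cash flows at the discount rate.
pv = np.sum(cash_flows / (1 + d) ** months)
print(f"Monthly payment: {payment:,.2f}")
print(f"PV of cash flows: {pv:,.2f} ({pv / balance:.1%} of par)")
```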

Posted 5 days ago

Apply

1.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Linkedin logo

This role is for one of Weekday's clients. Salary range: Rs 450000 - Rs 550000 (i.e., INR 4.5-5.5 LPA) Min Experience: 1 year Job Type: full-time We are looking for a research-oriented Environmental Data Scientist to spearhead the development of advanced algorithms that improve the accuracy, reliability, and overall performance of air quality sensor data. This role is focused on addressing real-world challenges in environmental sensing, such as sensor drift, cross-interference, and anomalous data behavior — going beyond conventional data science applications. Requirements Key Responsibilities: Develop and implement algorithms for sensor calibration, signal correction, anomaly detection, and cross-interference mitigation to improve air quality data accuracy and stability. Conduct research on sensor behavior and environmental impacts to guide algorithm design. Collaborate with software and embedded systems teams to integrate algorithms into cloud and edge environments. Analyze large-scale environmental datasets using Python, R, or similar data analysis tools. Validate and refine algorithms using both laboratory and field data through iterative testing. Create visualization tools and dashboards to interpret sensor behavior and assess algorithm effectiveness. Support environmental research initiatives with data-driven statistical analysis. Document methodologies, test results, and findings for internal knowledge sharing and system improvement. Contribute to team efforts by writing clean, efficient code and assisting in overcoming programming challenges. Required Skills & Qualifications: Bachelor's or Master's degree in Environmental Engineering, Environmental Science, Chemical Engineering, Electronics/Instrumentation Engineering, Computer Science, Data Science, Physics, or Atmospheric Science with a focus on data/sensing. 1-2 years of experience working with sensor data or IoT-based environmental monitoring systems. Strong background in algorithm development, signal processing, and statistical data analysis. Proficiency in Python (e.g., pandas, NumPy, scikit-learn) or R, with experience managing real-world sensor datasets. Ability to design and deploy models in cloud-based or embedded environments. Strong analytical thinking, problem-solving skills, and effective communication abilities. Genuine interest in environmental sustainability and clean technologies. Preferred Qualifications: Familiarity with time-series anomaly detection, signal noise reduction, sensor fusion, or geospatial data processing. Exposure to air quality sensor technology, environmental sensor datasets, or dispersion modeling.
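For illustration only, a minimal sketch of the kind of time-series anomaly detection this listing mentions, using a rolling z-score on synthetic sensor readings; the window size and 3-sigma threshold are assumptions, not the client's method:

```python
import numpy as np
import pandas as pd

# Synthetic PM2.5 readings with one injected spike; real pipelines would read sensor feeds.
rng = np.random.default_rng(0)
ts = pd.date_range("2025-01-01", periods=500, freq="min")
pm25 = pd.Series(20 + rng.normal(0, 2, 500), index=ts)
pm25.iloc[120] = 80          # inject an obvious anomaly

window = 60                   # one hour of one-minute readings (assumed)
rolling_mean = pm25.rolling(window, min_periods=window).mean()
rolling_std = pm25.rolling(window, min_periods=window).std()
zscore = (pm25 - rolling_mean) / rolling_std

# Flag readings more than 3 standard deviations from the recent rolling mean.
anomalies = pm25[zscore.abs() > 3]
print(anomalies)
```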

Posted 5 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

It's about Being What's next. What's in it for you? A Data Scientist for AI Products (Global) will be responsible for working in the Artificial Intelligence team, Linde's global corporate AI division engaged with real business challenges and opportunities in multiple countries. The focus of this role is to support the AI team with extending existing and building new AI products for a wide range of use cases across Linde's business and value chain. You'll collaborate across different business and corporate functions in an international team composed of Project Managers, Data Scientists, and Data and Software Engineers in the AI team and others in Linde's Global AI team. As a Data Scientist AI, you will support Linde's AI team with extending existing and building new AI products for a wide range of use cases across Linde's business and value chain. At Linde, the sky is not the limit. If you're looking to build a career where your work reaches beyond your job description and betters the people with whom you work, the communities we serve, and the world in which we all live, at Linde, your opportunities are limitless. Be Linde. Be Limitless. Team Making an impact. What will you do? You will work directly with a variety of different data sources, types and structures to derive actionable insights Developing, customizing and managing AI software products based on Machine and Deep Learning backends will be among your tasks Your role includes strong support on replication of existing products and pipelines to other systems and geographies In addition to that, you will support architectural design and defining data requirements for new developments It will be your responsibility to interact with business functions in identifying opportunities with potential business impact and to support development and deployment of models into production Winning in your role. Do you have what it takes? You have a Bachelor's or Master's degree in Data Science, Computational Statistics/Mathematics, Computer Science, Operations Research or a related field You have a strong understanding of and practical experience with Multivariate Statistics, Machine Learning and Probability concepts Further, you gained experience in articulating business questions and using quantitative techniques to arrive at a solution using available data You demonstrate hands-on experience with preprocessing, feature engineering, feature selection and data cleansing on real-world datasets Preferably you have work experience in an engineering or technology role You bring a strong background in Python and handling large data sets using SQL in a business environment (pandas, numpy, matplotlib, seaborn, sklearn, keras, tensorflow, pytorch, statsmodels etc.) to the role In addition, you have sound knowledge of data architectures and concepts and practical experience in the visualization of large datasets, e.g. with Tableau or PowerBI A results-driven mindset and excellent communication skills with high social competence give you the ability to structure a project from idea to experimentation to prototype to implementation Very good English language skills are required As a plus, you have hands-on experience with DevOps and MS Azure, experience in Azure ML, Kedro or Airflow, and experience in MLflow or similar Why you will love working for us! Linde is a leading global industrial gases and engineering company, operating in more than 100 countries worldwide.
We live our mission of making our world more productive every day by providing high-quality solutions, technologies and services which are making our customers more successful and helping to sustain and protect our planet. On the 1st of April 2020, Linde India Limited and Praxair India Private Limited successfully formed a joint venture, LSAS Services Private Limited. This company will provide Operations and Management (O&M) services to both existing organizations, which will continue to operate separately. LSAS carries forward the commitment towards sustainable development, championed by both legacy organizations. It also carries forward the tradition of developing processes and technologies that have revolutionized the industrial gases industry, serving a variety of end markets including chemicals & refining, food & beverage, electronics, healthcare, manufacturing, and primary metals. Whatever you seek to accomplish, and wherever you want those accomplishments to take you, a career at Linde provides limitless ways to achieve your potential, while making a positive impact in the world. Be Linde. Be Limitless. Have we inspired you? Let's talk about it! We are looking forward to receiving your complete application (motivation letter, CV, certificates) via our online job market. Any designations used of course apply to persons of all genders. The form of speech used here is for simplicity only. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, age, disability, protected veteran status, pregnancy, sexual orientation, gender identity or expression, or any other reason prohibited by applicable law. Praxair India Private Limited acts responsibly towards its shareholders, business partners, employees, society and the environment in every one of its business areas, regions and locations across the globe. The company is committed to technologies and products that unite the goals of customer value and sustainable development.

Posted 5 days ago

Apply

5.0 years

0 Lacs

Greater Delhi Area

Remote

Linkedin logo

ABOUT THE PYTHON DATA ENGINEER ROLE: We are looking for a skilled Python Data Engineer to join our team and work on building high-performance applications and scalable data solutions. In this role, you will be responsible for designing, developing, and maintaining robust Python-based applications, optimizing data pipelines, and integrating various APIs and databases. This is more than just a coding role—it requires strategic thinking, creativity, and a passion for data-driven decision-making to drive results and innovation. KEY RESPONSIBILITIES: Develop, test, and maintain efficient Python applications. Design, develop, and maintain ETL pipelines for efficient data extraction, transformation, and loading. Implement and integrate APIs, web scraping techniques, and database queries to extract data from various sources. Design and implement algorithms for data processing, transformation, and analysis. Write optimized SQL queries and work with relational databases to manage and analyze large datasets. Collaborate with cross-functional teams to understand technical requirements and deliver high-quality solutions. Ensure code quality, performance, and scalability through best practices and code reviews. Stay updated with the latest advancements in Python, data engineering, and backend development. REQUIRED QUALIFICATIONS: Bachelor's/Master's degree in Computer Science, Engineering, or a related field. 3–5+ years of hands-on experience as a Data Engineer using Python. Proficiency in Python frameworks and libraries such as Pandas, NumPy, and Scrapy. Experience with Data Visualization tools such as Power BI and Tableau. Strong understanding of relational databases and SQL. Experience working with cloud platforms such as AWS. Strong problem-solving skills with an analytical mindset. Excellent communication skills and the ability to work in a collaborative team environment. WHY JOIN US? Highly inclusive and collaborative culture built on mutual respect. Focus on core values, initiative, leadership, and adaptability. Strong emphasis on personal and professional development. Flexibility to work remotely and/or hybrid indefinitely. ABOUT WIN: Founded in 1993, WIN is a highly innovative proptech company revolutionizing the real estate industry with cutting-edge software platforms and products. With the stability and reputation of a 30-year legacy paired with the curiosity and agility of a start-up, we've been recognized as an Entrepreneur 500 company, one of the Fastest Growing Companies, and the Most Innovative Home Services Company. OUR CULTURE: Our colleagues are driven by curiosity and tinkering and a desire to make an impact. They enjoy a culture of high energy and collaboration where we listen to each other with empathy, experience personal and professional growth, and celebrate small victories and big accomplishments. Click here to learn more about our company and culture: https://www.linkedin.com/company/winhomeinspection/life
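For illustration only, a minimal extract-transform-load sketch of the kind this role describes, using pandas and SQLite; the table, columns, and file name are hypothetical:

```python
import sqlite3
import pandas as pd

# Extract: in practice this frame would come from an API, a scrape, or a source table.
raw = pd.DataFrame(
    {"listing_id": [1, 2, 2, 3],
     "price": ["100", "250", "250", None],
     "city": [" Delhi", "Mumbai", "Mumbai", "Pune"]}
)

# Transform: de-duplicate, fix types, normalize text, drop incomplete rows.
clean = (
    raw.drop_duplicates(subset="listing_id")
       .assign(price=lambda d: pd.to_numeric(d["price"], errors="coerce"),
               city=lambda d: d["city"].str.strip())
       .dropna(subset=["price"])
)

# Load: to_sql works the same way against other SQLAlchemy-backed databases.
with sqlite3.connect("listings.db") as conn:
    clean.to_sql("listings", conn, if_exists="replace", index=False)
    print(pd.read_sql("SELECT city, AVG(price) AS avg_price FROM listings GROUP BY city", conn))
```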

Posted 5 days ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Job Description Job Purpose ICE Data Services India Private Limited, a subsidiary of Intercontinental Exchange, Inc., is seeking a passionate Senior QA Engineer to join our Quality Assurance team in Hyderabad, India. The candidate will work closely with Business Analysts, End-Users and Developers internally to understand requirements and the impact of changes, and to assist in debugging and enhancing ICE Data Services applications. Responsibilities Review functional requirements to assess their impact on the software applications and formulate test cases from them. Write concise, complete, well organized bug reports, test cases, and status reports. Analyze product requirements and ensure the testing is aligned with a risk-based test approach, mitigating risk exposure within all phases of testing. Participate in analyzing root causes of problems found and assist developers with countermeasures to remove the causes. Evaluate and recommend enhancements for the product under test. Create detailed, comprehensive and well-structured test plans and test cases. Estimate, prioritize, plan and coordinate testing activities. Demonstrate exceptional interpersonal and communication skills and confidence to work with senior stakeholders from development and product. Evaluate the effectiveness and efficiency of QA methods and procedures used and undertake improvement projects to improve QA effectiveness and efficiency. Provide release support during production software deployment. A "can do" attitude and enjoyment of working within a highly collaborative work environment. Knowledge And Experience At least 7+ years of experience in the field of Software Quality Assurance. Good understanding of Quality Assurance concepts, practices and tools. Strong knowledge and hands-on experience with MS SQL/Oracle. Hands-on experience with writing Python scripts for API testing and data testing - comfortable with using data science libraries such as pandas, numpy, scipy, etc. Attention to detail and ability to work on multiple projects at the same time. Highly motivated team player, with very strong analytical, detail-oriented, organized, diagnostic, and debugging skills. Excellent interpersonal, verbal and written skills. Self-starter, energetic, ability to prioritize workload and work with minimal supervision. Experience with mainstream defect tracking tools and test management tools. Desired Knowledge And Experience Experience in the Financial Industry (experience with Fixed Income products is preferred). Experience with UNIX / LINUX systems. Performance testing using JMeter or a similar tool. Experience with code version systems like Git. List Of Preferred Degree(s), License(s), And/or Certification(s) B.S./B.Tech in Computer Science, Electrical Engineering, Math or equivalent
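For illustration only, a minimal sketch of the pandas-based data-quality checks this QA role describes; the column names and rules are assumptions, not ICE's actual test suite:

```python
import pandas as pd

# Data-quality checks of the kind used in API/data testing.
# The frame stands in for an API response or a database extract.
def check_prices(df: pd.DataFrame) -> list:
    """Return a list of human-readable data-quality failures."""
    failures = []
    if df["instrument_id"].duplicated().any():
        failures.append("duplicate instrument_id values")
    if df["price"].isna().any():
        failures.append("missing prices")
    if (df["price"] <= 0).any():
        failures.append("non-positive prices")
    if not df["as_of_date"].is_monotonic_increasing:
        failures.append("as_of_date not sorted")
    return failures

sample = pd.DataFrame({
    "instrument_id": [101, 102, 103],
    "price": [99.5, 101.2, 100.0],
    "as_of_date": pd.to_datetime(["2025-06-01", "2025-06-02", "2025-06-03"]),
})
assert check_prices(sample) == [], check_prices(sample)
print("all data-quality checks passed")
```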

Posted 5 days ago

Apply

0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Linkedin logo

Introduction In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your Role And Responsibilities As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvement by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize value and build creative solutions. Preferred Education Master's Degree Required Technical And Professional Expertise Experience with Apache Spark (PySpark): In-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big Data Technologies: Familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data Engineering Skills: Strong understanding of ETL pipelines, data modeling, and data warehousing concepts. Strong proficiency in Python: Expertise in Python programming with a focus on data processing and manipulation. Data Processing Frameworks: Knowledge of data processing libraries such as Pandas and NumPy. SQL Proficiency: Experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud Platforms: Experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems. Preferred Technical And Professional Experience Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering. Good to have: detection and prevention tools for company products, platform, and customer-facing systems.
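For illustration only, a minimal PySpark sketch of the distributed aggregation work described above; the sample data and column names are assumptions, not an IBM project specification:

```python
from pyspark.sql import SparkSession, functions as F

# Minimal PySpark job: aggregate an event table by day.
spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

events = spark.createDataFrame(
    [("2025-06-01", "click", 3), ("2025-06-01", "view", 10), ("2025-06-02", "click", 5)],
    ["event_date", "event_type", "count"],
)

daily = (
    events.groupBy("event_date")
          .agg(F.sum("count").alias("total_events"))
          .orderBy("event_date")
)
daily.show()

# In a real pipeline the source would be e.g. spark.read.parquet(...) and the sink
# daily.write.mode("overwrite").parquet(...), with Kafka or HDFS upstream.
spark.stop()
```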

Posted 5 days ago

Apply

5.0 years

0 - 0 Lacs

Puducherry

On-site

Title: Python Developer Experience: 5-6 Years Location: Puducherry (On-site) Job Summary: We’re hiring a skilled Python Developer to design, build, and optimize scalable applications. You’ll collaborate with cross-functional teams to deliver high-performance solutions while adhering to best practices in coding, testing, and deployment. Key Responsibilities: ✔ Backend Development: Design and implement robust APIs using Django/Flask/FastAPI. Integrate with databases (PostgreSQL, MySQL, MongoDB). Optimize applications for speed and scalability. ✔ Cloud & DevOps: Deploy apps on AWS/Azure/GCP (Lambda, EC2, S3). Use Docker/Kubernetes for containerization. Implement CI/CD pipelines (GitHub Actions/Jenkins). ✔ Data & Automation: Develop ETL pipelines with Pandas, NumPy, Apache Airflow. Automate tasks using Python scripts. ✔ Collaboration: Work with frontend teams (React/JS) for full-stack integration. Mentor junior developers and conduct code reviews. Skills Required Must-Have: 5+ years of Python development (OOP, async programming). Frameworks: Django/Flask/FastAPI. Databases: SQL/NoSQL (e.g., PostgreSQL, MongoDB). APIs: RESTful/gRPC, authentication (OAuth, JWT). DevOps: Docker, Git, CI/CD, AWS/Azure basics. Good-to-Have: Frontend basics: HTML/CSS, JavaScript. Perks & Benefits: Competitive salary How to Apply: Send your resume and GitHub/portfolio links to hr@cloudbeestech.com with the subject: "Python Developer Application". Job Type: Full-time Pay: ₹40,000.00 - ₹50,000.00 per month Location Type: In-person Schedule: Day shift Work Location: In person Application Deadline: 15/06/2025
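For illustration only, a minimal FastAPI sketch of the kind of backend endpoint this posting describes; the route and model are hypothetical, not the employer's API:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Order(BaseModel):
    item: str
    quantity: int

@app.post("/orders")
def create_order(order: Order) -> dict:
    # In a real service this would persist to PostgreSQL via SQLAlchemy.
    return {"status": "created", "item": order.item, "quantity": order.quantity}

@app.get("/health")
def health() -> dict:
    return {"ok": True}

# Run locally with: uvicorn app:app --reload   (assuming this file is app.py)
```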

Posted 5 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Job Description Expertise in handling large-scale structured and unstructured data. Efficiently handled large-scale generative AI datasets and outputs. Familiarity with the use of Docker tools and pipenv/conda/poetry environments. Comfort level in following Python project management best practices (use of setup.py, logging, pytest, relative module imports, sphinx docs, etc.). Familiarity with the use of GitHub (clone, fetch, pull/push, raising issues and PRs, etc.). High familiarity with the use of DL theory/practices in NLP applications. Comfort level coding in Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, Scikit-learn, NumPy and Pandas. Comfort level using two or more open-source NLP modules like SpaCy, TorchText, fastai.text, farm-haystack, and others. Knowledge of fundamental text data processing (like use of regex, token/word analysis, spelling correction/noise reduction in text, segmenting noisy unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.). Have implemented real-world BERT or other fine-tuned transformer models (sequence classification, NER or QA) from data preparation and model creation through inference and deployment. Use of GCP services like BigQuery, Cloud Functions, Cloud Run, Cloud Build, Vertex AI. Good working knowledge of other open-source packages to benchmark and derive summaries. Experience in using GPU/CPU of cloud and on-prem infrastructures. Responsibilities Design NLP/LLM/GenAI applications/products by following robust coding practices. Explore SoTA models/techniques so that they can be applied to automotive industry use cases. Conduct ML experiments to train/infer models; if need be, build models that abide by memory & latency restrictions. Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes tools. Showcase NLP/LLM/GenAI applications in the best way possible to users through web frameworks (Dash, Plotly, Streamlit, etc.). Converge multiple bots into super apps using LLMs with multimodalities. Develop agentic workflows using Autogen, Agentbuilder, langgraph. Build modular AI/ML products that could be consumed at scale. Qualifications Education: Bachelor's in Engineering or Master's degree in Computer Science, Engineering, Maths or Science. Completion of any modern NLP/LLM courses or participation in open competitions is also welcomed.
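For illustration only, a minimal Hugging Face sketch of transformer inference for sequence classification as mentioned above; the checkpoint and sample texts are illustrative:

```python
from transformers import pipeline

# Sequence-classification inference with a fine-tuned transformer checkpoint.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

texts = [
    "The brake response feels sluggish after the last software update.",
    "Battery range comfortably exceeded the brochure figure on the highway.",
]
for text, result in zip(texts, classifier(texts)):
    print(f"{result['label']:8s} {result['score']:.3f}  {text}")
```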

Posted 5 days ago

Apply

7.0 years

10 - 11 Lacs

Hyderābād

On-site

Hyderabad, India Technology In-Office 10530 Job Description Job Purpose ICE Data Services India Private Limited, a subsidiary of Intercontinental Exchange, Inc., is seeking a passionate Senior QA Engineer to join our Quality Assurance team in Hyderabad, India. The candidate will work closely with Business Analysts, End-Users and Developers internally to understand requirements and the impact of changes, and to assist in debugging and enhancing ICE Data Services applications. Responsibilities Review functional requirements to assess their impact on the software applications and formulate test cases from them. Write concise, complete, well organized bug reports, test cases, and status reports. Analyze product requirements and ensure the testing is aligned with a risk-based test approach, mitigating risk exposure within all phases of testing. Participate in analyzing root causes of problems found and assist developers with countermeasures to remove the causes. Evaluate and recommend enhancements for the product under test. Create detailed, comprehensive and well-structured test plans and test cases. Estimate, prioritize, plan and coordinate testing activities. Demonstrate exceptional interpersonal and communication skills and confidence to work with senior stakeholders from development and product. Evaluate the effectiveness and efficiency of QA methods and procedures used and undertake improvement projects to improve QA effectiveness and efficiency. Provide release support during production software deployment. A "can do" attitude and enjoyment of working within a highly collaborative work environment. Knowledge and Experience At least 7+ years of experience in the field of Software Quality Assurance. Good understanding of Quality Assurance concepts, practices and tools. Strong knowledge and hands-on experience with MS SQL/Oracle. Hands-on experience with writing Python scripts for API testing and data testing - comfortable with using data science libraries such as pandas, numpy, scipy, etc. Attention to detail and ability to work on multiple projects at the same time. Highly motivated team player, with very strong analytical, detail-oriented, organized, diagnostic, and debugging skills. Excellent interpersonal, verbal and written skills. Self-starter, energetic, ability to prioritize workload and work with minimal supervision. Experience with mainstream defect tracking tools and test management tools. Desired Knowledge and Experience Experience in the Financial Industry (experience with Fixed Income products is preferred). Experience with UNIX / LINUX systems. Performance testing using JMeter or a similar tool. Experience with code version systems like Git. List of preferred degree(s), license(s), and/or certification(s) B.S./B.Tech in Computer Science, Electrical Engineering, Math or equivalent

Posted 5 days ago

Apply

2.0 years

5 - 8 Lacs

Hyderābād

On-site

Category: Software Development/Engineering Main location: India, Andhra Pradesh, Hyderabad Position ID: J0525-1947 Employment Type: Full Time Position Description: At CGI, we’re a team of builders. We call our employees members because all who join CGI are building their own company - one that has grown to 72,000 professionals located in 40 countries. Founded in 1976, CGI is a leading IT and business process services firm committed to helping clients succeed. We have the global resources, expertise, stability and dedicated professionals needed to achieve results for our clients - and for our members. Come grow with us. Learn more at www.cgi.com. This is a great opportunity to join a winning team. CGI offers a competitive compensation package with opportunities for growth and professional development. Benefits for full-time, permanent members start on the first day of employment and include a paid time-off program and profit participation and stock purchase plans. We wish to thank all applicants for their interest and effort in applying for this position; however, only candidates selected for interviews will be contacted. No unsolicited agency referrals please. Job Title: Python with SQL Position: Software Engineer Experience: 2-3 Years Category: Software Development/Engineering Location: Hyderabad, Chennai, Bangalore Employment Type: Full Time Your future duties and responsibilities: We are seeking a highly skilled and detail-oriented Python and SQL Developer to join our team. The ideal candidate will be responsible for developing and maintaining data-driven applications, building efficient and scalable solutions, and working with databases to extract and manipulate data for analysis and reporting. Design, develop, and maintain scalable Python applications and microservices. Write complex and optimized SQL queries for data extraction, transformation, and loading (ETL). Develop and automate data pipelines integrating various data sources (REST APIs, files, databases). Work with large datasets in relational databases such as PostgreSQL, MySQL, or SQL Server. Collaborate with data engineers, analysts, and product teams to build high-quality data solutions. Implement unit testing, logging, and error handling to ensure software reliability. Optimize database performance and troubleshoot query issues. Participate in architecture discussions and code reviews. 2+ years of professional experience with Python and SQL in production environments. Deep understanding of Python core concepts including data structures, OOP, exception handling, and multi-threading. Experience with SQL query optimization, stored procedures, indexing, and partitioning. Strong experience with Python libraries such as Pandas, NumPy, SQLAlchemy, PySpark, or similar. Familiarity with ETL pipelines, data validation, and data integration. Experience with Git, CI/CD tools, and development best practices. Excellent problem-solving skills and ability to debug complex systems. Required qualifications to be successful in this role: Experience with cloud platforms (AWS RDS, GCP BigQuery, Azure SQL, etc.).
Exposure to Docker, Kubernetes, or serverless architectures. Understanding of data warehousing and business intelligence concepts. Prior experience working in Agile/Scrum environments. Years of experience: 2+ Relevant experience: 2+ Locations: Hyderabad, Bangalore, Chennai. Education: BTech, MTech, BSc Notice: Immediate to 30 days (serving) Skills: Python, SQLite What you can expect from us: Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.

Posted 5 days ago

Apply

5.0 years

0 - 0 Lacs

Hyderābād

On-site

Job Title: Python Developer Location: Hyderabad / Pune Job Type: Contractual – Full Time Start Date: June 17, 2025 Duration: 3 months Experience Required: 5–8 years Salary Range: ₹65,000 – ₹75,000 per month Mandatory Skills: Python (5–8 years) Microservices architecture Flask and FastAPI PostgreSQL and SQLAlchemy Role Overview: We’re seeking an experienced Python Developer to join a high-impact project focused on scalable microservices and robust API development. If you’re passionate about backend systems, performance optimization, and writing clean, testable code—this role is for you. Key Responsibilities: Develop, deploy, and maintain APIs using Flask and FastAPI Work with microservices architecture and integrate databases (PostgreSQL, MongoDB) Write efficient and scalable code with multithreading/multiprocessing capabilities Build API automation scripts and reusable backend modules Use tools like Pandas, NumPy, and OpenCV for data manipulation and visualization Create RESTful services and consume third-party APIs Work within Agile methodologies and CI/CD pipelines Debug, test, and optimize applications using Python testing frameworks Implement performance tuning and automation for backend processes Collaborate with cross-functional teams on high-performance solutions Preferred Exposure: Data visualization (matplotlib, reportLab) Experience in digital tools for online traffic monitoring Strong scripting and backend automation background Note: Candidates must be ready to start by June 17, 2025. Apply now if you’re ready to bring your Python expertise to a challenging and rewarding project. Job Types: Full-time, Permanent Pay: ₹65,000.00 - ₹75,000.00 per month Benefits: Health insurance Internet reimbursement Paid time off Provident Fund Location Type: In-person Schedule: Day shift Monday to Friday Work Location: In person Speak with the employer +91 8595325853

Posted 5 days ago

Apply

2.0 - 4.0 years

4 - 16 Lacs

Hyderābād

On-site

Job Title: AI/ML Trainer (Contractual – 4 to 5 Months) Location: Hyderabad (Onsite Preferred) Experience Required: 2 to 4 years Contract Duration: 4 to 5 Months (Extendable based on performance and requirement) Job Type: Contractual / Freelance Trainer About the Role: We are seeking an experienced and passionate AI/ML Trainer to deliver hands-on training for a structured Artificial Intelligence & Machine Learning program. This is a contract-based opportunity ideal for professionals with strong technical expertise and a flair for teaching. Key Responsibilities: Deliver in-depth classroom and lab-based training sessions on core AI/ML topics. Design, develop, and maintain training material, assignments, and project modules. Conduct doubt-clearing sessions, assessments, and live coding exercises. Guide students through capstone projects and real-world datasets. Evaluate learner performance and provide constructive feedback. Stay updated with the latest industry trends, tools, and techniques in AI/ML. Topics to Cover: Python for Data Science Statistics and Linear Algebra Basics Machine Learning (Supervised, Unsupervised, Ensemble Techniques) Deep Learning (ANN, CNN, RNN, etc.) Natural Language Processing (NLP) Model Deployment (Flask, Streamlit, etc.) Hands-on with libraries: NumPy, Pandas, Scikit-learn, TensorFlow/Keras, OpenCV Tools: Jupyter, Git, Google Colab, VS Code Eligibility Criteria: Bachelor's or Master's degree in Computer Science, Data Science, or a related field. 2–4 years of relevant industry/training experience in AI/ML. Strong communication and presentation skills. Ability to mentor and engage learners of varying skill levels. Prior training or teaching experience (academic or corporate) preferred. Job Type: Contractual / Temporary Contract length: 4 months Pay: ₹466,229.74 - ₹1,666,220.30 per year Benefits: Food provided Schedule: Day shift Morning shift Work Location: In person
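For illustration only, a minimal scikit-learn sketch of the supervised-learning workflow listed in the training topics (train/test split, fit, evaluate), using a toy dataset:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Split a toy dataset, fit an ensemble classifier, and evaluate on held-out data.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```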

Posted 5 days ago

Apply

0 years

0 - 0 Lacs

Cochin

On-site

Job Title: Faculty Member - Python, Data Science, and Artificial Intelligence Department: Computer Science / Data Science / Artificial Intelligence Location: Kochi Position Type: Full-time Position Overview We are seeking a highly qualified and motivated individual for a faculty position in Python programming, Data Science, and Artificial Intelligence. The successful candidate will contribute to the academic growth of our students, engage in cutting-edge research in AI, machine learning, and data science, and participate in collaborative projects across various disciplines. This position offers an exciting opportunity to shape the future of technology education while contributing to groundbreaking research in the fields of Data Science and Artificial Intelligence. Key Responsibilities Teaching & Curriculum Development Develop and teach undergraduate and graduate-level courses in Python programming, Data Science, and AI. Design and update course materials, including syllabi, lectures, assignments, and assessments. Provide hands-on learning opportunities and practical application of Python, machine learning, and AI techniques. Supervise and mentor students in their academic projects and research activities. Should be willing to work overtime and should have good English knowledge. Advising & Mentorship Advise graduate and undergraduate students on research topics, theses, and projects. Provide mentorship in career development, guiding students into relevant industry or academic roles. Professional Development & Service Stay current with advancements in Data Science, AI, and Python development. Participate in departmental meetings, committees, and faculty activities. Contribute to university-wide initiatives and outreach programs. Attend and present at conferences, workshops, and seminars. Qualifications: BE or ME in Computer Science, Data Science, Artificial Intelligence, or a closely related field. Strong proficiency in Python programming and experience with Python libraries for Data Science (e.g., Pandas, NumPy, SciPy, Matplotlib). Expertise in Machine Learning, Deep Learning, Natural Language Processing (NLP), or Computer Vision. Demonstrated ability to teach at the undergraduate and graduate level in Data Science and AI. A proven track record of research in AI, Data Science, or related fields. Strong communication skills and the ability to engage students in research and problem-solving. Preferred: Experience with version control systems (e.g., Git), data visualization tools, and databases. Prior experience in curriculum development and teaching online courses. Industry experience in Data Science, AI, or machine learning applications is an added advantage. Application Instructions Interested candidates should submit the following: A cover letter that outlines their teaching philosophy, research interests, and career goals. A curriculum vitae (CV) with a list of publications and references. A statement of research interests, including potential projects and funding opportunities. Three professional references, including one who can speak to the applicant’s teaching capabilities. Or share your resume to contact: 7994211184. Job Type: Full-time Pay: ₹10,000.00 - ₹25,000.00 per month Schedule: Day shift Morning shift Supplemental Pay: Overtime pay Performance bonus Language: English (Preferred) Work Location: In person

Posted 5 days ago

Apply

3.0 - 8.0 years

15 - 17 Lacs

Bangalore Rural

Work from Office

Naukri logo

AI Developer Req number: R5051 Employment type: Full time Worksite flexibility: Hybrid Who we are CAI is a global technology services firm with over 8,500 associates worldwide and a yearly revenue of $1 billion+. We have over 40 years of excellence in uniting talent and technology to power the possible for our clients, colleagues, and communities. As a privately held company, we have the freedom and focus to do what is right—whatever it takes. Our tailor-made solutions create lasting results across the public and commercial sectors, and we are trailblazers in bringing neurodiversity to the enterprise. Job Summary We’re seeking a problem-solving AI Developer with 3–5 years of hands-on experience in building and deploying deep learning models, including RAG systems and LLM fine-tuning. You’ll work on cutting-edge projects, contribute to open-source initiatives, and solve complex AI challenges. This is a Full-time and Hybrid position. Job Description What You’ll Do Design and implement scalable deep learning models, including RAG architectures and LLM-based solutions. Fine-tune and optimize LLMs for specific use cases (e.g., text generation, summarization). Collaborate on open-source projects and research initiatives. Optimize model performance for latency, accuracy, and scalability. What You'll Need 3–5 years of professional experience in AI/ML development. Expertise in Python and frameworks like TensorFlow/PyTorch. Proven experience with neural networks (CNNs, RNNs, Transformers) and building RAG systems. Hands-on experience in fine-tuning LLMs (e.g., GPT, Llama, Mistral) for domain-specific tasks. Strong portfolio of open-source contributions (GitHub, Kaggle, etc.). Ability to debug, optimize, and deploy models in production environments. Critical thinker with a track record of research-driven problem-solving. Preferred Experience with diffusion models or reinforcement learning. Publications in top AI conferences (NeurIPS, ICML, CVPR). Familiarity with MLOps tools (MLflow, Kubeflow) and vector databases (Pinecone, FAISS). Knowledge of cloud platforms (AWS/GCP/Azure). Physical Demands This role involves mostly sedentary work, with occasional movement around the office to attend meetings, etc. Ability to perform repetitive tasks on a computer, using a mouse, keyboard, and monitor. Reasonable accommodation statement If you require a reasonable accommodation in completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employment selection process, please direct your inquiries to application.accommodations@cai.io or (888) 824 – 8111.
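For illustration only, a minimal sketch of the retrieval step of a RAG system using FAISS, as referenced above; the embedding dimension and random vectors are stand-ins for real document embeddings:

```python
import numpy as np
import faiss  # vector index used for the retrieval step of a RAG system

# Random vectors stand in for real sentence embeddings of document chunks.
dim = 384                          # typical sentence-embedding size; an assumption here
rng = np.random.default_rng(0)
doc_vectors = rng.random((1000, dim), dtype=np.float32)

index = faiss.IndexFlatL2(dim)     # exact L2 search; IVF/HNSW variants scale further
index.add(doc_vectors)

query = rng.random((1, dim), dtype=np.float32)
distances, ids = index.search(query, 5)
print("retrieved chunk ids:", ids[0])
# The retrieved chunks would then be placed into the LLM prompt for generation.
```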

Posted 5 days ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Job Description Ford/GDIA Mission and Scope: At Ford Motor Company, we believe freedom of movement drives human progress. We also believe in providing you with the freedom to define and realize your dreams. With our incredible plans for the future of mobility, we have a wide variety of opportunities for you to accelerate your career potential as you help us define tomorrow’s transportation. Creating the future of smart mobility requires the highly intelligent use of data, metrics, and analytics. That’s where you can make an impact as part of our Global Data Insight & Analytics team. We are the trusted advisers that enable Ford to clearly see business conditions, customer needs, and the competitive landscape. With our support, key decision-makers can act in meaningful, positive ways. Join us and use your data expertise and analytical skills to drive evidence-based, timely decision-making. The Global Data Insights and Analytics (GDI&A) department at Ford Motor Company is looking for qualified people who can develop scalable solutions to complex real-world problems using Machine Learning, Big Data, Statistics, Econometrics, and Optimization. The goal of GDI&A is to drive evidence-based decision making by providing insights from data. Applications for GDI&A include, but are not limited to, Connected Vehicle, Smart Mobility, Advanced Operations, Manufacturing, Supply Chain, Logistics, and Warranty Analytics. About the Role: You will be part of the FCSD analytics team, playing a critical role in leveraging data science to drive significant business impact within Ford Customer Service Division. As a Data Scientist, you will translate complex business challenges into data-driven solutions. This involves partnering closely with stakeholders to understand problems, working with diverse data sources (including within GCP), developing and deploying scalable AI/ML models, and communicating actionable insights that deliver measurable results for Ford. Responsibilities Job Responsibilities: Build an in-depth understanding of the business domain and data sources, demonstrating strong business acumen. Extract, analyze, and transform data using SQL for insights. Apply statistical methods and develop ML models to solve business problems. Design and implement analytical solutions, contributing to their deployment, ideally leveraging Cloud environments. Work closely and collaboratively with Product Owners, Product Managers, Software Engineers, and Data Engineers within an agile development environment. Integrate and operationalize ML models for real-world impact. Monitor the performance and impact of deployed models, iterating as needed. Present findings and recommendations effectively to both technical and non-technical audiences to inform and drive business decisions. Qualifications Qualifications: At least 3 years of relevant professional experience applying data science techniques to solve business problems. This includes demonstrated hands-on proficiency with SQL and Python. Bachelor's or Master's degree in a quantitative field (e.g., Statistics, Computer Science, Mathematics, Engineering, Economics). Hands-on experience in conducting statistical data analysis (EDA, forecasting, clustering, hypothesis testing, etc.) and applying machine learning techniques (Classification/Regression, NLP, time-series analysis, etc.). Technical Skills: Proficiency in SQL, including the ability to write and optimize queries for data extraction and analysis.
Proficiency in Python for data manipulation (Pandas, NumPy), statistical analysis, and implementing Machine Learning models (Scikit-learn, TensorFlow, PyTorch, etc.). Working knowledge of a Cloud environment (GCP, AWS, or Azure) is preferred for developing and deploying models. Experience with version control systems, particularly Git. Nice to have: Exposure to Generative AI / Large Language Models (LLMs). Functional Skills: Proven ability to understand and formulate business problem statements. Ability to translate business problem statements into data science problems. Strong problem-solving ability, with the capacity to analyze complex issues and develop effective solutions. Excellent verbal and written communication skills, with a demonstrated ability to translate complex technical information and results into simple, understandable language for non-technical audiences. Strong business engagement skills, including the ability to build relationships, collaborate effectively with stakeholders, and contribute to data-driven decision-making.

Posted 5 days ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Job Description The Global Data Insights and Analytics (GDI&A) department at Ford Motor Company is looking for qualified people who can develop scalable solutions to complex real-world problems using Machine Learning, Big Data, Statistics, Econometrics, and Optimization. The candidate should possess the ability to translate a business problem into an analytical problem, identify the relevant data sets needed for addressing the analytical problem, recommend, implement, and validate the best-suited analytical algorithm(s), and generate/deliver insights to stakeholders. Candidates are expected to regularly refer to research papers and be at the cutting edge with respect to algorithms, tools, and techniques. The role is that of an individual contributor; however, the candidate is expected to work in project teams of 2 to 3 people and interact with business partners on a regular basis. Responsibilities Understand business requirements and analyze datasets to determine suitable approaches to meet analytic business needs and support data-driven decision-making by the FCSD business team. Design and implement data analysis and ML models, hypotheses, algorithms and experiments to support data-driven decision-making. Apply various analytics techniques like data mining, predictive modeling, prescriptive modeling, math, statistics, advanced analytics, machine learning models and algorithms, etc., to analyze data and uncover meaningful patterns, relationships, and trends. Design efficient data loading, data augmentation and data analysis techniques to enhance the accuracy and robustness of data science and machine learning models, including scalable models suitable for automation. Research, study and stay updated in the domain of data science, machine learning, analytics tools and techniques, etc., and continuously identify avenues for enhancing analysis efficiency, accuracy and robustness. Qualifications Master's degree in Computer Science, Operational Research, Statistics, Applied Mathematics, or any other engineering discipline. Proficient in querying and analyzing large datasets using BigQuery on GCP. Strong Python skills for data wrangling and automation. 2+ years of hands-on experience in Python programming for data analysis and machine learning, with libraries such as NumPy, Pandas, Matplotlib, Scikit-learn, TensorFlow, PyTorch, NLTK, spaCy, and Gensim. 2+ years of experience with both supervised and unsupervised machine learning techniques. 2+ years of experience with data analysis and visualization using Python packages such as Pandas, NumPy, Matplotlib, Seaborn, or data visualization tools like Dash or QlikSense. 1+ years of experience with the SQL programming language and relational databases.

Posted 5 days ago

Apply

5.0 - 6.0 years

5 - 10 Lacs

India

On-site

Job Summary: We are seeking a highly skilled Python Developer to join our team. Key Responsibilities: Design, develop, and deploy Python applications Work independently on machine learning model development, evaluation, and optimization. Implement scalable and efficient algorithms for predictive analytics and automation. Optimize code for performance, scalability, and maintainability. Collaborate with stakeholders to understand business requirements and translate them into technical solutions. Integrate APIs and third-party tools to enhance functionality. Document processes, code, and best practices for maintainability. Required Skills & Qualifications: 5-6 years of professional experience in Python application development. Proficiency in Python libraries such as Pandas, NumPy, SciPy, and Matplotlib. Experience with SQL and NoSQL databases (PostgreSQL, MongoDB, etc.). Hands-on experience with big data technologies (Apache Spark, Delta Lake, Hadoop, etc.). Strong experience in developing APIs and microservices using FastAPI, Flask, or Django. Good understanding of data structures, algorithms, and software development best practices. Strong problem-solving and debugging skills. Ability to work independently and handle multiple projects simultaneously. Good to have - Working knowledge of cloud platforms (Azure/AWS/GCP) for deploying ML models and data applications. Job Type: Full-time Pay: ₹500,000.00 - ₹1,000,000.00 per year Schedule: Fixed shift Work Location: In person Application Deadline: 30/06/2025 Expected Start Date: 01/07/2025

Posted 5 days ago

Apply

1.0 years

4 - 7 Lacs

Ahmedabad

On-site

Experience: 1+ Year Location: Ahmedabad Minimum Required Skills as below 1+ years of experience in Machine Learning and Data Science projects. Hands-on experience with Python and ML libraries (e.g., scikit-learn, pandas, NumPy). Understanding of supervised and unsupervised learning techniques. Experience with model development, evaluation, and deployment. Ability to work independently.

Posted 5 days ago

Apply

2.0 years

0 Lacs

India

On-site

Job Title: Data Analyst Location: Surat (On-site) Experience Required: Minimum 2 Years Job Type: Full-Time Department: Analytics/Data Science Reporting To: Data Lead / Business Head About the Role: We are looking for a results-driven and detail-oriented Data Analyst to join our team in Surat. The ideal candidate will have hands-on experience in analyzing complex datasets, creating insightful dashboards, and driving business decisions through data. Key Responsibilities: Gather, clean, and analyze structured and unstructured data from various sources. Build and maintain interactive dashboards and reports using Power BI and Looker Studio . Write and optimize SQL queries to extract meaningful insights from databases. Use Python for data manipulation, statistical analysis, and automation tasks. Collaborate with cross-functional teams to understand data needs and deliver solutions. Identify trends, patterns, and key insights that impact business performance. Ensure data accuracy, integrity, and security across all reports and dashboards. Present findings to stakeholders with clear storytelling and visualizations. Key Skills & Tools: Strong proficiency in Power BI and Looker Studio (Google Data Studio) Advanced knowledge of SQL (Joins, subqueries, window functions, etc.) Working knowledge of Python for data analysis (Pandas, NumPy, etc.) Understanding of data warehousing concepts and relational databases Experience with data cleaning, transformation, and visualization Strong analytical thinking and attention to detail Qualifications: Bachelor’s degree in Computer Science, Statistics, Mathematics, Economics, or a related field Minimum 2 years of relevant experience in a data analyst or business analyst role Good communication and presentation skills Job Type: Full-time Pay: Up to ₹30,000.00 per month Benefits: Paid sick time Paid time off Schedule: Day shift Education: Bachelor's (Required) Experience: Power BI: 1 year (Required) Python: 1 year (Required) Language: English (Preferred) Willingness to travel: 75% (Required) Work Location: In person
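For illustration only, a minimal sketch of the SQL window-function usage listed above, run from Python against an in-memory SQLite database; the table and figures are made up:

```python
import sqlite3

# Window-function query example; requires SQLite 3.25+ (bundled with recent Python).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, month TEXT, revenue REAL);
    INSERT INTO sales VALUES
        ('North', '2025-01', 120), ('North', '2025-02', 150),
        ('South', '2025-01', 200), ('South', '2025-02', 180);
""")

query = """
    SELECT region, month, revenue,
           SUM(revenue) OVER (PARTITION BY region ORDER BY month) AS running_total,
           RANK() OVER (ORDER BY revenue DESC)                    AS overall_rank
    FROM sales
    ORDER BY region, month;
"""
for row in conn.execute(query):
    print(row)
conn.close()
```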

Posted 5 days ago

Apply

0 years

10 - 15 Lacs

Noida

On-site

Customer Success at Innovaccer

Our mission is to turn our customers into tech-savvy superheroes, ensuring they achieve success using our platform to meet their organization's business goals. If you're passionate about helping customers realize the value they seek with technology, then our customer success team is the right place for you.

A Day in the Life:

  • Implement, develop, automate, and unit-test business processes between various data repositories and systems.
  • Work with the Delivery team to help them integrate the Python code into their workflows and automate the entire data journey.
  • Implement Python libraries to automate the data ingestion lifecycle and improve code reusability.
  • Troubleshoot data issues and perform root cause analysis to resolve operational issues proactively.
  • Establish best practices around software development in the development team.
  • Develop programs to consume externally hosted open APIs.
  • Analyze and improve the performance, scalability, stability, and security of the code.
  • Improve engineering standards, tooling, and processes.
  • Participate in the full SDLC process using Agile methodology, including discovery, inception, story and task creation, breakdown and estimation, iterative planning, development and unit testing, and release/deployment.
  • Support production environments with any bugs or execution failures.
  • Work with business leaders and customers to understand their pain points and build large-scale solutions for them.
  • Manage and coordinate with multiple teams across Innovaccer to deliver solutions.

What You Need:

  • Hands-on experience in Python (OOP development), SQL (joins, aggregation, filtering, subqueries, grouping, window functions, common table expressions), and Linux.
  • Working knowledge of Python libraries like Pandas, NumPy, Selenium, Elasticsearch, psycopg2, Snowflake, Boto3, requests, urllib3, SQLAlchemy, and pymongo.
  • Experience in scaling Python code across multiple integration touch points and consuming open APIs using Python.
  • Experience with RDBMS and NoSQL databases.
  • Experience with Git.
  • Good experience with current transformation technologies such as XML, JSON, CSV, and SQL.
  • Strong knowledge of agile methodologies.
  • Basic knowledge of analytics tools (Power BI or Tableau).
  • An ambitious person who can work in a flexible startup environment with only one thing in mind: getting things done.
  • Excellent written and verbal communication skills.

Here's What We Offer:

  • Generous Leave Benefits: Enjoy generous leave benefits of up to 40 days.
  • Parental Leave: Experience one of the industry's best parental leave policies to spend time with your new addition.
  • Sabbatical Leave Policy: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered.
  • Health Insurance: We offer health benefits and insurance to you and your family for medically related expenses related to illness, disease, or injury.

Innovaccer is an equal-opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered.

Disclaimer: Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our HR department at px@innovaccer.com. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details.

Posted 5 days ago

Apply

6.0 - 9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


We are looking for a highly skilled and experienced Python Developer with 6 to 9 years of strong software development experience. The ideal candidate will have deep expertise in Flask APIs, microservices architecture, MongoDB, and data analysis libraries, and be proficient in message queue systems and Google Cloud Platform (GCP) services.

Key Responsibilities:

  • Design, develop, and maintain scalable Flask-based REST APIs and microservices.
  • Integrate and manage MongoDB for high-performance data operations.
  • Develop and maintain efficient data processing using NumPy and Pandas.
  • Implement asynchronous communication using messaging queues such as ActiveMQ, RabbitMQ, or Kafka.
  • Write unit and integration tests using pytest and ensure high code quality and test coverage.
  • Design and implement solutions leveraging GCP services like Cloud Functions, Pub/Sub, Cloud Storage, etc.
  • Collaborate with cross-functional teams including DevOps, QA, and Product Managers.
  • Participate in code reviews, architectural discussions, and process improvements.

Required Skills:

  • 6-9 years of hands-on experience in Python development.
  • Strong experience with Flask and building RESTful APIs.
  • In-depth understanding of microservices architecture.
  • Proficiency in MongoDB (schema design, optimization, etc.).
  • Good command of NumPy, Pandas, and data manipulation.
  • Experience with messaging queues: RabbitMQ, ActiveMQ, or Kafka.
  • Strong testing mindset using pytest or similar frameworks.
  • Solid understanding of and hands-on experience with GCP services.
  • Familiarity with containerization (Docker) and CI/CD pipelines is a plus.

Notice Period: Maximum 20 days
Location: Bangalore and Hyderabad
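For illustration only, here is a minimal sketch of the kind of Flask REST endpoint backed by NumPy that this stack implies; the route name, payload shape, and statistics returned are assumptions for the example, not taken from the posting:

from flask import Flask, jsonify, request
import numpy as np

app = Flask(__name__)

@app.route("/stats", methods=["POST"])
def stats():
    # Hypothetical endpoint: accept {"values": [...]} and return NumPy summary statistics.
    payload = request.get_json(silent=True) or {}
    values = np.asarray(payload.get("values", []), dtype=float)
    if values.size == 0:
        return jsonify({"error": "no values provided"}), 400
    return jsonify({"count": int(values.size), "mean": float(values.mean()), "std": float(values.std())})

if __name__ == "__main__":
    app.run(debug=True)  # development server only; use a WSGI server in production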

Posted 5 days ago

Apply

1.0 years

0 - 0 Lacs

Indore

On-site

Call: 7972240453 (Mon to Sat / 11 am - 6 pm)
Company Name: Greamio Technologies Private Limited
Job Title: Data Science Trainer
Location: Indore, Madhya Pradesh
Salary: ₹28,000 - ₹32,000
Employment Type: Full-Time

Job Description: We are looking for a highly motivated and skilled Data Science Trainer to join our team. The ideal candidate will have a passion for teaching and a deep understanding of data science concepts, tools, and methodologies. The trainer will be responsible for delivering high-quality training to students, helping them build a solid foundation in data science, and guiding them through practical projects.

Key Responsibilities:

  • Deliver engaging and interactive training sessions on various data science topics, including statistics, machine learning, data visualization, and programming (Python).
  • Design and develop course materials, assignments, quizzes, and hands-on projects.
  • Mentor and guide students through practical exercises and real-world applications.
  • Provide timely feedback and support to students on their progress and performance.
  • Stay up to date with the latest trends and advancements in data science and incorporate them into the curriculum.
  • Conduct assessments and evaluations to measure the effectiveness of the training program.
  • Adapt teaching methods and materials to meet the diverse needs of learners in both online and classroom settings.

Requirements:

  • Bachelor's or Master's degree in Computer Science, Statistics, or a related field.
  • Proven experience as a Data Science Trainer or in a similar role.
  • Proficiency in programming languages such as Python, C, and SQL.
  • Strong knowledge of data science tools and platforms (e.g., Jupyter, TensorFlow, Pandas, NumPy, Scikit-learn).
  • Excellent communication and presentation skills.
  • Ability to explain complex concepts in a simple and clear manner.
  • Experience in mentoring or teaching is preferred.
  • Certifications in data science or related fields are a plus.

Preferred Skills:

  • Hands-on experience with data analytics, machine learning models, and big data tools.
  • Familiarity with cloud platforms (AWS, Google Cloud, Azure) for data science projects.
  • Ability to design project-based learning experiences for students.

How to Apply: Interested candidates can share their CVs and portfolios at hr@greamio.com with the subject line "Application for Data Science Trainer (Indore) - Your Name".

Job Types: Full-time, Permanent, Fresher
Pay: ₹28,000.00 - ₹32,000.00 per month
Schedule: Day shift
Ability to commute/relocate: Indore, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Required)
Experience: Data science: 1 year (Required); Teaching: 1 year (Required)
Language: Hindi (Required); English (Required)
Work Location: In person

Posted 5 days ago

Apply

0.0 years

0 Lacs

India

On-site


About the Job

Duration: 12 months
Location: PAN India
Timings: Full time (as per company timings)
Notice Period: Within 15 days or immediate joiner
Experience: 0-2 years

Requirements (Java):

  • Design, develop, and maintain both new and existing code, ranging from client-side development (using Angular, JavaScript, HTML, and CSS) to server-side (using Java and Spring Boot, and T-SQL for data persistence and retrieval).
  • Write readable, extensible, testable code while being mindful of performance requirements.
  • Create, maintain, and run unit tests for both new and existing code to deliver defect-free and well-tested code to QA.
  • Conduct design and code reviews and collaborate to ensure your own code passes review.
  • Leverage our cloud infrastructure (AWS) to engineer solutions that make the best of it.
  • Adhere to best-practice development standards.
  • Stay abreast of developments in web applications and programming languages.

Requirements:

  • Strong Core Java 6+ / Java EE hands-on skills.
  • Use of any of the following IDEs: PyCharm for Python, Eclipse or IntelliJ for Java, VS Code for HTML/CSS/JavaScript.
  • Strong knowledge of OOP principles, including design patterns.
  • Good understanding of a relational database engine such as SQL Server.
  • Experience with writing SQL queries on databases like SQL Server.
  • Strong fundamentals in algorithms and data structures.
  • Experience with a modern software development life cycle.
  • Eager to learn, work, and deliver independently.
  • Speak and write fluently in English.

Python (should be proficient in the following):

  • Standard library and OOP in Python
  • Python dependency management through pip
  • Sphinx documentation engine
  • Setuptools
  • Pandas and NumPy
  • Flask framework
  • Jinja templating engine
  • Celery
  • Any production-ready WSGI server such as Gunicorn or uWSGI

Other Personal Characteristics:

  • Dynamic, engaging, self-reliant developer
  • Ability to deal with ambiguity
  • Collaborative and analytical approach
  • Self-confident and humble
  • Open to continuous learning
  • Intelligent, rigorous thinker who can operate successfully amongst bright people
  • Equally comfortable and capable of interacting with technologists as with business executives

Posted 5 days ago

Apply

6.0 years

0 Lacs

Indore, Madhya Pradesh, India

Remote


Job Title: Lead Data Scientist – NLP & Computer Vision
Location: India (Remote, with monthly travel to Indore and occasional travel to the UK)
Employment Type: Permanent, Full-time

About the Role: We're seeking an ambitious and highly capable Lead Data Scientist with deep expertise in Natural Language Processing (NLP) and Computer Vision (OpenCV) to take a leading role in our AI journey. This is a hands-on technical role with strategic scope, ideal for someone who thrives in building robust solutions, enjoys mentoring others, and wants to grow into a Head of Data Science position. You'll be responsible for delivering high-impact AI projects across diverse data domains (text, image, and structured data), while also setting technical direction, guiding junior colleagues, and helping shape our data science capability across the organisation. Excellent communication, both written and spoken, will be critical, as you'll work closely with technical and non-technical stakeholders in India and internationally.

Key Responsibilities:

  • Design, build, and deploy production-ready NLP and computer vision models to solve real-world problems.
  • Lead by example with hands-on development, research, and experimentation using cutting-edge ML/DL techniques.
  • Guide and mentor a small team of junior/mid-level data scientists and actively contribute to their growth.
  • Collaborate with cross-functional teams (Engineering, Product, Business) across geographies.
  • Apply advanced NLP techniques (e.g., classification, summarisation, entity extraction, semantic search) using tools like spaCy, Hugging Face Transformers, BERT, GPT, etc.
  • Solve computer vision challenges (e.g., object detection, image classification, feature extraction) using OpenCV, PyTorch, TensorFlow, etc.
  • Drive quality and performance tuning, experimentation frameworks, and model lifecycle practices.
  • Contribute to defining a strategic roadmap for data science and AI within the business.
  • Work with cloud platforms (Azure, AWS) for model training, deployment, and monitoring.
  • Occasionally travel to Indore (monthly) and to the UK (as needed) for collaborative sessions and leadership meetings.

Required Skills & Experience:

  • 6+ years of hands-on experience in data science, with at least 4 years focused on NLP and computer vision.
  • Strong Python skills, including use of Pandas, NumPy, Scikit-learn, TensorFlow, and PyTorch.
  • Experience deploying ML models in cloud environments (Azure, AWS, or similar).
  • Solid grounding in machine learning fundamentals and deep learning architectures.
  • Proven ability to solve end-to-end ML problems, from exploration to deployment and monitoring.
  • Experience mentoring junior team members or leading technical efforts.
  • Excellent spoken and written English communication skills; able to clearly convey complex ideas to both technical and business audiences.
  • Collaborative mindset, with the ability to engage confidently across distributed teams and international stakeholders.

Posted 5 days ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Designation: ML / MLOps Engineer
Location: Noida (Sector 132)

Key Responsibilities:

  • Model Development & Algorithm Optimization: Design, implement, and optimize ML models and algorithms using libraries and frameworks such as TensorFlow, PyTorch, and scikit-learn to solve complex business problems.
  • Training & Evaluation: Train and evaluate models using historical data, ensuring accuracy, scalability, and efficiency while fine-tuning hyperparameters.
  • Data Preprocessing & Cleaning: Clean, preprocess, and transform raw data into a suitable format for model training and evaluation, applying industry best practices to ensure data quality.
  • Feature Engineering: Conduct feature engineering to extract meaningful features from data that enhance model performance and improve predictive capabilities.
  • Model Deployment & Pipelines: Build end-to-end pipelines and workflows for deploying machine learning models into production environments, leveraging Azure Machine Learning and containerization technologies like Docker and Kubernetes.
  • Production Deployment: Develop and deploy machine learning models to production environments, ensuring scalability and reliability using tools such as Azure Kubernetes Service (AKS).
  • End-to-End ML Lifecycle Automation: Automate the end-to-end machine learning lifecycle, including data ingestion, model training, deployment, and monitoring, ensuring seamless operations and faster model iteration.
  • Performance Optimization: Monitor and improve inference speed and latency to meet real-time processing requirements, ensuring efficient and scalable solutions.
  • NLP, CV, and GenAI Programming: Work on machine learning projects involving Natural Language Processing (NLP), Computer Vision (CV), and Generative AI (GenAI), applying state-of-the-art techniques and frameworks to improve model performance.
  • Collaboration & CI/CD Integration: Collaborate with data scientists and engineers to integrate ML models into production workflows, building and maintaining continuous integration/continuous deployment (CI/CD) pipelines using tools like Azure DevOps, Git, and Jenkins.
  • Monitoring & Optimization: Continuously monitor the performance of deployed models, adjusting parameters and optimizing algorithms to improve accuracy and efficiency.
  • Security & Compliance: Ensure all machine learning models and processes adhere to industry security standards and compliance protocols, such as GDPR and HIPAA.
  • Documentation & Reporting: Document machine learning processes, models, and results to ensure reproducibility and effective communication with stakeholders.

Required Qualifications:

  • Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
  • 3+ years of experience in machine learning operations (MLOps), cloud engineering, or similar roles.
  • Proficiency in Python, with hands-on experience using libraries such as TensorFlow, PyTorch, scikit-learn, Pandas, and NumPy.
  • Strong experience with Azure Machine Learning services, including Azure ML Studio, Azure Databricks, and Azure Kubernetes Service (AKS).
  • Knowledge and experience in building end-to-end ML pipelines, deploying models, and automating the machine learning lifecycle.
  • Expertise in Docker, Kubernetes, and container orchestration for deploying machine learning models at scale.
  • Experience in data engineering practices and familiarity with cloud storage solutions like Azure Blob Storage and Azure Data Lake.
  • Strong understanding of NLP, CV, or GenAI programming, along with the ability to apply these techniques to real-world business problems.
  • Experience with Git, Azure DevOps, or similar tools to manage version control and CI/CD pipelines.
  • Solid experience in machine learning algorithms, model training, evaluation, and hyperparameter tuning.

Posted 5 days ago

Apply

Exploring numpy Jobs in India

Numpy is a widely used library in Python for numerical computing and data analysis. In India, there is a growing demand for professionals with expertise in numpy. Job seekers in this field can find exciting opportunities across various industries. Let's explore the numpy job market in India in more detail.
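As a quick illustration of that numerical computing (the price figures below are invented for the example), a few lines of numpy replace explicit Python loops with whole-array operations:

import numpy as np

prices = np.array([120.5, 118.0, 121.3, 119.8])   # made-up sample data
returns = np.diff(prices) / prices[:-1]            # element-wise arithmetic, no loop
print(returns.mean(), returns.std())               # summary statistics in one call each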

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Gurgaon
  5. Chennai

Average Salary Range

The average salary range for numpy professionals in India varies based on experience level:

  • Entry-level: INR 4-6 lakhs per annum
  • Mid-level: INR 8-12 lakhs per annum
  • Experienced: INR 15-20 lakhs per annum

Career Path

Typically, a career in numpy progresses as follows:

  1. Junior Developer
  2. Data Analyst
  3. Data Scientist
  4. Senior Data Scientist
  5. Tech Lead

Related Skills

In addition to numpy, professionals in this field are often expected to have knowledge of:

  • Pandas
  • Scikit-learn
  • Matplotlib
  • Data visualization
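A minimal sketch of how these libraries are typically combined in day-to-day work (the column names and random-walk data are assumptions made for the example):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=0)
df = pd.DataFrame({
    "day": np.arange(100),                     # numpy array as a DataFrame column
    "value": np.cumsum(rng.normal(size=100)),  # random walk generated with numpy
})
df.plot(x="day", y="value", title="numpy + Pandas + Matplotlib")  # pandas plotting is backed by Matplotlib
plt.show()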

Interview Questions

  • What is numpy and why is it used? (basic)
  • Explain the difference between a Python list and a numpy array. (basic)
  • How can you create a numpy array with all zeros? (basic)
  • What is broadcasting in numpy? (medium)
  • How can you perform element-wise multiplication of two numpy arrays? (medium)
  • Explain the use of the np.where() function in numpy. (medium)
  • What is vectorization in numpy? (advanced)
  • How does memory management work in numpy arrays? (advanced)
  • Describe the difference between np.array and np.matrix in numpy. (advanced)
  • How can you speed up numpy operations? (advanced)
  • ...
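A short, self-contained sketch illustrating several of the concepts behind these questions (creating an array of zeros, broadcasting, element-wise multiplication, np.where, and vectorization):

import numpy as np

zeros = np.zeros((3, 4))               # array of all zeros
a = np.arange(12).reshape(3, 4)        # 3x4 array containing 0..11
col = np.array([[1], [10], [100]])     # shape (3, 1)

broadcasted = a + col                  # broadcasting: (3, 4) + (3, 1) -> (3, 4)
elementwise = a * a                    # element-wise multiplication, not a matrix product
masked = np.where(a > 5, a, 0)         # keep values greater than 5, replace the rest with 0
squares = a ** 2                       # vectorization: one array expression instead of a Python loop

print(zeros.shape, broadcasted.shape, elementwise.shape, masked.shape, squares.shape)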

Closing Remark

As you explore job opportunities in the field of numpy in India, remember to keep honing your skills and stay updated with the latest developments in the industry. By preparing thoroughly and applying confidently, you can land the numpy job of your dreams!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies