
4556 Numpy Jobs - Page 34

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 5.0 years

7 - 14 Lacs

Hyderabad

Work from Office

Role & responsibilities - Strong knowledge of OOP and of creating custom Python packages for serverless applications. Strong knowledge of SQL querying. Hands-on experience with AWS services such as Lambda, EC2, EMR, S3, Athena, Batch, Textract, and Comprehend. Strong expertise in extracting text, tables, and logos from low-quality scanned multipage PDFs (80-150 DPI) and images. Good understanding of probability and statistics concepts, and the ability to find hidden patterns and relevant insights in data. Knowledge of applying state-of-the-art NLP models such as BERT, GPT-x, sciSpacy, bidirectional LSTM-CNNs, RNNs, and AWS Comprehend Medical for clinical Named Entity Recognition (NER). Strong leadership skills. Deployment of custom-trained and prebuilt NER models using AWS SageMaker. Knowledge of setting up an AWS Textract pipeline for large-scale text processing using AWS SNS, SQS, Lambda, and EC2. Intellectual curiosity to learn new things. ISMS responsibilities should be followed as per company policy. Preferred candidate profile - 3+ years of hands-on experience with Python and data science tools such as pandas, NumPy, SciPy, and Matplotlib, with strong exposure to regular expressions. 3+ years of hands-on experience with machine learning algorithms such as SVM, CART, bagging and boosting algorithms, NLP-based ML algorithms, and text mining. Hands-on expertise in integrating multiple data sources into a streamlined pipeline.
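The regular-expression work this role describes — pulling labelled fields out of noisy OCR output from low-DPI scans — can be sketched in a few lines. The sample text, field names, and patterns below are invented for illustration, not taken from the posting:

```python
import re

# Hypothetical sample of noisy OCR output from a low-resolution scanned page.
ocr_text = """
Patient Name : JOHN  DOE
DOB:  04/12/1985
Claim No :  CLM-2024-00173
"""

def extract_fields(text):
    """Pull labelled fields out of OCR text, tolerating stray spaces."""
    patterns = {
        "name": r"Patient\s*Name\s*:\s*(.+)",
        "dob": r"DOB\s*:\s*([\d/]+)",
        "claim": r"Claim\s*No\s*:\s*([\w-]+)",
    }
    out = {}
    for key, pat in patterns.items():
        m = re.search(pat, text)
        # Collapse the doubled spaces that low-DPI OCR often introduces.
        out[key] = re.sub(r"\s+", " ", m.group(1)).strip() if m else None
    return out

print(extract_fields(ocr_text))
```

In a real Textract pipeline, this step would run downstream of the OCR service, after Textract's JSON response has been flattened to plain text.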

Posted 2 weeks ago

Apply

5.0 - 8.0 years

15 - 20 Lacs

Noida

Work from Office

Technical Expertise: Must have: proficiency in Java. Hands-on experience with AWS AI services such as AWS Bedrock and SageMaker. Good to have: Python programming with experience in libraries such as TensorFlow, PyTorch, NumPy, and Pandas; experience with DevOps practices, including CI/CD pipelines, system monitoring, and troubleshooting in a production environment; familiarity with other platforms such as Google Gemini and Copilot technologies. Soft Skills: Excellent communication and collaboration skills, with the ability to work effectively with stakeholders across business and technical teams. Strong problem-solving and analytical skills. Ability to work with teams in a dynamic, fast-paced environment. Key Responsibilities: Design, develop, and implement AI and machine learning models using AWS AI services such as AWS Bedrock and SageMaker. Fine-tune and optimize AI models for business use cases. Implement generative AI solutions using AWS Bedrock and Java. Write efficient, clean, and maintainable Java/Python code for AI applications. Develop and deploy RESTful APIs using frameworks such as Flask or Django for model integration and consumption. Experience: 5 to 8 years of experience in software development, with 3+ years in AI/ML or generative AI projects. Demonstrated experience in deploying and managing AI applications in production environments. Mandatory Competencies: Programming Language - Java - Core Java (Java 8+); Data Science and Machine Learning - Gen AI; Data Science and Machine Learning - Python; Behavioural - Communication and collaboration; Cloud - AWS - AWS Lambda, AWS EventBridge, AWS Fargate; Middleware - API Middleware - Microservices.
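The RESTful model-serving requirement above boils down to a JSON-in/JSON-out contract. Here is a framework-free sketch of that contract; the stand-in model and the `features` field are invented for illustration, and a real service would wrap this handler in a Flask or Django view:

```python
import json

def fake_model_predict(features):
    """Stand-in for a real model; scores by a fixed weighted sum (illustrative only)."""
    weights = [0.4, 0.6]
    return sum(w * x for w, x in zip(weights, features))

def handle_predict(request_body: str) -> str:
    """Validate a JSON request and return a JSON prediction, as a
    Flask/Django view would, minus the framework plumbing."""
    try:
        payload = json.loads(request_body)
        features = payload["features"]
    except (json.JSONDecodeError, KeyError):
        return json.dumps({"error": "body must be JSON with a 'features' list"})
    score = fake_model_predict(features)
    return json.dumps({"prediction": round(score, 4)})

print(handle_predict('{"features": [1.0, 2.0]}'))
```

Keeping validation and scoring out of the framework layer like this also makes the handler unit-testable without an HTTP server.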

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Mahesana, Gujarat, India

On-site

Location: Ahmedabad, Gujarat, India (Work From Office ONLY) Experience: 2 - 4 years Salary: ₹35,000 - ₹40,000 per month + Performance Incentives About the Role: As a key member of our US Client/Student Development team, you'll be instrumental in empowering the next generation of data science professionals. Your primary focus will be on: Content Creation: Designing and developing comprehensive and engaging training materials, modules, and exercises covering various aspects of data science. Live Session Delivery: Conducting interactive live online sessions, workshops, and webinars, demonstrating complex data science concepts and practical applications. Mentorship: Providing guidance, support, and constructive feedback to students/clients on their data science projects, helping them understand challenges, refine models, and build practical skills. Curriculum Development: Collaborating with the team to continuously refine and update data science course curricula based on industry trends, new research, and student feedback. Key Responsibilities: Design and develop high-quality data science course content covering statistical modeling, machine learning algorithms, deep learning fundamentals, and data analysis techniques. Prepare and deliver engaging live sessions on topics such as predictive modeling, natural language processing (NLP), computer vision, time series analysis, and experiment design. Guide and mentor students through real-world data science projects, helping them with data collection, cleaning, feature engineering, model building, evaluation, and interpretation. Simplify complex algorithms and theoretical concepts into easily understandable and actionable insights for a diverse audience. Create practical assignments, case studies, and capstone projects that reinforce learning and develop problem-solving skills. Stay updated with the latest advancements in data science, machine learning, and AI to ensure content relevance. 
Required Skills & Experience: Experience: 2 to 4 years of hands-on industry experience as a Data Scientist or in a similar analytical role. Communication: Excellent English communication skills (both written and verbal) are compulsory – the ability to articulate complex technical concepts clearly and concisely to diverse audiences is paramount. Passion for Teaching: A strong desire and aptitude for training, mentoring, and guiding aspiring data science professionals. Analytical Skills: Strong problem-solving abilities, critical thinking, and a structured approach to breaking down complex data problems. Work Ethic: Highly motivated, proactive, and able to work independently as well as collaboratively in a fast-paced environment. Location Commitment: Must be willing to work from our Ahmedabad office full-time. Required Technical Skills: Proficiency in Python (NumPy, Pandas, Scikit-learn, Matplotlib, Seaborn) and/or R for data manipulation, analysis, and visualization. Strong understanding of machine learning algorithms (regression, classification, clustering, ensemble methods) and their applications. Experience with deep learning frameworks (TensorFlow, Keras, or PyTorch) is a significant plus. Solid SQL skills for data extraction and querying from relational databases. Experience with data visualization tools (e.g., Tableau, Power BI, Looker) or libraries (Matplotlib, Seaborn). Familiarity with cloud platforms (AWS, Azure, GCP) for data science services is a plus. Knowledge of statistical concepts and hypothesis testing. What We Offer: A competitive salary and attractive performance-based incentives. The unique opportunity to directly impact the careers of aspiring tech professionals. A collaborative, innovative, and supportive work environment. Continuous learning and professional growth opportunities in a niche domain. Be part of a rapidly growing team focused on global client engagement.
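As a taste of the regression material a trainer in this role would teach, here is a minimal NumPy sketch of ordinary least squares via the normal equations, on an invented noise-free dataset (so the fit recovers the true line exactly):

```python
import numpy as np

# Tiny synthetic dataset: y = 2x + 1 with no noise, so the fit is exact.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([3.0, 5.0, 7.0, 9.0])

# Add an intercept column, then solve the least-squares problem
# beta = argmin ||Xb @ beta - y||^2  -- the closed form taught before
# moving on to gradient descent.
Xb = np.hstack([np.ones((X.shape[0], 1)), X])
beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)

print(beta)  # intercept ~1, slope ~2
```

Students would then compare this closed-form answer against scikit-learn's `LinearRegression` to see both agree.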

Posted 2 weeks ago

Apply

1.0 - 4.0 years

5 - 9 Lacs

Noida

Work from Office

We are looking for a skilled Python Developer with expertise in Django to join our team at NextGen Web Services. The ideal candidate will have 1 to 4 years of experience and be available to work remotely. Roles and Responsibility Design, develop, and test software applications using Python and Django. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop high-quality, scalable, and efficient code. Troubleshoot and resolve technical issues efficiently. Participate in code reviews and contribute to improving overall code quality. Stay updated with industry trends and emerging technologies. Job Requirements Proficiency in Python programming language. Experience with Django framework. Strong understanding of software development principles and methodologies. Excellent problem-solving skills and attention to detail. Ability to work collaboratively in a team environment. Strong communication and interpersonal skills. Additional Info The company offers a dynamic and supportive work environment, with opportunities for professional growth and development.

Posted 2 weeks ago

Apply

3.0 - 5.0 years

10 - 14 Lacs

Noida

Work from Office

We are looking for a highly skilled AI & ML Engineer with 3 to 5 years of experience to join our team at Stanra Tech Solutions. The ideal candidate will have a strong background in artificial intelligence and machine learning, with excellent problem-solving skills. Roles and Responsibility Design and develop AI and ML models to solve complex problems. Collaborate with cross-functional teams to integrate AI and ML solutions into existing systems. Develop and maintain large-scale data pipelines and architectures. Conduct research and stay updated on the latest trends and technologies in AI and ML. Work closely with stakeholders to understand business requirements and develop tailored solutions. Ensure high-quality code and adhere to best practices. Job Requirements Strong proficiency in programming languages such as Python, Java, or C++. Experience with deep learning frameworks like TensorFlow or PyTorch. Knowledge of computer vision, natural language processing, or reinforcement learning. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills. Ability to work in a fast-paced environment and meet deadlines.

Posted 2 weeks ago

Apply

1.0 - 3.0 years

3 - 6 Lacs

Bengaluru

Work from Office

We are seeking developers who are eager to grow, experiment, and work on challenging technical problems. As an AI/ML Engineer-I at GeekyAnts, you'll be joining a fast-paced environment where you'll contribute to real projects from Day 1 and gain exposure to the latest trends in software engineering and AI. Key Responsibilities Write clean, efficient, and scalable code in Python. Work closely with cross-functional teams to design, develop, and deliver high-quality software solutions. Collaborate with senior developers and team leads for code reviews and knowledge sharing. Continuously learn and adapt to new technologies and frameworks. Contribute to project discussions, documentation, and team activities. Ideal Candidate Profile Technical Skills: Strong grasp of Python programming (Mandatory). Completed a Python course or certification from a recognized institute (Preferred). Basic understanding of Artificial Intelligence or exposure to AI projects is a plus. Familiarity with version control (e.g., Git) and collaborative tools (e.g., Jira, Slack) is appreciated. Soft Skills: Excellent verbal and written communication skills. Strong problem-solving mindset and willingness to take initiative. Open to feedback and capable of working in a dynamic, team-oriented setting.

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Experience - 8+ years. Location - Mumbai, Pune. AI/Data Science: LangChain expert skilled in Python for creating new chains or fixing existing ones, who can also leverage ML or Gen AI models for data extraction. Medium-level programming proficiency: especially in Python (NumPy, Pandas, Scikit-learn), and optionally R or Java. Data lake: Microsoft SQL Server, Azure Cosmos DB, Azure Data Lake. Machine learning algorithms: supervised, unsupervised, and ensemble methods (e.g., decision trees, SVMs, XGBoost). Cloud computing. Deep learning: understanding of neural networks, CNNs, and RNNs using frameworks such as TensorFlow or PyTorch. Data handling & preprocessing: cleaning, transforming, and visualizing large datasets. Gen AI, OpenAI, AI Search. Version control: Git/GitHub for collaboration and reproducibility. Problem solving & critical thinking: ability to frame business problems as data science tasks.
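The "creating new chains" work this role centers on is, at heart, function composition: prompt formatting, a model call, and output parsing glued into one callable. Since LangChain's own API moves quickly, here is a deliberately library-free sketch of that idea; every name below (the formatter, the fake model, the parser) is a hypothetical stand-in:

```python
def make_chain(*steps):
    """Compose single-argument steps left-to-right into one callable."""
    def chain(x):
        for step in steps:
            x = step(x)
        return x
    return chain

# Three illustrative stages standing in for a prompt template, an LLM
# call, and an output parser.
format_prompt = lambda q: f"Answer briefly: {q}"
fake_llm = lambda prompt: f"ANSWER[{prompt.upper()}]"   # stand-in for a model call
parse = lambda raw: raw.removeprefix("ANSWER[").removesuffix("]")

qa_chain = make_chain(format_prompt, fake_llm, parse)
print(qa_chain("what is 2+2?"))
```

"Fixing an existing chain" then usually means swapping or adjusting one stage without disturbing the others, which this composition style makes easy.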

Posted 2 weeks ago

Apply

0 years

1 Lacs

Ahmedabad, Gujarat, India

On-site

Company Description I Vision Infotech is a full-fledged IT company delivering high-quality, cost-effective, and reliable web and e-commerce solutions. Founded in 2011 and located in India, we serve both domestic and international clients, including those from the USA, Malaysia, Australia, Canada, and the United Kingdom. We offer a wide range of services in web design, development, e-commerce, and mobile app development across platforms such as Android, iOS, Windows, and BlackBerry. Role Description This is a full-time on-site role for a Python Developer – Data Science & Django, located in Ahmedabad. The Python Developer will be responsible for developing back-end web applications, writing efficient and maintainable code, and collaborating with cross-functional teams to integrate user-facing elements with server-side logic. Day-to-day tasks include designing, developing, and maintaining software applications and databases, ensuring high performance and responsiveness. The role involves object-oriented programming, software development, and working closely with data science teams to implement and optimize data-driven solutions. Key Responsibilities Write clean and efficient Python code Develop web applications using Django Work on data collection, processing, and visualization using Pandas/NumPy/Matplotlib Build REST APIs using Django REST Framework Collaborate with the team on real-world project deployments Required Skills Strong knowledge of Python programming Basic understanding of data science libraries (Pandas, NumPy, Matplotlib, Scikit-learn) Hands-on experience with Django or Django REST Framework Understanding of HTML, CSS, JavaScript (basic level) Knowledge of Git is a plus B.E./B.Tech/BCA/MCA/B.Sc/M.Sc (CS/IT) or equivalent Perks And Benefits Real-time project exposure Internship/Training certificate Flexible working hours
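The "data collection, processing, and visualization" responsibility above typically starts with a cleanup-then-aggregate pass in Pandas. A minimal sketch on invented records (the column names and values are illustrative only):

```python
import pandas as pd

# Hypothetical raw records, with the messiness real collection produces:
# inconsistent casing, stray whitespace, and a missing value.
raw = pd.DataFrame({
    "city": ["Ahmedabad", "ahmedabad ", "Surat", None],
    "orders": [10, 5, 7, 3],
})

# Drop unusable rows, normalize the labels, then aggregate.
clean = (
    raw.dropna(subset=["city"])
       .assign(city=lambda d: d["city"].str.strip().str.title())
)
totals = clean.groupby("city")["orders"].sum()
print(totals)
```

The resulting series is what would then feed a Matplotlib bar chart or a Django REST endpoint.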

Posted 2 weeks ago

Apply

3.0 - 6.0 years

7 - 11 Lacs

Bengaluru

Work from Office

You will be responsible for the deployment and maintenance of the group data science platform infrastructure, on which data science pipelines are deployed and scaled. To achieve this, you will collaborate with Data Scientists and Data Engineers from various business lines and the Global Technology Service infrastructure team (GTS). Roles: Implement techniques and processes to support the development and scaling of data science pipelines. Industrialize inference, retraining, and monitoring of data science pipelines, ensuring their maintainability and compliance. Provide platform support to end users. Be attentive to the needs and requirements expressed by end users. Anticipate needs and necessary developments for the platform. Work closely with Data Scientists, Data Engineers, and business stakeholders. Stay updated and demonstrate a keen interest in the MLOps domain. Environment: Cloud: on-premise, Azure. Python, Kubernetes. Integrated vendor solutions: Dataiku, Snowflake. DB: PostgreSQL. Distributed computing: Spark. Big data: Hadoop, S3/Scality, MapR. Data science: Scikit-learn, Transformers, MLflow, Kedro. DevOps, CI/CD: JFrog, Harbor, GitHub Actions, Jenkins. Monitoring: Elasticsearch/Kibana, Grafana, Zabbix. Agile ceremonies: PI planning, Sprint, Sprint Review, Refinement, Retrospectives; ITIL framework. Technical Skills: Python, FastAPI, SQLAlchemy, NumPy, Pandas, Scikit-learn, Transformers, Kubernetes, Docker, Pytest. CI/CD: Jenkins, Ansible, GitHub Actions, Harbor, Docker. Soft Skills: Client focus: demonstrate strong listening skills, understanding, and anticipation of user needs. Team spirit: organize collaboration and workshops to find the best solutions, and share expertise with colleagues. Innovation: propose innovative ideas, solutions, or strategies, and think outside the box; prefer simplicity over complexity. Responsibility: take ownership, keep commitments, and respect deadlines.
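"Industrializing inference and monitoring" usually starts with instrumenting the model call itself. Here is a framework-free sketch of wrapping a predict function with the call, failure, and latency counters a dashboard (Grafana or Kibana, in the stack above) would scrape; the class and its fields are hypothetical, not from the posting:

```python
import time

class MonitoredModel:
    """Wrap a predict function and keep basic serving metrics --
    a minimal sketch of the monitoring side of an inference pipeline."""

    def __init__(self, predict_fn):
        self.predict_fn = predict_fn
        self.calls = 0
        self.failures = 0
        self.total_seconds = 0.0

    def predict(self, x):
        start = time.perf_counter()
        self.calls += 1
        try:
            return self.predict_fn(x)
        except Exception:
            self.failures += 1       # a rising failure rate is the alert signal
            raise
        finally:
            self.total_seconds += time.perf_counter() - start

model = MonitoredModel(lambda x: x * 2)   # stand-in for a real model
print(model.predict(21), model.calls)
```

In production these counters would be exported to the monitoring stack rather than held in memory, but the wrapping pattern is the same.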

Posted 2 weeks ago

Apply

1.0 - 5.0 years

4 - 7 Lacs

Bengaluru

Work from Office

Developing image processing/computer vision applications to be deployed in vision inspection. More specifically: Write OpenCV applications in C++ for dimensional and surface inspection. Train DL models for surface inspection in cases where traditional OpenCV/C++ is not viable. Additional Responsibilities: Developing and optimizing OpenCV applications for embedded devices (Raspberry Pi). Assist in developing DL models for the in-house DL cloud platform. Educational Qualification B.E./B.Tech in any specialization. Skills Python. C++. OpenCV. Linux. PyTorch/TensorFlow. Who can apply Are available for full-time employment with a one-year contract. Can start immediately. Have relevant skills and interests. Perks: Certificate. Letter of recommendation. Job offer. Group Health Insurance. Incentives and bonus as per HR policy.
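Dimensional inspection typically reduces to segmenting the part from the background and measuring its pixel extents. Here is an OpenCV-free NumPy sketch of that core step on a synthetic frame (the production version would be the C++/OpenCV pipeline the posting describes, with calibration from pixels to millimetres):

```python
import numpy as np

# Synthetic 8-bit grayscale frame: dark background with one bright part.
img = np.zeros((60, 80), dtype=np.uint8)
img[20:45, 10:50] = 200  # the "part" occupies rows 20-44, cols 10-49

def measure_part(gray, thresh=128):
    """Threshold the frame, then return the part's bounding-box
    (height, width) in pixels, or None if nothing is found."""
    mask = gray > thresh
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return (ys.max() - ys.min() + 1, xs.max() - xs.min() + 1)

print(measure_part(img))
```

A pass/fail check then compares these measurements against the part's tolerance band.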

Posted 2 weeks ago

Apply

0.0 - 2.0 years

3 - 10 Lacs

Kolkata, West Bengal

Remote

Job Title: Data Scientist / MLOps Engineer (Python, PostgreSQL, MSSQL) Location: Kolkata (Must) Employment Type: Full-Time Experience Level: 1–3 Years About Us: We are seeking a highly motivated and technically strong Data Scientist / MLOps Engineer to join our growing AI & ML team. This role involves the design, development, and deployment of scalable machine learning solutions, with a strong focus on operational excellence, data engineering, and GenAI integration. Key Responsibilities: Build and maintain scalable machine learning pipelines using Python. Deploy and monitor models using MLflow and MLOps stacks. Design and implement data workflows using standard Python libraries such as PySpark. Leverage standard data science libraries (scikit-learn, pandas, NumPy, matplotlib, etc.) for model development and evaluation. Work with GenAI technologies, including Azure OpenAI and other open-source models, for innovative ML applications. Collaborate closely with cross-functional teams to meet business objectives. Handle multiple ML projects simultaneously with robust branching expertise. Must-Have Qualifications: Expertise in Python for data science and backend development. Solid experience with PostgreSQL and MSSQL databases. Hands-on experience with standard data science packages such as Scikit-learn, Pandas, NumPy, and Matplotlib. Experience working with Databricks, MLflow, and Azure. Strong understanding of MLOps frameworks and deployment automation. Prior exposure to FastAPI and GenAI tools such as LangChain or Azure OpenAI is a big plus. Preferred Qualifications: Experience in the finance, legal, or regulatory domain. Working knowledge of clustering algorithms and forecasting techniques. Previous experience in developing reusable AI frameworks or productized ML solutions. Education: B.Tech in Computer Science, Data Science, Mechanical Engineering, or a related field. Why Join Us? Work on cutting-edge ML and GenAI projects.
Be part of a collaborative and forward-thinking team. Opportunity for rapid growth and technical leadership. Job Type: Full-time Pay: ₹344,590.33 - ₹1,050,111.38 per year Benefits: Leave encashment Paid sick time Paid time off Provident Fund Work from home Education: Bachelor's (Required) Experience: Python: 3 years (Required) ML: 2 years (Required) Location: Kolkata, West Bengal (Required) Work Location: In person Application Deadline: 02/08/2025 Expected Start Date: 04/08/2025

Posted 2 weeks ago

Apply

6.0 - 11.0 years

20 - 30 Lacs

Bhopal, Hyderabad, Pune

Hybrid

Hello, greetings from NewVision Software! We are hiring on an immediate basis for the role of Senior / Lead Python Developer + AWS | NewVision Software | Pune, Hyderabad & Bhopal | Full-time. Candidates who can join immediately or within 15 days are preferred. Please find the job details and description below. NewVision Software PUNE HQ OFFICE 701 & 702, Pentagon Tower, P1, Magarpatta City, Hadapsar, Pune, Maharashtra - 411028, India NewVision Software The Hive Corporate Capital, Financial District, Nanakaramguda, Telangana - 500032 NewVision Software IT Plaza, E-8, Bawadiya Kalan Main Rd, near Aura Mall, Gulmohar, Fortune Pride, Shahpura, Bhopal, Madhya Pradesh - 462039 Senior Python and AWS Developer Role Overview: We are looking for a skilled senior Python developer with a strong background in AWS cloud services to join our team. The ideal candidate will be responsible for designing, developing, and maintaining robust backend systems, ensuring high performance and responsiveness to requests from the front end. Responsibilities: Develop, test, and maintain scalable web applications using Python and Django. Design and manage relational databases with PostgreSQL, including schema design and optimization. Build RESTful APIs and integrate with third-party services as needed. Work with AWS services including EC2, EKS, ECR, S3, Glue, Step Functions, EventBridge Rules, Lambda, SQS, SNS, and RDS. Collaborate with front-end developers to deliver seamless end-to-end solutions. Write clean, efficient, and well-documented code following best practices. Implement security and data protection measures in applications. Optimize application performance and troubleshoot issues as they arise. Participate in code reviews, testing, and continuous integration processes. Stay current with the latest trends and advancements in Python, Django, and database technologies. Mentor junior Python developers.
Requirements: 6+ years of professional experience in Python development. Strong proficiency with the Django web framework. Experience working with PostgreSQL, including complex queries and performance tuning. Familiarity with RESTful API design and integration. Strong understanding of OOP, SOLID principles, and design patterns. Strong knowledge of Python multithreading and multiprocessing. Experience with AWS services: S3, Glue, Step Functions, EventBridge Rules, Lambda, SQS, SNS, IAM, Secrets Manager, KMS, and RDS. Understanding of version control systems (Git). Knowledge of security best practices and application deployment. Basic understanding of microservices architecture. Strong problem-solving skills and attention to detail. Excellent communication and collaboration skills. Nice to Have: Experience with Docker, Kubernetes, or other containerization tools. Front-end technologies (React). Experience with CI/CD pipelines and DevOps practices. Experience with infrastructure-as-code tools such as Terraform. Education: Bachelor's degree in computer science, engineering, or a related field (or equivalent experience). Do share your resume with my email address: imran.basha@newvision-software.com Please share your experience details: Total Experience: Relevant Experience: Exp: Python: Yrs, AWS: Yrs, PostgreSQL: Yrs, REST API: Yrs, Django: Current CTC: Expected CTC: Notice / Serving (LWD): Any Offer in hand: LPA Current Location: Preferred Location: Education: Please share your resume and the above details for the hiring process: imran.basha@newvision-software.com
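The multithreading/multiprocessing requirement above is most often exercised through `concurrent.futures`, which gives both models the same interface. A stdlib sketch with an invented I/O-bound stand-in (no real network calls):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    """Stand-in for an I/O-bound call (e.g. an HTTP request to S3 or an API)."""
    return f"payload from {url}"

urls = [f"https://example.com/item/{i}" for i in range(5)]

# Threads overlap the waiting on I/O-bound work; for CPU-bound work,
# swap in ProcessPoolExecutor with the identical map() interface to
# sidestep the GIL.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch, urls))

print(len(results))
```

Knowing when each executor applies (I/O-bound vs CPU-bound) is the usual interview follow-up to this requirement.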

Posted 2 weeks ago

Apply

50.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

About The Opportunity Job Type: Permanent Application Deadline: 31 July 2025 Job Description Title: Senior Test Analyst Department: ISS DELIVERY - DEVELOPMENT - GURGAON Location: INB905E Level 3 We’re proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together - and supporting each other - all over the world. So, join our ISS Delivery team and feel like you’re part of something bigger. About Your Team The Investment Solutions Services (ISS) delivery team provides systems development, implementation, and support services for FIL’s global Investment Management businesses across the asset management lifecycle. We support Fund Managers, Research Analysts, Traders, and Investment Services Operations in all of FIL’s international locations, including London, Hong Kong, and Tokyo. About Your Role You will be joining as a Senior Test Analyst in the QA chapter, responsible for executing testing activities for all applications under IM technology based out of India. Here are the expectations, and how a typical day in the job will look: Understand business needs and analyse requirements and user stories to carry out different testing activities. Collaborate with developers and BAs to understand new features, bug fixes, and changes in the codebase. Create and execute functional as well as automated test cases on different test environments to validate functionality. Log defects in the defect tracker and work with PMs and developers to prioritise and resolve them. Develop and maintain automation scripts, preferably using the Python stack. Deep understanding of databases, both relational and non-relational. Document test cases, results, and any other issues encountered during testing. Attend team meetings and stand-ups to discuss progress, risks, and any issues that affect project deliveries. Stay updated with new tools, techniques, and industry trends.
About You Seasoned software test analyst with 5+ years of hands-on experience. Hands-on experience automating web and backend tests using open-source tools (Playwright, pytest, Selenium, Requests, REST Assured, NumPy, pandas). Proficiency in writing and understanding complex database queries in various databases (Oracle, Snowflake). Good understanding of cloud (AWS, Azure). Finance/investment domain experience is preferable. Trade lifecycle experience in a vendor system such as CRD or Aladdin is preferable. Strong logical reasoning and problem-solving skills. Preferred programming languages: Python and Java. Familiarity with CI/CD tools (e.g., Jenkins) for automating deployment and testing workflows. Feel rewarded For starters, we’ll offer you a comprehensive benefits package. We’ll value your wellbeing and support your development. And we’ll be as flexible as we can about where and when you work – finding a balance that works for all of us. It’s all part of our commitment to making you feel motivated by the work you do and happy to be part of our team. For more about our work, our approach to dynamic working and how you could build your future here, visit careers.fidelityinternational.com.

Posted 2 weeks ago

Apply

50.0 years

5 - 7 Lacs

Gurgaon

On-site

About the Opportunity Job Type: Permanent Application Deadline: 31 July 2025 Job Description Title: Senior Test Analyst Department: ISS DELIVERY - DEVELOPMENT - GURGAON Location: INB905E Level 3 We’re proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together - and supporting each other - all over the world. So, join our ISS Delivery team and feel like you’re part of something bigger. About your team The Investment Solutions Services (ISS) delivery team provides systems development, implementation, and support services for FIL’s global Investment Management businesses across the asset management lifecycle. We support Fund Managers, Research Analysts, Traders, and Investment Services Operations in all of FIL’s international locations, including London, Hong Kong, and Tokyo. About your role You will be joining as a Senior Test Analyst in the QA chapter, responsible for executing testing activities for all applications under IM technology based out of India. Here are the expectations, and how a typical day in the job will look: Understand business needs and analyse requirements and user stories to carry out different testing activities. Collaborate with developers and BAs to understand new features, bug fixes, and changes in the codebase. Create and execute functional as well as automated test cases on different test environments to validate functionality. Log defects in the defect tracker and work with PMs and developers to prioritise and resolve them. Develop and maintain automation scripts, preferably using the Python stack. Deep understanding of databases, both relational and non-relational. Document test cases, results, and any other issues encountered during testing. Attend team meetings and stand-ups to discuss progress, risks, and any issues that affect project deliveries. Stay updated with new tools, techniques, and industry trends.
About You Seasoned software test analyst with 5+ years of hands-on experience. Hands-on experience automating web and backend tests using open-source tools (Playwright, pytest, Selenium, Requests, REST Assured, NumPy, pandas). Proficiency in writing and understanding complex database queries in various databases (Oracle, Snowflake). Good understanding of cloud (AWS, Azure). Finance/investment domain experience is preferable. Trade lifecycle experience in a vendor system such as CRD or Aladdin is preferable. Strong logical reasoning and problem-solving skills. Preferred programming languages: Python and Java. Familiarity with CI/CD tools (e.g., Jenkins) for automating deployment and testing workflows. Feel rewarded For starters, we’ll offer you a comprehensive benefits package. We’ll value your wellbeing and support your development. And we’ll be as flexible as we can about where and when you work – finding a balance that works for all of us. It’s all part of our commitment to making you feel motivated by the work you do and happy to be part of our team. For more about our work, our approach to dynamic working and how you could build your future here, visit careers.fidelityinternational.com.

Posted 2 weeks ago

Apply

3.0 - 5.0 years

9 - 13 Lacs

Gurugram

Work from Office

Job Summary
Synechron is seeking a detail-oriented Data Analyst to leverage advanced data analysis, visualization, and insights in support of our business objectives. The ideal candidate will have a strong background in creating interactive dashboards, performing complex data manipulations using SQL and Python, and automating workflows to drive efficiency. Familiarity with cloud platforms such as AWS is a plus, enabling optimization of data storage and processing solutions. This role enables data-driven decision-making across teams, contributing to strategic growth and operational excellence.

Software Requirements
Required:
PowerBI (or equivalent visualization tools like Streamlit, Dash)
SQL (for data extraction, manipulation, and querying)
Python (for scripting, automation, and advanced analysis)
Data management tools compatible with cloud platforms (e.g., AWS S3, Redshift, or similar)
Preferred:
Cloud platform familiarity, especially AWS services related to data storage and processing
Knowledge of other visualization platforms (Tableau, Looker)
Familiarity with source control systems (e.g., Git)

Overall Responsibilities
Develop, redesign, and maintain interactive dashboards and visualization tools to provide actionable insights.
Perform complex data analysis, transformations, and validation using SQL and Python.
Automate data workflows, reporting, and visualizations to streamline processes.
Collaborate with business teams to understand data needs and translate them into effective visual and analytical solutions.
Support data extraction, cleaning, and validation from various sources, ensuring data accuracy.
Maintain and enhance understanding of cloud environments, especially AWS, to optimize data storage, processing pipelines, and scalability.
Document technical procedures and contribute to best practices for data management and reporting.

Performance Outcomes:
Timely, accurate, and insightful dashboards and reports.
Increased automation reducing manual effort.
Clear communication of insights and data-driven recommendations to stakeholders.

Technical Skills (By Category)
Programming Languages: Essential: SQL, Python. Preferred: R, additional scripting languages.
Databases/Data Management: Essential: relational databases (SQL Server, MySQL, Oracle). Preferred: NoSQL databases like MongoDB; cloud data warehouses (AWS Redshift, Snowflake).
Cloud Technologies: Essential: basic understanding of AWS cloud services (S3, EC2, RDS). Preferred: experience with cloud-native data solutions and deployment.
Frameworks and Libraries: Python: Pandas, NumPy, Matplotlib, Seaborn, Plotly, Streamlit, Dash. Visualization: PowerBI, Tableau (preferred).
Development Tools and Methodologies: version control (Git), automation tools for workflows and reporting, familiarity with Agile methodologies.
Security Protocols: awareness of data security best practices and compliance standards in cloud environments.

Experience Requirements
3-5 years of experience in data analysis, visualization, or related data roles.
Proven ability to deliver insightful dashboards, reports, and analysis.
Experience working across teams and communicating complex insights clearly.
Knowledge of cloud environments like AWS or other cloud providers is desirable.
Experience in a business environment, not necessarily as a full-time developer, but as an analytical influencer.

Day-to-Day Activities
Collaborate with stakeholders to gather requirements and define data visualization strategies.
Design and maintain dashboards using PowerBI, Streamlit, Dash, or similar tools.
Extract, transform, and analyze data using SQL and Python scripts.
Automate recurring workflows and report generation to improve operational efficiency.
Troubleshoot data issues and derive insights to support decision-making.
Monitor and optimize cloud data storage and processing pipelines.
Present findings to business units, translating technical outputs into actionable recommendations.

Qualifications
Bachelor's degree in Computer Science, Data Science, Statistics, or a related field; Master's degree is a plus.
Relevant certifications (e.g., PowerBI, AWS Data Analytics) are advantageous.
Demonstrated experience with data visualization and scripting tools.
Continuous learning mindset to stay updated on new data analysis trends and cloud innovations.

Professional Competencies
Strong analytical and problem-solving skills.
Effective communication, with the ability to explain complex insights clearly.
Collaborative team player with stakeholder management skills.
Adaptability to rapidly changing data or project environments.
Innovative mindset to suggest and implement data-driven solutions.
Organized, self-motivated, and capable of managing multiple priorities efficiently.
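The "extract, transform, and analyze data using SQL and Python" workflow described above can be sketched with the standard library alone. This is a minimal illustration, not the employer's stack: the `orders` table, its columns, and the sample values are all invented, and an in-memory SQLite database stands in for a warehouse such as Redshift.

```python
import sqlite3

# Load sample order records into an in-memory database (stands in for a
# cloud warehouse; table and column names are hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("North", 120.0), ("North", 80.0), ("South", 200.0)],
)

# SQL does the aggregation; Python reshapes the result for a report.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM orders GROUP BY region ORDER BY region"
).fetchall()
report = {region: total for region, total in rows}
print(report)  # {'North': 200.0, 'South': 200.0}
```

In practice the same pattern scales up by swapping the connection object for a warehouse driver and feeding `rows` into pandas or a dashboard tool.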

Posted 2 weeks ago

Apply

0 years

1 - 1 Lacs

Mohali

On-site

About the Role
We are looking for a passionate Data Science fresher who has completed at least 6 months of practical training, internship, or project experience in the data science field. This is an exciting opportunity to apply your analytical and problem-solving skills to real-world datasets while working closely with experienced data scientists and engineers.

Key Responsibilities
Assist in data collection, cleaning, and preprocessing from various sources.
Support the team in building, evaluating, and optimizing ML models.
Perform exploratory data analysis (EDA) to derive insights and patterns.
Work on data visualization dashboards and reports using tools like Power BI, Tableau, or Matplotlib/Seaborn.
Collaborate with senior data scientists and domain experts on ongoing projects.
Document findings, code, and models in a structured manner.
Continuously learn and adopt new techniques, tools, and frameworks.

Required Skills & Qualifications
Education: Bachelor's degree in Computer Science, Statistics, Mathematics, Engineering, or a related field.
Experience: Minimum 6 months of internship/training in data science, analytics, or machine learning.
Technical Skills:
Proficiency in Python (Pandas, NumPy, Scikit-learn, etc.).
Understanding of machine learning algorithms (supervised/unsupervised).
Knowledge of SQL and database concepts.
Familiarity with data visualization tools/libraries.
Basic understanding of statistics and probability.
Soft Skills:
Strong analytical thinking and problem-solving ability.
Good communication and teamwork skills.
Eagerness to learn and grow in a dynamic environment.

Good to Have (Optional)
Exposure to cloud platforms (AWS, GCP, Azure).
Experience with big data tools (Spark, Hadoop).
Knowledge of deep learning frameworks (TensorFlow, PyTorch).

What We Offer
Opportunity to work on real-world data science projects.
Mentorship from experienced professionals in the field.
A collaborative, innovative, and supportive work environment.
Growth path to become a full-time Data Scientist with us.

Job Types: Full-time, Permanent, Fresher
Pay: ₹10,000.00 - ₹15,000.00 per month
Benefits: Health insurance
Schedule: Day shift, fixed shift, Monday to Friday
Application Question(s): Have you completed your 6-month training?
Education: Bachelor's (Preferred)
Language: English (Preferred)
Work Location: In person
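The cleaning-plus-EDA loop a fresher would start with can be shown without any libraries. A hedged sketch: the records, field names, and the decision to drop rows with missing ages are all illustrative, not a prescribed method.

```python
# Toy records with missing values, mimicking a raw export before cleaning.
raw = [
    {"age": 34, "city": "Mohali"},
    {"age": None, "city": "Delhi"},
    {"age": 28, "city": None},
    {"age": 41, "city": "Mohali"},
]

# Cleaning step: drop rows missing the field being analysed.
ages = [r["age"] for r in raw if r["age"] is not None]

# Simple EDA summary: count, mean, min/max, and missing-value tally.
summary = {
    "n": len(ages),
    "mean_age": round(sum(ages) / len(ages), 1),
    "min_age": min(ages),
    "max_age": max(ages),
    "missing_age": sum(1 for r in raw if r["age"] is None),
}
print(summary)
```

With pandas the same summary would come from `df["age"].describe()` and `df["age"].isna().sum()`; the logic is identical.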

Posted 2 weeks ago

Apply

6.0 - 9.0 years

15 - 25 Lacs

Pune, Chennai, Bengaluru

Hybrid

Hi Everyone,

Role: Data Scientist
Experience: 6-9 years
Work Mode: Hybrid
Work Location: Chennai/Bangalore/Pune
Notice Period: Immediate - 30 days

Skills and Experience: 5+ years with data science and ML exposure

Key Accountabilities & Responsibilities
Support the Data Science team with the development of advanced analytics/machine learning/artificial intelligence initiatives.
Analyze large and complex datasets to uncover trends and insights.
Support the development of predictive models and machine learning workflows.
Perform exploratory data analysis to guide product and business decisions.
Collaborate with cross-functional teams, including product, marketing, and engineering.
Assist with the design and maintenance of data pipelines.
Clearly document and communicate analytical findings to technical and non-technical stakeholders.

Basic Qualifications
Qualification in Data Science, Statistics, Computer Science, Mathematics, or a related field.
Proficiency in Python and key data science libraries (e.g., pandas, NumPy, scikit-learn).
Operational understanding of machine learning principles and statistical modeling.
Experience with SQL for data querying.
Strong communication skills and a collaborative mindset.

Preferred Qualifications
Exposure to cloud platforms such as AWS, GCP, or Azure.
Familiarity with data visualization tools like Tableau, Power BI, or matplotlib.
Participation in personal data science projects or online competitions (e.g., Kaggle).
Understanding of version control systems like Git.

Kindly share the following details: updated CV, relevant skills, total experience, current company, current CTC, expected CTC, notice period, current location, preferred location.
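The "predictive models and statistical modeling" work described above reduces, in its simplest form, to fitting a line by ordinary least squares. A minimal sketch with invented sample data; in practice this would be scikit-learn's `LinearRegression` on real features, but the closed-form maths is the same.

```python
# Fit y ≈ slope*x + intercept by ordinary least squares on toy data.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.1, 6.0, 8.0]  # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# slope = covariance(x, y) / variance(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def predict(x):
    return slope * x + intercept

print(round(slope, 2), round(intercept, 2))  # 1.96 0.15
```

The fitted `predict` can then be evaluated on held-out data, which is the heart of the "machine learning workflows" the posting mentions.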

Posted 2 weeks ago

Apply

2.0 years

2 - 9 Lacs

Bengaluru

On-site

Updraft. Helping you make changes that pay off.

Updraft is an award-winning, FCA-authorised, high-growth fintech based in London. Our vision is to revolutionise the way people spend and think about money by automating the day-to-day decisions involved in managing money and mainstream borrowings like credit cards, overdrafts and other loans.
A 360-degree spending view across all your financial accounts (using Open Banking)
A free credit report with tips and guidance to help improve your credit score
Native AI-led personalised financial planning to help users manage money, pay off their debts and improve their credit scores
Intelligent lending products to help reduce the cost of credit

We have built scale and are getting well recognised in the UK fintech ecosystem:
800k+ users of the mobile app, which has helped users swap c. £500m of costly credit-card debt for smarter credit, putting hundreds of thousands on a path to better financial health
The product is highly rated by our customers: 4.8 on Trustpilot, 4.8 on the Play Store, and 4.4 on the iOS Store
Selected for Tech Nation Future Fifty 2025 - a program that recognises and supports successful and innovative scaleups through to IPO - 30% of UK unicorns have come out of this program
Once again featured in the Sifted 100 UK startups - among only 25 companies to have made the list in both 2024 and 2025

We are looking for exceptional talent to join us on our next stage of growth with a compelling proposition - purpose you can feel, impact you can measure, and ownership you'll actually hold. Expect a hybrid, London-hub culture where cross-functional squads tackle real-world problems with cutting-edge tech; generous learning budgets and wellness benefits; and the freedom to experiment, ship, and see your work reflected in customers' financial freedom. At Updraft, you'll help build a fairer credit system.

Role and Responsibilities
Join our Analytics team to deliver cutting-edge solutions.
Support business and operations teams in making better data-driven decisions by ingesting new data sources, creating intuitive dashboards and producing data insights
Build new data processing workflows to extract data from core systems for analytic products
Maintain and improve existing data processing workflows
Contribute to optimising and maintaining the production data pipelines, including system and process improvements
Contribute to the development of analytical products and dashboards with integration of internal and third-party data sources/APIs
Contribute to cataloguing and documentation of data

Requirements
Bachelor's degree in mathematics, statistics, computer science or a related field
2-5 years of experience in data engineering/analysis and related fields
Advanced analytical framework and experience relating data insights to business problems and creating appropriate dashboards
High proficiency in ETL, SQL and database management (mandatory)
Experience with AWS services like Glue, Athena, Redshift, Lambda, S3
Python programming experience using data libraries like pandas and NumPy
Interest in machine learning, logistic regression and emerging solutions for data analytics
You are comfortable working without direct supervision on outcomes that have a direct impact on the business
You are curious about the data and have a desire to ask "why?"

Good to have, but not mandatory:
Experience in a startup or fintech will be considered a great advantage
Awareness of or hands-on experience with ML/AI implementation or MLOps
AWS foundational certification

Benefits
Opportunities to Take Ownership - Work on high-impact projects with real autonomy.
Fast Career Growth - Gain exposure to multiple business areas and advance quickly.
Be at the Forefront of Innovation - Work on cutting-edge technologies or disruptive ideas.
Collaborative & Flat Hierarchy - Work closely with leadership and have a real voice.
Dynamic, Fast-Paced Environment - No two days are the same; challenge yourself every day.
A Mission-Driven Company - Be part of something that makes a difference
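The ETL proficiency this role asks for boils down to extract, transform (clean, coerce, de-duplicate), load. A small stdlib-only sketch under stated assumptions: the CSV snippet, the `account_id`/`balance` fields, and the rule of skipping unparseable balances are all invented for illustration; a production pipeline would read from S3 via Glue or Athena.

```python
import csv
import io

# Hypothetical raw extract with whitespace, a bad value, and a duplicate.
raw_csv = """account_id,balance
A1, 100.50
A2,not_available
A1, 100.50
A3, 250.00
"""

# Extract
rows = list(csv.DictReader(io.StringIO(raw_csv)))

# Transform: strip whitespace, coerce types, drop bad rows, de-duplicate.
seen, clean = set(), []
for r in rows:
    try:
        rec = (r["account_id"].strip(), float(r["balance"].strip()))
    except ValueError:
        continue  # skip rows with unparseable balances
    if rec not in seen:
        seen.add(rec)
        clean.append(rec)

# Load: a dict keyed by account stands in for a warehouse table.
balances = dict(clean)
print(balances)  # {'A1': 100.5, 'A3': 250.0}
```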

Posted 2 weeks ago

Apply

3.0 - 6.0 years

10 - 20 Lacs

Gurugram

Hybrid

Role & Responsibilities
Highly focused individual with a self-driven attitude
Problem solving and logical thinking to automate and improve internal processes
Using tools such as SQL and Python to manage the requirements of different data asset projects
Ability to engage diligently in activities like data cleaning, retrieval, manipulation, analytics and reporting
Using data science and statistical techniques to build machine learning models and deal with textual data
Keeping up-to-date knowledge of the industry and related markets
Ability to multitask, prioritise, and manage time efficiently
Understanding the needs of the hiring organisation or client in order to target solutions to their benefit
Advanced speaking and writing skills for effective communication
Ability to work in cross-functional teams, demonstrating a high level of commitment and coordination
Attention to detail and commitment to accuracy in deliverables
A sense of ownership towards assigned tasks
Ability to keep sensitive business information confidential
Contribute positively and extensively towards building the organisation's reputation, brand and operational excellence

Preferred candidate profile
3-6 years of relevant experience in data science
Advanced knowledge of statistics and basics of machine learning
Experienced in dealing with textual data and using natural language processing techniques
Ability to conduct analysis to extract actionable insights
Technical skills in Python (NumPy, Pandas, NLTK, transformers, spaCy), SQL and other programming languages for dealing with large datasets
Experienced in data cleaning, manipulation, feature engineering and building models
Experienced in the end-to-end development of a data science project
Strong interpersonal skills and extremely resourceful
Proven ability to complete assigned tasks according to the outlined scope and timeline
Good language, communication and writing skills in English
Expertise in using tools like MS Office (PowerPoint, Excel and Word)
Graduate or postgraduate from a reputed college or university
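The textual-data work listed above (cleaning, tokenising, counting) typically starts with a regex pass before any NLTK or spaCy model is applied. A minimal sketch; the sample sentence is invented.

```python
import re
from collections import Counter

# Minimal text-cleaning step: lowercase, keep only alphabetic runs via
# regex, then count token frequencies.
raw = "Pricing data: the Q3 report shows pricing UP 12%, pricing down in Q4!"

tokens = re.findall(r"[a-z]+", raw.lower())
freq = Counter(tokens)

print(freq["pricing"], freq["q"])  # 3 2
```

Note the side effect of the crude regex: "Q3"/"Q4" degrade to the token "q", which is exactly the kind of noise a real pipeline would handle with a better tokeniser or a stopword/number rule.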

Posted 2 weeks ago

Apply

1.0 years

3 - 4 Lacs

India

On-site

About Us: Red & White Education Pvt. Ltd., established in 2008, is Gujarat's top NSDC & ISO-certified institute focused on skill-based education and global employability.

Role Overview: We're hiring a full-time, on-site AI, Machine Learning, and Data Science Faculty/Trainer with strong communication skills and a passion for teaching.

Key Responsibilities:
Deliver high-quality lectures on AI, Machine Learning, and Data Science.
Design and update course materials, assignments, and projects.
Guide students on hands-on projects, real-world applications, and research work.
Provide mentorship and support for student learning and career development.
Stay updated with the latest trends and advancements in AI/ML and Data Science.
Conduct assessments, evaluate student progress, and provide feedback.
Participate in curriculum development and improvements.

Skills & Tools:
Core Skills: ML, Deep Learning, NLP, Computer Vision, Business Intelligence, AI Model Development, Business Analysis.
Programming: Python, SQL (must), Pandas, NumPy, Excel.
ML & AI Tools: Scikit-learn (must), XGBoost, LightGBM, TensorFlow, PyTorch (must), Keras, Hugging Face.
Data Visualization: Tableau, Power BI (must), Matplotlib, Seaborn, Plotly.
NLP & CV: Transformers, BERT, GPT, OpenCV, YOLO, Detectron2.
Advanced AI: Transfer Learning, Generative AI, Business Case Studies.

Education & Experience Requirements:
Bachelor's/Master's/Ph.D. in Computer Science, AI, Data Science, or a related field.
Minimum 1+ years of teaching or industry experience in AI/ML and Data Science.
Hands-on experience with Python, SQL, TensorFlow, PyTorch, and other AI/ML tools.
Practical exposure to real-world AI applications, model deployment, and business analytics.

For further information, please feel free to contact us at 7862813693 or via email at career@rnwmultimedia.edu.in

Job Types: Full-time, Permanent
Pay: ₹30,000.00 - ₹35,000.00 per month
Benefits: Flexible schedule, leave encashment, paid sick time, paid time off, Provident Fund
Schedule: Day shift
Supplemental Pay: Performance bonus, yearly bonus
Experience: Teaching/Mentoring: 1 year (Required); AI: 1 year (Required); ML: 1 year (Required); Data Science: 1 year (Required)
Work Location: In person

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description
Comfort with Python project management best practices (use of setup.py, logging, pytest, relative module imports, Sphinx docs, etc.)
Familiarity with GitHub (clone, fetch, pull/push, raising issues and PRs, etc.)
High familiarity with the use of DL theory/practices in NLP applications
Comfort coding with Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, scikit-learn, NumPy and Pandas
Comfort using two or more open-source NLP modules like spaCy, TorchText, fastai.text, farm-haystack, and others
Knowledge of fundamental text data processing (use of regex, token/word analysis, spelling correction/noise reduction in text, segmenting noisy unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.)
Real-world experience with BERT or other transformer fine-tuned models (sequence classification, NER or QA), from data preparation, model creation and inference through deployment
Use of GCP services like BigQuery, Cloud Functions, Cloud Run, Cloud Build, Vertex AI
Good working knowledge of other open-source packages to benchmark and derive summaries
Experience using GPU/CPU on cloud and on-prem infrastructure
Skills to leverage cloud platforms for Data Engineering, Big Data and ML needs
Use of Docker (experience with experimental Docker features, docker-compose, etc.)
Familiarity with orchestration tools such as Airflow, Kubeflow
Experience with CI/CD and infrastructure-as-code tools like Terraform
Kubernetes or any other containerization tool, with experience in Helm, Argo Workflows, etc.
Ability to develop APIs with compliant, ethical, secure and safe AI tooling
Good UI skills to visualize and build better applications using Gradio, Dash, Streamlit, React, Django, etc.
A deeper understanding of JavaScript, CSS, Angular, HTML, etc. is a plus

Responsibilities
Design NLP/LLM/GenAI applications/products following robust coding practices
Explore state-of-the-art models/techniques so that they can be applied to automotive industry use cases
Conduct ML experiments to train/infer models; if need be, build models that abide by memory and latency restrictions
Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes tools
Showcase NLP/LLM/GenAI applications to users in the best way possible through web frameworks (Dash, Plotly, Streamlit, etc.)
Converge multiple bots into super apps using LLMs with multimodality
Develop agentic workflows using AutoGen, Agent Builder, LangGraph
Build modular AI/ML products that can be consumed at scale

Data Engineering:
Skills to perform distributed computing (specifically parallelism and scalability in data processing, modeling and inference through Spark, Dask, or RAPIDS cuDF)
Ability to build Python-based APIs (e.g., use of FastAPI, Flask or Django)
Experience with Elasticsearch, Apache Solr and vector databases is a plus

Qualifications
Education: Bachelor's or Master's degree in Computer Science, Engineering, Maths or Science
Completion of modern NLP/LLM courses or participation in open competitions is also welcomed.
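One recurring constraint above is running inference "under memory and latency restrictions": rather than feeding an entire corpus to a model at once, documents are processed in fixed-size batches. A hedged sketch with a dummy model; `run_model` is a stand-in for a real transformer call, not any particular library's API.

```python
# Yield fixed-size batches so only batch_size documents are in flight at once.
def batched(items, batch_size):
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

# Stand-in for transformer inference: "predict" the text length.
def run_model(batch):
    return [len(text) for text in batch]

docs = ["a", "bb", "ccc", "dddd", "eeeee"]
preds = [p for batch in batched(docs, 2) for p in run_model(batch)]
print(preds)  # [1, 2, 3, 4, 5]
```

The same generator shape works whether `run_model` wraps a Hugging Face pipeline, a Vertex AI endpoint, or a local PyTorch model; only the batch size changes to fit the GPU.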

Posted 2 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At Umami Bioworks, we are a leading bioplatform for the development and production of sustainable planetary biosolutions. Through the synthesis of machine learning, multi-omics biomarkers, and digital twins, UMAMI has established market-leading capability for discovery and development of cultivated bioproducts that can seamlessly transition to manufacturing with UMAMI's modular, automated, plug-and-play production solution. By partnering with market leaders as their biomanufacturing solution provider, UMAMI is democratizing access to sustainable blue bioeconomy solutions that address a wide range of global challenges.

We're a venture-backed biotech startup located in Singapore where some of the world's smartest, most passionate people are pioneering a sustainable food future that is attractive and accessible to people around the world. We are united by our collective drive to ask tough questions, take on challenging problems, and apply cutting-edge science and engineering to create a better future for humanity. At Umami Bioworks, you will be encouraged to dream big and will have the freedom to create, invent, and do the best, most impactful work of your career.

Umami Bioworks is looking to hire an inquisitive, innovative, and independent Machine Learning Engineer to join our R&D team in Bangalore, India, to develop scalable, modular ML infrastructure integrating predictive and optimization models across biological and product domains. The role focuses on orchestrating models for media formulation, bioprocess tuning, metabolic modeling, and sensory analysis to drive data-informed R&D. The ideal candidate combines strong software engineering skills with multi-model system experience, collaborating closely with researchers to abstract biological complexity and enhance predictive accuracy.
Responsibilities
Design and build the overall architecture for a multi-model ML system that integrates distinct models (e.g., media prediction, bioprocess optimization, sensory profile, GEM-based outputs) into a unified decision pipeline
Develop robust interfaces between sub-models to enable modularity, information flow, and cross-validation across stages (e.g., outputs of one model feeding into another)
Implement model orchestration logic to allow conditional routing, fallback mechanisms, and ensemble strategies across different models
Build and maintain pipelines for training, testing, and deploying multiple models across different data domains
Optimize inference efficiency and reproducibility by designing clean APIs and containerized deployments
Translate conceptual product flow into technical architecture diagrams, integration roadmaps, and modular codebases
Implement model monitoring and versioning infrastructure to track performance drift, flag outliers, and allow comparison across iterations
Collaborate with data engineers and researchers to abstract away biological complexity and ensure a smooth ML-only engineering focus
Lead efforts to refactor and scale ML infrastructure for future integrations (e.g., generative layers, reinforcement learning modules)

Qualifications
Bachelor's or Master's degree in Computer Science, Machine Learning, Computational Biology, Data Science, or a related field
Proven experience developing and deploying multi-model machine learning systems in a scientific or numerical domain
Exposure to hybrid modeling approaches and/or reinforcement learning strategies

Experience
Experience with multi-model systems
Worked with numerical/scientific (multi-modal) datasets
Hybrid modelling and/or RL (AI systems)

Core Technical Skills
Machine Learning Frameworks: PyTorch, TensorFlow, scikit-learn, XGBoost, CatBoost
Model Orchestration: MLflow, Prefect, Airflow
Multi-model Systems: Ensemble learning, model stacking, conditional pipelines
Reinforcement Learning: RLlib, Stable-Baselines3
Optimization Libraries: Optuna, Hyperopt, GPyOpt
Numerical & Scientific Computing: NumPy, SciPy, pandas
Containerization & Deployment: Docker, FastAPI
Workflow Management: Snakemake, Nextflow
ETL & Data Pipelines: pandas pipelines, PySpark
Data Versioning: Git
API Design for modular ML blocks

You will work directly with other members of our small but growing team to do cutting-edge science and will have the autonomy to test new ideas and identify better ways to do things.
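The conditional-routing-with-fallback orchestration described above can be sketched with plain callables. Illustrative only: the sub-model names, the feature keys (`glucose`, `ph`), and the dummy formulas are all invented; real sub-models would be trained PyTorch or scikit-learn estimators behind the same interface.

```python
# Each sub-model is a callable taking a feature dict and returning a result.
def media_model(x):
    return {"prediction": x["glucose"] * 0.8, "source": "media"}

def bioprocess_model(x):
    return {"prediction": x["ph"] * 2.0, "source": "bioprocess"}

def fallback_model(x):
    return {"prediction": 0.0, "source": "fallback"}

# Router: conditional dispatch on available features, with a fallback path.
def route(x):
    try:
        if "glucose" in x:
            return media_model(x)
        if "ph" in x:
            return bioprocess_model(x)
        raise KeyError("no matching sub-model")
    except KeyError:
        return fallback_model(x)

print(route({"glucose": 10.0})["source"])  # media
print(route({"ph": 7.0})["prediction"])    # 14.0
print(route({"temp": 30.0})["source"])     # fallback
```

Because every sub-model shares one call signature, swapping in an ensemble (average several models' predictions) or chaining stages (one model's output feeding the next) changes only the router, not the callers.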

Posted 2 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Strong proficiency in Python, with a deep understanding of object-oriented programming (OOP) principles
Experience designing and implementing modular, reusable, and scalable architectures
Experience using version control systems like Git and Git workflows
Familiarity with UML and architectural modeling tools
Expertise in design patterns (e.g., Factory, Singleton, Observer)
Experience with Python packages: NumPy, Pandas, Matplotlib, PyTest, black, flake8
Solid grasp of the software development lifecycle (SDLC) and Agile methodologies
Knowledge of unit testing frameworks (e.g., pytest) and mocking
Interpretation of VBA macros
Performing code quality checks
Ability to understand engineering workflows and requirements and communicate with stakeholders

Nice to have: knowledge of stress engineering tools like ISAMI

Fluent in written and spoken English for global collaboration
Produces high-quality documentation, reports, and presentations
Confident in leading discussions and negotiations in English
Skilled in writing clear and concise emails, user stories, and technical specs
Comfortable presenting to international stakeholders and executive audiences
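Of the design patterns listed above, Observer is the easiest to show compactly in Python: subscribers register callbacks on a subject and are notified on state changes. A minimal sketch; the `Subject` class and the event name are illustrative, not from any specific codebase.

```python
# Observer pattern: a subject holds callbacks and notifies them of events.
class Subject:
    def __init__(self):
        self._observers = []

    def subscribe(self, callback):
        self._observers.append(callback)

    def notify(self, event):
        for cb in self._observers:
            cb(event)

received = []
subject = Subject()
subject.subscribe(received.append)                   # observer 1: record raw
subject.subscribe(lambda e: received.append(e.upper()))  # observer 2: record upper-cased
subject.notify("load_case_done")
print(received)  # ['load_case_done', 'LOAD_CASE_DONE']
```

The same decoupling idea is what makes an architecture "modular and reusable": the subject never needs to know who is listening.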

Posted 2 weeks ago

Apply

5.0 years

10 - 15 Lacs

Bengaluru, Karnataka, India

On-site

This role is for one of Weekday's clients
Salary range: Rs 1000000 - Rs 1500000 (i.e., INR 10-15 LPA)
Min Experience: 5 years
Location: Bengaluru
Job Type: full-time

Requirements
We are seeking a highly skilled and experienced Computer Vision Engineer to join our growing AI team. This role is ideal for someone with strong expertise in deep learning and a solid background in real-time video analytics, model deployment, and computer vision applications. You'll be responsible for developing scalable computer vision pipelines and deploying them across cloud and edge environments, helping build intelligent visual systems that solve real-world problems.

Key Responsibilities:
Model Development & Training: Design, train, and optimize deep learning models for object detection, segmentation, and tracking using frameworks like YOLO, UNet, Mask R-CNN, and Deep SORT.
Computer Vision Applications: Build robust pipelines for computer vision applications including image classification, real-time object tracking, and video analytics using OpenCV, NumPy, and TensorFlow/PyTorch.
Deployment & Optimization: Deploy trained models on Linux-based GPU systems and edge devices (e.g., Jetson Nano, Google Coral), ensuring low-latency performance and efficient hardware utilization.
Real-Time Inference: Implement and optimize real-time inference systems, ensuring minimal delay in video processing pipelines.
Model Management: Utilize tools like Docker, Git, and MLflow (or similar) for version control, environment management, and model lifecycle tracking.
Collaboration & Documentation: Work cross-functionally with hardware, backend, and software teams. Document designs, architectures, and research findings to ensure reproducibility and scalability.

Technical Expertise Required:
Languages & Libraries: Advanced proficiency in Python and solid experience with OpenCV, NumPy, and other image processing libraries.
Deep Learning Frameworks: Hands-on experience with TensorFlow, PyTorch, and integration with model training pipelines.
Computer Vision Models: Object detection: YOLO (all versions); segmentation: UNet, Mask R-CNN; tracking: Deep SORT or similar.
Deployment Skills: Real-time video analytics implementation and optimization; experience with Docker for containerization; version control using Git; model tracking using MLflow or comparable tools.
Platform Experience: Proven experience in deploying models on Linux-based GPU environments and edge devices (e.g., NVIDIA Jetson family, Coral TPU).

Professional & Educational Requirements:
Education: B.E./B.Tech/M.Tech in Computer Science, Electrical Engineering, or a related discipline.
Experience: Minimum 5 years of industry experience in AI/ML with a strong focus on computer vision and system-level design; proven portfolio of production-level projects in image/video processing or real-time systems.

Preferred Qualities:
Strong problem-solving and debugging skills
Excellent communication and teamwork capabilities
A passion for building smart, scalable vision systems
A proactive and independent approach to research and implementation
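A building block common to the detection and tracking work above (YOLO post-processing, Deep SORT matching, evaluation) is intersection-over-union between two bounding boxes. A minimal sketch with boxes as `(x1, y1, x2, y2)` corner tuples, a common but not universal convention.

```python
# IoU between axis-aligned boxes given as (x1, y1, x2, y2) corner tuples.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # 0 if boxes are disjoint
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes offset by 5 pixels overlap in a 5x5 region:
# IoU = 25 / (100 + 100 - 25) ≈ 0.143
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```

The same scalar drives non-maximum suppression thresholds in detectors and the detection-to-track association cost in trackers like Deep SORT.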

Posted 2 weeks ago

Apply

2.5 - 4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description: About Us At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us! Global Business Services Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services. Process Overview Global Business Services delivers control functions responsible for providing assurance over data and processes used to record risk, P/L, balance sheet and financial results in support of Global Markets, Corporate Treasury and Corporate Investments businesses. 
There are seven GMO core functions that are critical in ensuring business process control: New Business Development, Trade Capture Substantiation, P&L Validation, Risk/Position Validation, Balance Sheet Substantiation, Event Monitoring, and Front-Back Process Oversight. The QS (Quantitative Services) team is part of GBS. The Data Science Group is involved in the development, testing and monitoring of machine learning models.

Job Description
The associate would be involved in the entire machine learning development lifecycle.

Responsibilities
Collaborate with stakeholders and identify opportunities for leveraging large amounts of unstructured financial data.
Analyze and model structured data using statistical methods, and implement the algorithms and software needed to perform analyses.
Undertake preprocessing of structured/unstructured data and analyze information to discover trends/patterns.
Present information using data visualization techniques.
Coordinate with different teams to implement the models and monitor outcomes.
Build machine learning models to automate processes and reduce operational overhead.
Perform continuous model monitoring and model support.
Build processes and tools for analyzing model performance and data accuracy.

Requirements
Education: Degree in Applied Math, Statistics, Computer Science or any other quantitative field from premier institutes
Experience: 2.5 to 4 years
Certifications (if any): NA
Foundational Skills:
Strong technical skills, including experience using Python/R and SQL or other object-oriented languages
Familiarity with various machine learning algorithms and modelling techniques, e.g., regression (linear and logit) and classification (SVM, Naive Bayes, etc.)
Banking domain experience is preferred but not required
Strong analytical/math skills (e.g., statistics, algebra)
Familiarity with pandas, NumPy, scikit-learn and SciPy
Basics of visualization tools and techniques, e.g., matplotlib
Desired Skills: Problem-solving aptitude; excellent communication and presentation skills
Total experience of not more than 5 years, with 2 years of relevant experience preferred
Work Timings: 12:00-9:00 PM
Job Location: Hyderabad
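The "continuous model monitoring" responsibility above often starts as something simple: compare accuracy on a recent window of predictions against a baseline and raise a flag when it degrades. A hedged sketch; the 0.10 tolerance, the window contents, and the binary-label setup are illustrative choices, not bank policy.

```python
# Fraction of recent predictions matching their true labels.
def window_accuracy(labels, preds):
    return sum(l == p for l, p in zip(labels, preds)) / len(labels)

# Flag drift when recent accuracy falls more than `tolerance` below baseline.
def drift_alert(baseline_acc, labels, preds, tolerance=0.10):
    recent = window_accuracy(labels, preds)
    return recent < baseline_acc - tolerance, recent

alert, acc = drift_alert(0.90, [1, 0, 1, 1, 0], [1, 1, 0, 1, 1])
print(alert, acc)  # True 0.4
```

Production monitoring would add per-segment breakdowns, input-distribution checks, and persistence of each window's metrics for model-to-model comparison, but the alerting core is this comparison.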

Posted 2 weeks ago

Apply
cta

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies