Jobs
Interviews

497 Scipy Jobs - Page 4

Set up a job alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

The Data Scientist - Clinical Data Extraction & AI Integration role on our healthcare technology team requires 3-6 years of experience, with a primary focus on medical document processing and data extraction systems. You will work with advanced AI technologies to build solutions that extract crucial information from clinical documents, improving healthcare data workflows and patient care outcomes.

Key responsibilities:
- Design and implement statistical models for medical data quality assessment.
- Develop predictive algorithms for encounter classification and validation.
- Build machine learning pipelines for document pattern recognition and create data-driven insights from clinical document structures.
- Implement feature engineering for medical terminology extraction.
- Apply natural language processing (NLP) techniques to clinical text and develop statistical validation frameworks for extracted medical data.
- Build anomaly detection systems for medical document processing.
- Create predictive models for discharge date estimation and encounter duration, and implement clustering algorithms for provider and encounter classification.

AI & LLM integration:
- Integrate and optimize Large Language Models via AWS Bedrock and API services.
- Design and refine AI prompts for clinical content extraction with high accuracy.
- Implement fallback logic and error handling for AI-powered extraction systems.
- Develop pattern-matching algorithms for medical terminology and create validation layers for AI-extracted medical information.

Healthcare domain expertise is crucial for this role. You will work closely with medical document structures, implement healthcare-specific validation rules, handle medical terminology extraction, and conduct clinical context analysis. Ensuring HIPAA compliance and adhering to data security best practices are also part of your responsibilities.

Required skills:
- Programming languages: Python 3.8+, R, SQL, JSON.
- Data science tools: pandas, NumPy, SciPy, scikit-learn, spaCy, NLTK.
- Desirable: ML frameworks (TensorFlow, PyTorch, Transformers/Hugging Face) and visualization tools (Matplotlib, Seaborn, Plotly, Tableau, Power BI).
- Advantageous: AI platforms (AWS Bedrock, Anthropic Claude, OpenAI APIs) and cloud services (AWS SageMaker, S3, Lambda, Bedrock).
- Beneficial: research tools such as Jupyter notebooks, Git, Docker, and MLflow.
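As an illustration of the kind of statistical validation the posting describes, here is a minimal sketch that flags anomalous encounter durations with a z-score test. The function name, sample data, and threshold are our own choices for illustration, not from the job description:

```python
import numpy as np
from scipy import stats

def flag_outlier_durations(durations_days, z_thresh=2.0):
    """Return a boolean mask marking encounter durations whose absolute
    z-score exceeds the threshold -- candidates for manual review."""
    z = np.abs(stats.zscore(durations_days))
    return z > z_thresh

# Six routine stays and one 60-day stay: only the last is flagged.
mask = flag_outlier_durations([2, 3, 2, 4, 3, 2, 60])
```

Note that on small samples a plain z-score is bounded and can mask extreme values, which is why the threshold here is set lower than the textbook 3.0; a production validation layer would likely use robust statistics instead.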

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

This internship offers a unique opportunity to contribute to cutting-edge tools in aviation safety and data analysis, leveraging the power of Machine Learning. Your mission is to develop an interactive tool for mapping Flight Data Recorder (FDR) parameters to aircraft systems and subsystems. To excel, you will need to become familiar with the system-wide functionality of Airbus aircraft. Your tasks will include:
- Codifying the potentially affected parameters for a given incident to map the fault propagation tree.
- Researching and implementing the most appropriate Machine Learning algorithm(s) for this time-series application.
- Building data visualization and user-interface tools.

By the end of the internship, you will have delivered a robust tool that provides an exhaustive list of FDR parameters along with a practical representation of how, and which, systems are impacted by a particular fault. You will also have provided a first-order proof of concept of the applicability of Machine Learning techniques in this space, demonstrated through a limited-scope use case.

Required skills:
- Strong programming skills in Python. Knowledge of key machine learning libraries (scikit-learn, TensorFlow) and scientific computing libraries (e.g., SciPy) is an asset.
- Experience with Machine Learning (ML) for time-series applications, including exposure to supervised and unsupervised learning algorithms and an understanding of data preprocessing, feature engineering, and data visualization methods.
- Experience with data visualization libraries (e.g., Matplotlib, Plotly, D3.js).
- Familiarity with UI/UX design principles for intuitive interfaces is a plus.
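As a crude first cut at the parameter-mapping idea described above, one could screen which FDR parameter traces co-vary with a faulty parameter. The parameter names, synthetic traces, and threshold below are invented for illustration:

```python
import numpy as np

def correlated_parameters(fdr, fault_param, threshold=0.8):
    """Given a dict of FDR parameter traces (name -> 1-D array), return
    the parameters whose absolute Pearson correlation with the faulty
    parameter exceeds the threshold."""
    ref = np.asarray(fdr[fault_param])
    hits = []
    for name, trace in fdr.items():
        if name == fault_param:
            continue
        r = np.corrcoef(ref, np.asarray(trace))[0, 1]
        if abs(r) >= threshold:
            hits.append(name)
    return hits

# Synthetic traces: the actuator follows hydraulic pressure almost
# exactly; cabin temperature oscillates independently.
t = np.linspace(0, 1, 100)
fdr = {
    "hyd_pressure": np.sin(2 * np.pi * t),
    "actuator_pos": np.sin(2 * np.pi * t) + 0.01 * np.cos(7 * t),
    "cabin_temp": np.cos(10 * np.pi * t),
}
hits = correlated_parameters(fdr, "hyd_pressure")
```

A real fault-propagation tree would of course need lagged and nonlinear dependence measures, not just instantaneous correlation, but this shows the shape of the screening step.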

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Uttar Pradesh

On-site

We are seeking an enthusiastic and skilled Python Developer with a passion for AI-based application development to join our expanding technology team. This role sits at the intersection of software engineering and data analytics, contributing to cutting-edge AI-driven solutions with real business impact. If you have a strong foundation in Python, a knack for problem-solving, and a keen interest in building intelligent systems, we are excited to meet you!

As a Python Developer at ARDEM Data Services Private Limited, your key responsibilities will include:
- Developing and deploying AI-focused applications using Python and associated frameworks.
- Collaborating with Developers, Product Owners, and Business Analysts to design and implement machine learning pipelines.
- Creating interactive dashboards and data visualizations that deliver actionable insights.
- Automating data collection, transformation, and processing tasks.
- Using SQL for data extraction, manipulation, and database management.
- Applying statistical methods and algorithms to extract insights from large datasets.

The ideal candidate should have:
- 2-3 years of experience as a Python Developer, with a robust portfolio of relevant projects.
- A Bachelor's degree in Computer Science, Data Science, or a related technical field.
- In-depth knowledge of Python, including frameworks and libraries such as NumPy, Pandas, SciPy, and PyTorch.
- Proficiency in front-end technologies such as HTML, CSS, and JavaScript.
- Familiarity with SQL and NoSQL databases and their best practices.
- Excellent communication and team-building skills.
- Strong problem-solving abilities with a focus on innovation and self-learning.
- Knowledge of cloud platforms such as AWS (a plus).

Technical requirements:
- Laptop or desktop: Windows (i5 or higher, 8 GB RAM minimum)
- Screen: 14 inches, Full HD (1920x1080)
- Internet speed: 100 Mbps or higher

About ARDEM: ARDEM is a prominent Business Process Outsourcing and Business Process Automation service provider with a successful track record spanning over two decades. We deliver outsourcing and automation services to clients in the USA and Canada, focusing on continuous innovation and excellence, and we are committed to becoming the leading company in our field by consistently delivering top-notch services to our customers.

Please note: ARDEM will never request personal or banking information during the hiring process for any data entry/processing roles. Any communication claiming to offer work-from-home jobs on behalf of ARDEM Incorporated is fraudulent; please disregard such messages and refer to ARDEM's Careers page for genuine job opportunities. We apologize for any inconvenience caused by such deceptive practices.
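The data collection and transformation automation this role describes can be sketched with pandas. The invoice schema and helper name below are hypothetical, chosen only to illustrate a typical clean-then-aggregate task:

```python
import pandas as pd

def summarize_invoices(df: pd.DataFrame) -> pd.DataFrame:
    """Clean a raw invoice extract and aggregate totals per client."""
    return (
        df.dropna(subset=["client", "amount"])               # drop incomplete rows
          .assign(amount=lambda d: pd.to_numeric(d["amount"],
                                                 errors="coerce"))  # strings -> numbers
          .dropna(subset=["amount"])                         # drop unparseable amounts
          .groupby("client", as_index=False)["amount"].sum() # total per client
    )

raw = pd.DataFrame({
    "client": ["Acme", "Acme", "Globex", None],
    "amount": ["100.5", "99.5", "250", "10"],
})
totals = summarize_invoices(raw)  # Acme: 200.0, Globex: 250.0
```

In practice the same transformation would typically read from and write back to a SQL database rather than in-memory frames.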

Posted 1 week ago

Apply

1.0 years

5 - 8 Lacs

Bhubaneswar

On-site

Company introduction: iServeU is a modern banking infrastructure provider in the APAC region, empowering financial enterprises with embedded fintech solutions for their customers. iServeU is one of the few partners certified by the National Payments Corporation of India (NPCI) and VISA for various products, and provides a cloud-native, microservices-enabled, distributed platform with over 5,000 possible product configurations and a low-code/no-code interface to banks, NBFCs, fintechs, and other regulated entities.
- We process around 2,500 transactions per second by leveraging distributed, auto-scaling technology such as Kubernetes (K8s).
- Our core platform comprises 1,200+ microservices.
- Our customer list ranges from fintech start-ups and top-tier private banks to PSU banks. We operate in five countries and help customers constantly change the way financial institutions operate and innovate.
- Our solutions currently empower over 20 banks and 250+ enterprises across India and abroad.
- Our platform seamlessly manages the entire transaction lifecycle, including withdrawals, deposits, transfers, payments, and lending, through channels such as digital, branch, and agents.

Our team of 500+ employees, over 80% of whom are in technology roles, is spread across offices in Bhubaneswar, Bangalore, and Delhi. We have raised $8 million in funding to support our growth and innovation. For more details visit: www.iserveu.in

Job position: Research Assistant
Location: Bhubaneswar
Reports to: CTO

Job summary: We are seeking a highly motivated and detail-oriented Research Assistant to support our ongoing research projects. The successful candidate will play a crucial role in various stages of the research lifecycle, from data collection and analysis to literature review and administrative support. This position is ideal for an organized individual with strong analytical skills and a passion for [specific research area, if applicable, e.g., artificial intelligence, data science, embedded systems, cybersecurity, cryptography, blockchain].

Key responsibilities:
- Literature review & information gathering: Conduct comprehensive literature searches using various databases and resources. Summarize, synthesize, and critically evaluate relevant research papers, articles, and reports. Maintain an organized database of research materials and citations.
- Data collection & management: Assist in the design and development of data collection instruments (e.g., surveys, experimental protocols, data pipelines). Collect, organize, and manage research data, ensuring accuracy, completeness, and adherence to ethical guidelines. Maintain meticulous records of research activities and data sources. [If applicable: Develop scripts for data extraction, perform data cleaning, manage large datasets.]
- Data analysis & interpretation: Assist with preliminary data analysis using statistical software (e.g., R, Python, MATLAB) or specialized computational tools. Generate tables, charts, and graphs to visualize data findings. Contribute to the interpretation of results and identification of key insights. Summarize research papers for easy consumption.
- Programming & prototyping: Assist in developing and testing software prototypes, algorithms, or models relevant to the research. Write clean, well-documented, and efficient code. Debug and troubleshoot technical issues.
- Report writing & dissemination: Assist in drafting, editing, and formatting research reports, presentations, and manuscripts. Prepare summaries of findings for internal and external communication. Ensure all written materials adhere to academic/professional standards and guidelines.
- Project coordination & administrative support: Assist with the day-to-day coordination of research projects, including scheduling meetings, managing timelines, and tracking progress. Handle general administrative tasks as needed to support the research team. Ensure compliance with all relevant research protocols, ethical guidelines, and institutional policies.

Required qualifications:
- Bachelor's degree in Computer Science, Software Engineering, Data Science, or a closely related technical field.
- 1-2 years of experience in a research setting (academic projects, internships, or previous research assistant roles count).
- Under 28 years of age; candidates enrolled in PhD programs may apply.
- Strong analytical and critical thinking skills.
- Excellent written and verbal communication skills.
- Proficiency in Microsoft Office Suite (Word, Excel, PowerPoint).
- High level of attention to detail and accuracy in data handling and record-keeping.
- Ability to work independently and collaboratively within a team environment.
- Strong organizational and time management skills, with the ability to manage multiple tasks simultaneously.
- Demonstrated ability to learn new software and research methodologies quickly.

Preferred qualifications:
- Experience with programming languages commonly used in research (e.g., Python, Java, C++).
- Familiarity with data analysis libraries (e.g., Pandas, NumPy, SciPy) or machine learning frameworks (e.g., TensorFlow, PyTorch).
- Experience with version control systems (e.g., Git).
- Familiarity with research ethics and human/animal subject protection protocols (IRB/IACUC).
- Experience in computational modeling, algorithm design, data mining, machine learning, or software development for research.
- Prior experience with cloud computing platforms (e.g., AWS, GCP, Azure) for research purposes, or with specialized research software/tools.
- A strong interest in [specific sub-field of research, e.g., Artificial Intelligence, Machine Learning, Cybersecurity, Human-Computer Interaction, Data Science, Computer Vision, Natural Language Processing].
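The preliminary statistical analysis mentioned above (e.g., with SciPy) might look like the following minimal sketch; the measurements and conditions are made up purely for illustration:

```python
from scipy import stats

# Hypothetical measurements from two experimental conditions.
control = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2]
treated = [13.0, 13.4, 12.9, 13.2, 13.1, 13.3]

# Two-sample t-test: is the difference in means statistically significant?
t_stat, p_value = stats.ttest_ind(control, treated)
significant = p_value < 0.05
```

A research assistant would typically pair such a test with a visualization of the two distributions and a check of the test's assumptions (normality, equal variances) before reporting the result.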

Posted 1 week ago

Apply


6.0 - 10.0 years

0 Lacs

Haryana

On-site

You will be responsible for preparing data, developing models, testing them, and deploying them. This includes designing machine learning systems and self-running artificial intelligence (AI) software to automate predictive models, and ensuring that algorithms generate accurate user recommendations. You will also turn unstructured data into useful information, for example by auto-tagging images and converting text to speech. Solving complex problems with multi-layered data sets and optimizing existing machine learning libraries and frameworks will be part of your daily tasks. You will develop machine learning algorithms to analyze large volumes of historical data and make predictions, run tests, perform statistical analysis, interpret the results, and document machine learning processes.

As a Lead Engineer in ML and Data Engineering, you will oversee the technologies, tools, and techniques used within the team and collaborate with the team on designs driven by business requirements. You will ensure that development standards, policies, and procedures are adhered to, and drive change to implement efficient and effective strategies. Working closely with peers in the business to fully understand business processes and requirements is crucial, as are maintenance, debugging, and problem-solving. You will ensure that all software developed within your team meets the specified business requirements, and show flexibility in responding to the changing needs of the business.

Technical skills:
- 4+ years of experience in Python and API development using Flask/Django.
- Proficiency in libraries such as Pandas, NumPy, Keras, SciPy, scikit-learn, PyTorch, TensorFlow, and Theano.
- Hands-on experience in Machine Learning (supervised and unsupervised) and familiarity with data analytics tools and libraries.
- Experience in cloud data pipelines and engineering (Azure/AWS); familiarity with ETL pipelines, Databricks, Apache NiFi, Kafka, or Talend is beneficial.
- Ability to work independently on projects, with good written and verbal communication skills.
- Bachelor's degree in Computer Science/Engineering/BCA/MCA.
- Desirable: 2+ years of experience in Java.
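A minimal sketch of the kind of supervised pipeline this role describes, using scikit-learn on synthetic data (the dataset and model choice are illustrative, not prescribed by the posting):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Toy data standing in for "large volumes of historical data".
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A Pipeline couples preprocessing to the model so the same
# transformations are applied at training and prediction time.
pipe = Pipeline([
    ("scale", StandardScaler()),    # standardize features
    ("clf", LogisticRegression()),  # supervised classifier
])
pipe.fit(X_tr, y_tr)
accuracy = pipe.score(X_te, y_te)  # held-out accuracy
```

The same `pipe` object can then be pickled and served behind a Flask/Django API endpoint, which matches the API-development requirement listed above.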

Posted 1 week ago

Apply

0 years

0 Lacs

Delhi

On-site

About Us: Udacity is on a mission of forging futures in tech through radical talent transformation in digital technologies. We offer a unique and immersive online learning platform, powering corporate technical training in fields such as Artificial Intelligence, Machine Learning, Data Science, Autonomous Systems, Cloud Computing, and more. Our rapidly growing global organization is revolutionizing how the enterprise market bridges the talent shortage and skills gaps during its digital transformation journey.

Udacity's mission is to democratize education. We're an online learning platform offering groundbreaking education in artificial intelligence, machine learning, robotics, virtual reality, and more. Focused on self-empowerment through learning, Udacity makes innovative technologies such as self-driving cars accessible to a global community of aspiring technologists, while enabling learners at all levels to skill up with essentials like programming, web, and app development.

Udacity is looking for people to join our Mentor Ops team. If you love a challenge and truly want to make a difference in the world, read on! Are you passionate about learning and helping others learn? Do you find people management and mentoring rewarding? Are you technically sound and eager to learn about different technologies? Do you thrive in a fast-paced environment with a diversity of intellectual challenges? We are looking for you!

Responsibilities:
- Technical skills of new mentors: Select technically strong mentors who can provide superlative services (project reviews and answers on the Knowledge platform) to students.
- Technical quality of services: Deploy processes to ensure that new and existing mentors provide high-quality, technically accurate services to students.
- Mentor classification: Own validation of whether a mentor is a low, medium, or high performer. The validation exercise encompasses all data related to mentor performance, including student ratings, P2P audit scores, and data collected by manually checking the quality of services.
- High-performing mentors: Build and lead a community of technically skilled, high-performing mentors who serve as technical experts for an array of Nanodegree programs.
- Ongoing technical certification: In a scalable manner, build technical certification modules that strengthen mentors' technical skills and develop their competency to better support students.
- Technical Mentor Ops processes: Ensure timely execution of the parts of Mentor Ops processes that require a technical understanding of concepts, including getting higher-quality answers to students through mentors and conducting plagiarism investigations.
- Select and onboard Session Leads: Identify high-performing, technically strong Session Leads to run virtual connect sessions for Enterprise and Scholarship clients.
- New initiatives: Work closely with the rest of the Mentor Operations, Student Services, Product, Engineering, and Content teams to launch new programs and opportunities for enterprise customers, students, mentors, and more.

What we are looking for:
- A technically strong bent of mind with eagerness to learn.
- Intermediate proficiency in Python or a similar language, plus development experience.
- Proficiency in data analysis, visualization, engineering, SQL, data warehouses, data lakes, and data pipelines on both AWS and Azure.
- Exceptional ability to collect, analyze, and report on data.
- Experience designing, launching, and managing operational processes at scale.
- Proven experience prioritizing and multitasking multiple urgent needs.
- Intuition to guide Udacity mentors by encouraging best practices for tutoring and giving feedback.
- Flexibility in using, or learning to use, different methods for tracking and conveying information, such as Zendesk, Google Docs, forums, online chat programs, and email.

Ideal candidate:
- Strong expertise in Python, SQL, NoSQL databases (e.g., MySQL, MongoDB), data visualization tools (e.g., Tableau), statistical analysis (using Python, R, SciPy, statsmodels), and data manipulation libraries (e.g., pandas, dplyr).
- Proficient in data integration concepts, tools for building data pipelines (such as Apache Airflow), and cloud platforms (e.g., AWS, Azure) for deploying data services.
- Knowledgeable in big data frameworks (e.g., Apache Hadoop, Apache Spark) for large-scale data processing and analysis.
- Strong problem-solving skills; good at researching and troubleshooting technical issues.
- Empathetic and a great mentor.
- Excellent organizational skills and the ability to meet deadlines.
- Driven by data to constantly iterate and improve.
- An excellent communicator who excels at listening, writing, and responding professionally.
- A team player.

Benefits: Experience a rewarding work environment with Udacity's perks and benefits! At Udacity, we offer the flexibility of working from home, and we also have in-person collaboration spaces in Mountain View, Cairo, Dubai, and Noida, with growing opportunities for team members to connect in person.
- Flexible working hours
- Paid time off
- Comprehensive medical insurance coverage for you and your dependents
- Employee wellness resources and initiatives (access to wellness platforms like Headspace and Modern Health)
- Quarterly wellness day off
- Personalized career development
- Unlimited access to Udacity Nanodegrees

What We Do: Forging futures in tech is our vision. Udacity is where lifelong learners come to learn the skills they need, to land the jobs they want, and to build the lives they deserve.

Don't stop there! You've probably heard the following statistic: most male applicants apply when they meet only 60% of the qualifications, while women and other marginalized candidates tend to apply only when they meet 100% of them. If you think you have what it takes but don't meet every single point in the job description, please apply! We believe that, historically, many processes have disproportionately hurt the most marginalized communities in society, including people of color, people from working-class backgrounds, women, and LGBTQ people. Centering these communities is pivotal for any successful organization and a value we uphold steadfastly. Udacity therefore strongly encourages applications from all communities and backgrounds.

Udacity is proud to be an Equal Employment Opportunity employer. Please read our blog post "6 Reasons Why Diversity, Equity, and Inclusion in the Workplace Exists".

Last, but certainly not least: Udacity is committed to creating economic empowerment and a more diverse and equitable world. We believe that the unique contributions of all Udacians drive our success. To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate on the basis of race, color, religion, sex, gender, gender identity or expression, sexual orientation, marital status, national origin, ancestry, disability, medical condition (including genetic information), age, veteran status or military status, or any other basis protected by federal, state, or local laws.

As part of our ongoing work to build more diverse teams at Udacity, when applying you will be asked to complete a voluntary self-identification survey. This survey is anonymous; we are unable to connect your application with your survey responses. Please complete this voluntary survey if you choose to do so, as we use the data for diversity measures regarding gender and ethnic background in both our candidates and our Udacians. We take this data seriously and appreciate your willingness to complete this step in the process.

Udacity's values: Obsess over Outcomes - Take the Lead - Embrace Curiosity - Celebrate the Assist

Udacity's Terms of Use and Privacy Policy

Posted 1 week ago

Apply

6.0 years

5 - 15 Lacs

India

On-site

Role: Lead Python/AI Developer
Experience: 6+ years
Location: Ahmedabad (Gujarat)

Roles and responsibilities:
- Help the Python/AI team build Python/AI solution architectures leveraging source technologies.
- Drive technical discussions with clients alongside Project Managers.
- Create effort-estimation matrices of solutions/deliverables for the delivery team.
- Implement AI solutions and architectures, including data pre-processing, feature engineering, model deployment, compatibility with downstream tasks, and edge/error handling.
- Collaborate with cross-functional teams, such as machine learning engineers, software engineers, and product managers, to identify business needs and provide technical guidance.
- Mentor and coach junior Python/AI/ML engineers, and share knowledge through technical presentations.
- Implement new Python/AI features to high-quality coding standards.

Must have:
- B.Tech/B.E. in Computer Science, IT, Data Science, ML, or a related field.
- Strong proficiency in the Python programming language.
- Strong verbal and written communication skills, with analytical and problem-solving ability.
- Proficiency in debugging and exception handling.
- Professional experience developing and operating AI systems in production.
- Hands-on, strong programming skills in Python, in particular with modern ML & NLP frameworks (scikit-learn, PyTorch, TensorFlow, Hugging Face, spaCy, Facebook AI XLM/mBERT, etc.).
- Hands-on experience with AWS services such as EC2, S3, Lambda, and AWS SageMaker.
- Experience with a collaborative development workflow: version control (we use GitHub), code reviews, and DevOps (including automated testing and CI/CD).
- Comfort with essential tools and libraries: Git, Docker, GitHub, Postman, NumPy, SciPy, Matplotlib, Seaborn or Plotly, and Pandas.
- Prior experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB).
- Experience working in an Agile methodology.

Good to have:
- A Master's degree or Ph.D. in Computer Science, Machine Learning, or a related quantitative field.
- Python frameworks (Django/Flask/FastAPI) and API integration.
- AI/ML/DL/MLOps certification from AWS.
- Experience with the OpenAI API.
- Proficiency in the Japanese language.

Job types: Full-time, Permanent
Pay: ₹500,000.00 - ₹1,500,000.00 per year
Benefits: Provident Fund
Work location: In person
Expected start date: 14/08/2025

Posted 1 week ago

Apply

5.0 - 10.0 years

12 - 20 Lacs

Bengaluru

Work from Office

Senior Data Scientist Req number: R5797 Employment type: Full time Worksite flexibility: Hybrid Who we are CAI is a global technology services firm with over 8,500 associates worldwide and a yearly revenue of $1 billion+. We have over 40 years of excellence in uniting talent and technology to power the possible for our clients, colleagues, and communities. As a privately held company, we have the freedom and focus to do what is right—whatever it takes. Our tailor-made solutions create lasting results across the public and commercial sectors, and we are trailblazers in bringing neurodiversity to the enterprise. Job Summary We’re searching for an experienced Senior Data Scientist who excels at statistical analysis, feature engineering, and end-to-end machine learning operations. Your primary mission will be to build and productionize demand forecasting models across thousands of SKUs, while owning the full model lifecycle—from data discovery through automated retraining and performance monitoring. This is a Full-time and Hybrid position. Job Description What You’ll Do Advanced ML Algorithms: Design, train, and evaluate supervised & unsupervised models (regression, classification, clustering, uplift). Apply automated hyperparameter optimization (Optuna, HyperOpt) and interpretability techniques (SHAP, LIME). Data Analysis & Feature Engineering: • Perform deep exploratory data analysis (EDA) to uncover patterns & anomalies. Engineer predictive features from structured, semi-structured, and unstructured data; manage feature stores (Feast). Ensure data quality through rigorous validation and automated checks. Time-Series Forecasting (Demand): • Build hierarchical, intermittent, and multi-seasonal forecasts for thousands of SKUs. Implement traditional (ARIMA, ETS, Prophet) and deep-learning (RNN/LSTM, Temporal Fusion Transformer) approaches. Reconcile forecasts across product/category hierarchies; quantify accuracy (MAPE, WAPE) and bias.
MLOps & Model Lifecycle: • Establish model tracking & registry (MLflow, SageMaker Model Registry). Develop CI/CD pipelines for automated retraining, validation, and deployment (Airflow, Kubeflow, GitHub Actions). Monitor data & concept drift; trigger retuning or rollback as needed. Statistical Analysis & Experimentation: • Design and analyze A/B tests, causal inference studies, and Bayesian experiments. Provide statistically grounded insights and recommendations to stakeholders. Collaboration & Leadership: Translate business objectives into data-driven solutions; present findings to executive & non-technical audiences. Mentor junior data scientists, review code/notebooks, and champion best practices. What You'll Need M.S. in Statistics (preferred) or related field such as Applied Mathematics, Computer Science, Data Science. 5+ years building and deploying ML models in production. Expert-level proficiency in Python (Pandas, NumPy, SciPy, scikit-learn), SQL, and Git. Demonstrated success delivering large-scale demand-forecasting or time-series solutions. Hands-on experience with MLOps tools (MLflow, Kubeflow, SageMaker, Airflow) for model tracking and automated retraining. Solid grounding in statistical inference, hypothesis testing, and experimental design. Experience in supply-chain, retail, or manufacturing domains with high-granularity SKU data. Familiarity with distributed data frameworks (Spark, Dask) and cloud data warehouses (BigQuery, Snowflake). Knowledge of deep-learning libraries (PyTorch, TensorFlow) and probabilistic programming (PyMC, Stan). Strong data-visualization skills (Plotly, Dash, Tableau) for storytelling and insight communication. Physical Demands This role involves mostly sedentary work, with occasional movement around the office to attend meetings, etc. Ability to perform repetitive tasks on a computer, using a mouse, keyboard, and monitor.
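The accuracy metrics this listing names (MAPE and WAPE) are one-liners in NumPy. A minimal sketch on invented series values, not part of the posting itself:

```python
import numpy as np

# Toy actuals and forecasts for a single SKU (illustrative values only)
actual = np.array([120.0, 80.0, 100.0, 60.0])
forecast = np.array([110.0, 90.0, 95.0, 66.0])

# MAPE: mean of per-period absolute percentage errors
mape = np.mean(np.abs((actual - forecast) / actual)) * 100

# WAPE: total absolute error over total actual demand -- more robust
# than MAPE when individual periods have small actual values
wape = np.abs(actual - forecast).sum() / actual.sum() * 100

print(f"MAPE: {mape:.2f}%  WAPE: {wape:.2f}%")
```

WAPE is often preferred for intermittent demand because MAPE explodes when a period's actual is near zero.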
Reasonable accommodation statement If you require a reasonable accommodation in completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employment selection process, please direct your inquiries to application.accommodations@cai.io or (888) 824-8111.

Posted 1 week ago

Apply

6.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Role: Lead Python/AI Developer Experience: 6+ Years Location: Ahmedabad (Gujarat) Roles and Responsibilities: • Helping the Python/AI team build Python/AI solution architectures leveraging source technologies. • Driving technical discussions with clients along with Project Managers. • Creating an effort-estimation matrix of solutions/deliverables for the delivery team. • Implementing AI solutions and architectures, including data pre-processing, feature engineering, model deployment, compatibility with downstream tasks, and edge/error handling. • Collaborating with cross-functional teams, such as machine learning engineers, software engineers, and product managers, to identify business needs and provide technical guidance. • Mentoring and coaching junior Python/AI/ML engineers. • Sharing knowledge through technical presentations. • Implementing new Python/AI features to high-quality coding standards. Must Have: • B.Tech/B.E. in Computer Science, IT, Data Science, ML, or a related field. • Strong proficiency in the Python programming language. • Strong verbal and written communication skills with analytics and problem-solving ability. • Proficiency in debugging and exception handling. • Professional experience in developing and operating AI systems in production. • Hands-on, strong programming skills in Python, in particular with modern ML & NLP frameworks (scikit-learn, PyTorch, TensorFlow, Hugging Face, spaCy, Facebook AI XLM/mBERT, etc.). • Hands-on experience with AWS services such as EC2, S3, Lambda, and AWS SageMaker. • Experience with collaborative development workflows: version control (we use GitHub), code reviews, DevOps (incl. automated testing), CI/CD. • Comfort with essential tools & libraries: Git, Docker, GitHub, Postman, NumPy, SciPy, Matplotlib, Seaborn or Plotly, and Pandas. • Prior experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB).
• Experience working in an Agile methodology. Good to Have: • A Master’s degree or Ph.D. in Computer Science, Machine Learning, or a related quantitative field. • Python frameworks (Django/Flask/FastAPI) & API integration. • AI/ML/DL/MLOps certification from AWS. • Experience with the OpenAI API. • Good command of Japanese.
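The ML & NLP stack this listing names (scikit-learn plus supporting libraries) composes in a few lines. A minimal, hedged sketch on invented ticket texts — an illustration of the toolchain, not the employer's codebase:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny invented dataset: support tickets labelled 'bug' or 'feature'
texts = [
    "app crashes on login", "error when saving file",
    "please add dark mode", "request export to csv",
]
labels = ["bug", "bug", "feature", "feature"]

# TF-IDF features feeding a logistic-regression classifier
clf = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("model", LogisticRegression()),
])
clf.fit(texts, labels)

print(clf.predict(["app crashes on login"]))
```

A pipeline keeps vectorizer and model as one fit/predict unit, which also makes it a single artifact to version and deploy.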

Posted 1 week ago

Apply

6.0 years

5 - 15 Lacs

Jodhpur Char Rasta, Ahmedabad, Gujarat

On-site

Role: Lead Python/AI Developer Experience: 6+ Years Location: Ahmedabad (Gujarat) Roles and Responsibilities: Helping the Python/AI team build Python/AI solution architectures leveraging source technologies. Driving technical discussions with clients along with Project Managers. Creating an effort-estimation matrix of solutions/deliverables for the delivery team. Implementing AI solutions and architectures, including data pre-processing, feature engineering, model deployment, compatibility with downstream tasks, and edge/error handling. Collaborating with cross-functional teams, such as machine learning engineers, software engineers, and product managers, to identify business needs and provide technical guidance. Mentoring and coaching junior Python/AI/ML engineers. Sharing knowledge through technical presentations. Implementing new Python/AI features to high-quality coding standards. Must Have: B.Tech/B.E. in Computer Science, IT, Data Science, ML, or a related field. Strong proficiency in the Python programming language. Strong verbal and written communication skills with analytics and problem-solving ability. Proficiency in debugging and exception handling. Professional experience in developing and operating AI systems in production. Hands-on, strong programming skills in Python, in particular with modern ML & NLP frameworks (scikit-learn, PyTorch, TensorFlow, Hugging Face, spaCy, Facebook AI XLM/mBERT, etc.). Hands-on experience with AWS services such as EC2, S3, Lambda, and AWS SageMaker. Experience with collaborative development workflows: version control (we use GitHub), code reviews, DevOps (incl. automated testing), CI/CD. Comfort with essential tools & libraries: Git, Docker, GitHub, Postman, NumPy, SciPy, Matplotlib, Seaborn or Plotly, and Pandas. Prior experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB). Experience working in an Agile methodology. Good to Have: A Master’s degree or Ph.D.
in Computer Science, Machine Learning, or a related quantitative field. Python frameworks (Django/Flask/FastAPI) & API integration. AI/ML/DL/MLOps certification from AWS. Experience with the OpenAI API. Good command of Japanese. Job Types: Full-time, Permanent Pay: ₹500,000.00 - ₹1,500,000.00 per year Benefits: Provident Fund Work Location: In person Expected Start Date: 14/08/2025

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderābād

On-site

JOB DESCRIPTION We're seeking top talent for our AI engineering team to develop high-quality machine learning models, services, and scalable data processing pipelines. Candidates should have a strong computer science background and be ready to handle end-to-end projects, with a focus on engineering. As an Applied AI/ML Senior Associate within the Digital Intelligence team at JPMorgan, collaborate with all lines of business and functions to deliver software solutions. Experiment, develop, and productionize high-quality machine learning models, services, and platforms to make a significant impact on technology and business. Design and implement highly scalable and reliable data processing pipelines. Perform analysis and deliver insights to promote and optimize business results. Contribute to a transformative journey and make a substantial impact on a wide range of customer products. Job Responsibilities Research, develop, and implement machine learning algorithms to solve complex problems related to personalized financial services in retail and digital banking domains. Work closely with cross-functional teams to translate business requirements into technical solutions and drive innovation in banking products and services. Collaborate with product managers, key business stakeholders, engineering, and platform partners to lead challenging projects that deliver cutting-edge machine learning-driven digital solutions. Conduct research to develop state-of-the-art machine learning algorithms and models tailored to financial applications in personalization and recommendation spaces. Design experiments, establish mathematical intuitions, implement algorithms, execute test cases, validate results, and productionize highly performant, scalable, trustworthy, and often explainable solutions. Collaborate with data engineers and product analysts to preprocess and analyze large datasets from multiple sources.
Stay up-to-date with the latest publications in relevant Machine Learning domains and find applications for the same in your problem spaces for improved outcomes. Communicate findings and insights to stakeholders through presentations, reports, and visualizations. Work with regulatory and compliance teams to ensure that machine learning models adhere to standards and regulations. Mentor junior Machine Learning associates in delivering successful projects and building successful careers in the firm. Participate in and contribute back to firm-wide Machine Learning communities through patenting, publications, and speaking engagements. Required qualifications, capabilities and skills Expert in at least one of the following areas: Natural Language Processing, Knowledge Graph, Computer Vision, Speech Recognition, Reinforcement Learning, Ranking and Recommendation, or Time Series Analysis. Deep knowledge of data structures, algorithms, machine learning, data mining, information retrieval, and statistics. Demonstrated expertise in machine learning frameworks: TensorFlow, PyTorch, PyG, Keras, MXNet, scikit-learn. Strong programming knowledge of Python and Spark; strong grasp of vector operations using NumPy and SciPy; strong grasp of distributed computation using multithreading, multiple GPUs, Dask, Ray, Polars, etc. Strong analytical and critical thinking skills for problem solving. Excellent written and oral communication along with demonstrated teamwork skills. Demonstrated ability to clearly communicate complex technical concepts to both technical and non-technical audiences. Experience working in interdisciplinary teams and collaborating with other researchers, engineers, and stakeholders. A strong desire to stay updated with the latest advancements in the field and continuously improve one's skills. Preferred qualifications, capabilities and skills Deep hands-on experience with real-world ML projects, either through academic research, internships, or industry roles.
Experience with distributed data/feature engineering using popular cloud services like AWS EMR. Experience with large-scale training, validation, and testing experiments. Experience with cloud machine learning services in AWS, like SageMaker. Experience with container technologies like Docker, ECS, etc. Experience with Kubernetes-based platforms for training or inferencing. Contributions to open-source ML projects can be a plus. Participation in ML competitions (e.g., Kaggle) and hackathons demonstrating practical skills and problem-solving abilities. Understanding of how ML can be applied to various domains like healthcare, finance, robotics, etc.
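The "vector operations using NumPy, SciPy" requirement above is about replacing Python-level loops with array arithmetic that runs in compiled code. A minimal sketch on random data, purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
a = rng.standard_normal(1000)
b = rng.standard_normal(1000)

# Loop version: cosine similarity accumulated element by element in Python
dot = sum(x * y for x, y in zip(a, b))
norm = sum(x * x for x in a) ** 0.5 * sum(y * y for y in b) ** 0.5
cos_loop = dot / norm

# Vectorised version: a single pass over contiguous arrays in C
cos_vec = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Both agree numerically; the vectorised form is typically orders
# of magnitude faster at realistic sizes
assert np.isclose(cos_loop, cos_vec)
```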

Posted 1 week ago

Apply

4.0 years

8 - 9 Lacs

Noida

On-site

Location: Noida / Gurgaon (Onsite – 5 Days a Week) Experience: 4 to 7 Years Employment Type: Full-Time Background Verification (BGV): Mandatory post-selection About the Role We are looking for highly skilled Python Developers who are passionate about building scalable and reliable backend solutions. You will work in a dynamic, collaborative environment and contribute to robust system architecture, core development, and feature optimization using modern Python frameworks, libraries, and tools. Key Responsibilities Design, develop, test, and maintain backend components using Python and modern frameworks. Apply strong knowledge of OOP concepts, data structures, and algorithms to write clean, efficient, and scalable code. Work with Python libraries such as NumPy, Pandas, SciPy, and Scikit-learn to build data-driven solutions. Collaborate with DevOps teams for seamless deployment using Docker and CI/CD pipelines. Work with MySQL databases to design schemas, write queries, and ensure data integrity. Version control and code management using Git. Collaborate with cross-functional teams, participate in code reviews, and contribute to agile development. Job Type: Full-time Pay: ₹70,000.00 - ₹80,000.00 per month Location Type: In-person Schedule: Day shift Experience: Python: 5 years (Required) SQL: 3 years (Required) Git: 1 year (Required) Location: Noida, Uttar Pradesh (Required) Work Location: In person Speak with the employer +91 8851582342
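The "data-driven solutions with NumPy, Pandas" bullet above usually means aggregation work like the following minimal sketch, on invented order data:

```python
import pandas as pd

# Invented order data for illustration
orders = pd.DataFrame({
    "customer": ["a", "a", "b", "b", "b"],
    "amount":   [100,  50, 200,  25,  75],
})

# Aggregate per customer: order count and total spend,
# using pandas named aggregation for readable column names
summary = (orders.groupby("customer")["amount"]
                 .agg(n_orders="count", total="sum")
                 .reset_index())
print(summary)
```

Named aggregation (`n_orders="count"`) avoids the multi-level columns a plain `.agg(["count", "sum"])` would produce.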

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

AI/ML Engineer – Core Algorithm and Model Expert 1. Role Objective: The engineer will be responsible for designing, developing, and optimizing advanced AI/ML models for computer vision, generative AI, audio processing, predictive analysis, and NLP applications. Must possess deep expertise in algorithm development and model deployment as production-ready products for naval applications. Also responsible for ensuring models are modular, reusable, and deployable in resource-constrained environments. 2. Key Responsibilities: 2.1. Design and train models using Naval-specific data and deliver them in the form of end products. 2.2. Fine-tune open-source LLMs (e.g., LLaMA, Qwen, Mistral, Whisper, Wav2Vec, Conformer models) for Navy-specific tasks. 2.3. Preprocess, label, and augment datasets. 2.4. Implement quantization, pruning, and compression for deployment-ready AI applications. 2.5. The engineer will be responsible for the development, training, fine-tuning, and optimization of Large Language Models (LLMs) and translation models for mission-critical AI applications of the Indian Navy. The candidate must possess a strong foundation in transformer-based architectures (e.g., BERT, GPT, LLaMA, mT5, NLLB) and hands-on experience with pretraining and fine-tuning methodologies such as Supervised Fine-Tuning (SFT), Instruction Tuning, Reinforcement Learning from Human Feedback (RLHF), and Parameter-Efficient Fine-Tuning (LoRA, QLoRA, Adapters). 2.6. Proficiency in building multilingual and domain-specific translation systems using techniques like back-translation, domain adaptation, and knowledge distillation is essential. 2.7. The engineer should demonstrate practical expertise with libraries such as Hugging Face Transformers, PEFT, Fairseq, and OpenNMT. Knowledge of model compression, quantization, and deployment on GPU-enabled servers is highly desirable.
Familiarity with MLOps, version control using Git, and cross-team integration practices is expected to ensure seamless interoperability with other AI modules. 2.8. Collaborate with the Backend Engineer for integration via standard formats (ONNX, TorchScript). 2.9. Generate reusable inference modules that can be plugged into microservices or edge devices. 2.10. Maintain reproducible pipelines (e.g., with MLflow, DVC, Weights & Biases). 3. Educational Qualifications Essential Requirements: 3.1. B.Tech/M.Tech in Computer Science, AI/ML, Data Science, Statistics, or a related field with an exceptional academic record. 3.2. Minimum 75% marks or 8.0 CGPA in relevant engineering disciplines. Desired Specialized Certifications: 3.3. Professional ML certifications from Google, AWS, Microsoft, or NVIDIA. 3.4. Deep Learning Specialization. 3.5. Computer Vision or NLP specialization certificates. 3.6. TensorFlow/PyTorch Professional Certification. 4. Core Skills & Tools: 4.1. Languages: Python (must), C++/Rust. 4.2. Frameworks: PyTorch, TensorFlow, Hugging Face Transformers. 4.3. ML Concepts: Transfer learning, RAG, XAI (SHAP/LIME), reinforcement learning, LLM fine-tuning, SFT, RLHF, LoRA, QLoRA, and PEFT. 4.4. Optimized Inference: ONNX Runtime, TensorRT, TorchScript. 4.5. Data Tooling: Pandas, NumPy, Scikit-learn, OpenCV. 4.6. Security Awareness: Data sanitization, adversarial robustness, model watermarking. 5. Core AI/ML Competencies: 5.1. Deep Learning Architectures: CNNs, RNNs, LSTMs, GRUs, Transformers, GANs, VAEs, Diffusion Models. 5.2. Computer Vision: Object detection (YOLO, R-CNN), semantic segmentation, image classification, optical character recognition, facial recognition, anomaly detection. 5.3. Natural Language Processing: BERT, GPT models, sentiment analysis, named entity recognition, machine translation, text summarization, chatbot development. 5.4.
Generative AI: Large Language Models (LLMs), prompt engineering, fine-tuning, quantization, RAG systems, multimodal AI, stable diffusion models. 5.5. Advanced Algorithms: Reinforcement learning, federated learning, transfer learning, few-shot learning, meta-learning. 6. Programming & Frameworks: 6.1. Languages: Python (expert level), R, Julia, C++ for performance optimization. 6.2. ML Frameworks: TensorFlow, PyTorch, JAX, Hugging Face Transformers, OpenCV, NLTK, spaCy. 6.3. Scientific Computing: NumPy, SciPy, Pandas, Matplotlib, Seaborn, Plotly. 6.4. Distributed Training: Horovod, DeepSpeed, FairScale, PyTorch Lightning. 7. Model Development & Optimization: 7.1. Hyperparameter tuning using Optuna, Ray Tune, Weights & Biases, etc. 7.2. Model compression techniques (quantization, pruning, distillation). 7.3. ONNX model conversion and optimization. 8. Generative AI & NLP Applications: 8.1. Intelligence report analysis and summarization. 8.2. Multilingual radio communication translation. 8.3. Voice command systems for naval equipment. 8.4. Automated documentation and report generation. 8.5. Synthetic data generation for training simulations. 8.6. Scenario generation for naval training exercises. 8.7. Maritime intelligence synthesis and briefing generation. 9. Experience Requirements 9.1. Hands-on experience with at least 2 major AI domains. 9.2. Experience deploying models in production environments. 9.3. Contribution to open-source AI projects. 9.4. Led development of multiple end-to-end AI products. 9.5. Experience scaling AI solutions for large user bases. 9.6. Track record of optimizing models for real-time applications. 9.7. Experience mentoring technical teams. 10. Product Development Skills 10.1. End-to-end ML pipeline development (data ingestion to model serving). 10.2. User feedback integration for model improvement. 10.3. Cross-platform model deployment (cloud, edge, mobile). 10.4. API design for ML model integration. 11. Cross-Compatibility Requirements: 11.1.
Define model interfaces (input/output schema) for frontend/backend use. 11.2. Build CLI- and REST-compatible inference tools. 11.3. Maintain shared code libraries (Git) that backend/frontend teams can directly call. 11.4. Joint debugging and model-in-the-loop testing with UI and backend teams.
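Item 2.4 above (quantization for deployment-ready models) can be sketched conceptually in pure NumPy: a symmetric post-training int8 scheme on invented weights. Production work would use framework tooling (e.g. ONNX Runtime or TensorRT), so treat this only as an illustration of the idea:

```python
import numpy as np

# Toy float32 "weights" (invented) to be quantised to int8
w = np.array([-0.42, 0.0, 0.17, 0.91, -0.05], dtype=np.float32)

# Symmetric linear quantisation: map [-max|w|, +max|w|] onto [-127, 127]
scale = np.abs(w).max() / 127.0
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

# Dequantise to inspect the error the model must tolerate
w_hat = q.astype(np.float32) * scale
max_err = np.abs(w - w_hat).max()

print(q, f"max reconstruction error: {max_err:.4f}")
```

The int8 tensor is 4x smaller than float32, at the cost of a bounded rounding error of at most half the scale per weight.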

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site

Company Description NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. Job Description Write complex algorithms to get an optimal solution for real-time problems Qualitative analysis and data mining to extract data, discover hidden patterns, and develop predictive models based on findings Developing processes to extract, transform, and load data Use distributed computing to validate and process large volumes of data to deliver insights Evaluate technologies we can leverage, including open-source frameworks, libraries, and tools Interface with product and other engineering teams on a regular cadence Qualifications 3+ years of applicable data engineering experience, including Python & RESTful APIs In-depth understanding of the Python software development stacks, ecosystems, frameworks, and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn, and PyTorch Strong fundamentals in data mining & data processing methodologies Strong knowledge of data structures, algorithms, and designing for performance, scalability, and availability Sound understanding of Big Data & RDBMS technologies, such as SQL, Hive, Spark, Databricks, Snowflake, or PostgreSQL Orchestration and messaging frameworks: Airflow Good experience working with the Azure cloud platform Good experience working in a containerization framework; Docker is a plus.
Experience in agile software development practices and DevOps is a plus Knowledge of and experience with Kubernetes is a plus Excellent English communication skills, with the ability to effectively interface across cross-functional technology teams and the business Minimum B.E. degree in Computer Science, Computer Engineering, or a related field Additional Information Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms Recharge and revitalize with the help of wellness plans made for you and your family Plan your future with financial wellness tools Stay relevant and upskill yourself with career development opportunities Our Benefits Flexible working environment Volunteer time off LinkedIn Learning Employee Assistance Program (EAP) About NIQ NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook Our commitment to Diversity, Equity, and Inclusion NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us.
We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
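The extract-transform-load responsibility in the NIQ listing above, sketched minimally with Pandas on an invented feed. Real pipelines would use the distributed and warehouse tooling the posting lists (Spark, Dask, Snowflake); this only shows the shape of the work:

```python
import io
import pandas as pd

# --- Extract: raw CSV as it might arrive from an upstream feed (invented)
raw = io.StringIO("store,week,units\nS1,2024-01,10\nS1,2024-02,\nS2,2024-01,7\n")
df = pd.read_csv(raw)

# --- Transform: type coercion plus an automated data-quality check
df["units"] = pd.to_numeric(df["units"], errors="coerce")
bad = int(df["units"].isna().sum())
df = df.dropna(subset=["units"])

# --- Load: here just a tidy frame; in production this would be written
# to a warehouse table, with `bad` emitted as a quality metric
print(df.to_dict("records"), f"rows dropped: {bad}")
```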

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

As an Advanced Modeling Manager in the Sales Excellence COE at Accenture, you play a vital role in utilizing your expertise in machine learning algorithms, SQL, R or Python, Advanced Excel, and data visualization tools to generate business insights. Your primary responsibility is to build models and scorecards that aid business leaders in understanding trends and market drivers, ultimately improving processes and boosting sales. Working within the Center of Excellence (COE) Analytics Modeling Analysis team, you collaborate with various functions like Sales, Marketing, and Finance to collect and process data. Your analytical skills are put to use in developing insights to support decision-making, which you communicate effectively to stakeholders. Moreover, you contribute to developing industrialized solutions in coordination with the COE team. To excel in this role, you are expected to possess a Bachelor's degree or equivalent experience, along with at least five years of experience in data modeling and analysis. Your expertise in machine learning algorithms, SQL, R, or Python, along with proficiency in Advanced Excel and data visualization tools like Power BI, Power Apps, Tableau, QlikView, and Google Data Studio, is crucial. Additionally, project management experience, strong business acumen, and attention to detail are valued traits. Furthermore, a Master's degree in Analytics or a related field, understanding of sales processes and systems, knowledge of Google Cloud Platform (GCP) and BigQuery, and experience in Sales, Marketing, Pricing, Finance, or related fields are considered advantageous. Familiarity with Salesforce Einstein Analytics, optimization techniques and packages such as Pyomo, SciPy, PuLP, Gurobi, and CPLEX, and Power Apps is beneficial for this role. Join us at Accenture, where Sales Excellence thrives on empowering individuals to compete, win, and grow by leveraging data-driven insights and advanced modeling techniques to drive success.
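The optimization packages this listing names (Pyomo, SciPy, PuLP, Gurobi, CPLEX) all solve problems of the kind sketched below. A minimal linear program with SciPy, on invented coefficients: maximise profit 3x + 5y subject to capacity constraints.

```python
from scipy.optimize import linprog

# Maximise 3x + 5y  subject to  x + 2y <= 14,  3x - y >= 0,  x - y <= 2.
# linprog minimises, so the objective is negated; >= constraints are
# rewritten as <= by flipping signs.
res = linprog(
    c=[-3, -5],
    A_ub=[[1, 2], [-3, 1], [1, -1]],
    b_ub=[14, 0, 2],
    bounds=[(0, None), (0, None)],
)
x, y = res.x
print(f"x={x:.1f}, y={y:.1f}, profit={-res.fun:.1f}")
```

The same model ports almost line for line to PuLP or Pyomo; the commercial solvers matter when the problem grows to millions of variables.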

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

noida, uttar pradesh

On-site

You are a Senior Python Developer with a minimum of 5 years of experience in Python, specifically with expertise in Django. You have hands-on experience with Azure Web Application Services, Web Services, and API Development. Additionally, you are proficient in working with databases such as Postgres and have knowledge of API documentation tools like Swagger or RAML. Your skill set includes Python libraries like Pandas, NumPy, and familiarity with Object Relational Mapper (ORM) libraries. You can integrate multiple data sources and databases into a single system and understand threading limitations and multi-process architecture in Python. Moreover, you have a basic understanding of front-end technologies like JavaScript, HTML5, and CSS3. You are knowledgeable about user authentication and authorization between multiple systems, servers, and environments. You understand the fundamental design principles behind scalable applications and have experience with event-driven programming in Python. You can optimize output for different delivery platforms such as mobile vs. desktop and create database schemas that support business processes. Strong unit testing and debugging skills are part of your expertise, along with proficiency in code versioning tools like Git, Mercurial, or SVN. You are familiar with ADS, Jira, Kubernetes, and CI/CD tools, as well as data mining and algorithms. Your primary skill set includes Python, SciPy, NumPy, Django, Django ORM, MySQL, DRF, and basic knowledge of Azure. As a Full Stack Developer in the IT Services & Consulting industry, you are responsible for software development and engineering. You hold a B.Tech/B.E. degree in Electronics/Telecommunication or Computers. The job location is in Noida, and you are expected to work full-time in a permanent role. The company, Crosslynx US LLC, specializes in Product Engineering and Cloud Engineering, delivering innovative solutions to enterprises worldwide.
With expertise in Trustworthy and Explainable AI, Embedded Software, Cloud Application Development, RF Design & Testing, and Quality Assurance, the company offers a healthy work environment with work-life balance. Employees have opportunities to collaborate with clients, work on innovative projects, and learn about cutting-edge technologies. If you possess the technical aptitude and skills required for this role, you are encouraged to apply and join the team at Crosslynx to boost your career.

Posted 1 week ago

Apply

0.0 - 1.0 years

0 - 0 Lacs

Noida, Uttar Pradesh

On-site

Location: Noida / Gurgaon (Onsite – 5 Days a Week) Experience: 4 to 7 Years Employment Type: Full-Time Background Verification (BGV): Mandatory post-selection About the Role We are looking for highly skilled Python Developers who are passionate about building scalable and reliable backend solutions. You will work in a dynamic, collaborative environment and contribute to robust system architecture, core development, and feature optimization using modern Python frameworks, libraries, and tools. Key Responsibilities Design, develop, test, and maintain backend components using Python and modern frameworks. Apply strong knowledge of OOP concepts, data structures, and algorithms to write clean, efficient, and scalable code. Work with Python libraries such as NumPy, Pandas, SciPy, and Scikit-learn to build data-driven solutions. Collaborate with DevOps teams for seamless deployment using Docker and CI/CD pipelines. Work with MySQL databases to design schemas, write queries, and ensure data integrity. Version control and code management using Git. Collaborate with cross-functional teams, participate in code reviews, and contribute to agile development. Job Type: Full-time Pay: ₹70,000.00 - ₹80,000.00 per month Location Type: In-person Schedule: Day shift Experience: Python: 5 years (Required) SQL: 3 years (Required) Git: 1 year (Required) Location: Noida, Uttar Pradesh (Required) Work Location: In person Speak with the employer +91 8851582342

Posted 1 week ago


3.0 - 8.0 years

11 - 16 Lacs

Hyderabad

Work from Office

About ValGenesis
ValGenesis is a leading digital validation platform provider for life sciences companies. The ValGenesis suite of products is used by 30 of the top 50 global pharmaceutical and biotech companies to achieve digital transformation, total compliance, and manufacturing excellence/intelligence across their product lifecycle. Learn more about working for ValGenesis, the de facto standard for paperless validation in Life Sciences: https://www.youtube.com/watch?v=tASq7Ld0JsQ

About the Role
We are seeking a highly skilled Senior AI/ML Engineer to join our dynamic team to build the next-gen applications for our global customers. If you are a technology enthusiast and highly passionate, we are eager to discuss the potential role with you.

Responsibilities
- Implement and deploy Machine Learning solutions that solve complex problems and deliver real business value, i.e. revenue, engagement, and customer satisfaction.
- Collaborate with data product managers, software engineers, and SMEs to identify AI/ML opportunities for improving process efficiency.
- Develop production-grade ML models to enhance customer experience, content recommendation, content generation, and predictive analysis.
- Monitor and improve model performance via data enhancement, feature engineering, experimentation, and online/offline evaluation.
- Stay up to date with the latest in machine learning and artificial intelligence, and influence AI/ML for the life science industry.

Requirements
- 4 - 8 years of experience in AI/ML engineering, with a track record of handling increasingly complex projects.
- Strong programming skills in Python and Rust.
- Experience with Pandas, NumPy, SciPy, and OpenCV (for image processing).
- Experience with ML frameworks such as scikit-learn, TensorFlow, and PyTorch.
- Experience with GenAI tools such as LangChain, LlamaIndex, and open-source vector DBs.
- Experience with one or more graph DBs (Neo4j, ArangoDB).
- Experience with MLOps platforms such as Kubeflow or MLflow.
- Expertise in one or more of the following AI/ML domains: Causal AI, Reinforcement Learning, Generative AI, NLP, Dimension Reduction, Computer Vision, Sequential Models.
- Expertise in building, deploying, measuring, and maintaining machine learning models to address real-world problems.
- Thorough understanding of the software product development lifecycle, DevOps (build, continuous integration, deployment tools), and best practices.
- Excellent written and verbal communication skills and interpersonal skills.
- Advanced degree in Computer Science, Machine Learning, or a related field.

We're on a Mission
In 2005, we disrupted the life sciences industry by introducing the world's first digital validation lifecycle management system. ValGenesis VLMS® revolutionized compliance-based corporate validation activities and has remained the industry standard. Today, we continue to push the boundaries of innovation, enhancing and expanding our portfolio beyond validation with an end-to-end digital transformation platform. We combine our purpose-built systems with world-class consulting services to help every facet of GxP meet evolving regulations and quality expectations.

The Team You'll Join
- Our customers' success is our success. We keep the customer experience centered in our decisions, from product to marketing to sales to services to support. Life sciences companies exist to improve humanity's quality of life, and we honor that mission.
- We work together. We communicate openly, support each other without reservation, and never hesitate to wear multiple hats to get the job done.
- We think big. Innovation is the heart of ValGenesis. That spirit drives product development as well as personal growth. We never stop aiming upward.
- We're in it to win it. We're on a path to becoming the number one intelligent validation platform in the market, and we won't settle for anything less than being a market leader.

How We Work
Our Chennai, Hyderabad and Bangalore offices are onsite, 5 days per week. We believe that in-person interaction and collaboration foster creativity and a sense of community, and are critical to our future success as a company.

ValGenesis is an equal-opportunity employer that makes employment decisions on the basis of merit. Our goal is to have the best-qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, religion, sex, sexual orientation, gender identity, national origin, disability, or any other characteristics protected by local law.

Posted 1 week ago


2.0 - 6.0 years

0 Lacs

maharashtra

On-site

Are you seeking an exciting opportunity to become part of a dynamic and expanding team in a fast-paced and challenging environment? This unique position offers you the chance to collaborate with the Business team to deliver a comprehensive perspective.

As a Model Risk Program Analyst within the Model Risk Governance and Review Group (MRGR), your responsibilities include developing model risk policy and control procedures, conducting model validation activities, offering guidance on appropriate model usage in the business context, evaluating ongoing model performance testing, and ensuring that model users understand the strengths and limitations of the models. This role also presents attractive career paths for individuals involved in model development and validation, allowing them to work closely with Model Developers, Model Users, and Risk and Finance professionals.

Your key responsibilities will involve engaging in new model validation activities for all Data Science models in the coverage area. This includes evaluating the model's conceptual soundness, assumptions, reliability of inputs, testing completeness, numerical robustness, and performance metrics. You will also be responsible for conducting independent testing and additional model review activities, and liaising with various stakeholders to provide oversight and guidance on model usage, controls, and performance assessment.

To excel in this role, you should possess strong quantitative and analytical skills, preferably with a degree in a quantitative discipline such as Computer Science, Statistics, Data Science, Math, Economics, or Math Finance. A Master's or PhD degree is desirable. Additionally, you should have a solid understanding of Machine Learning and Data Science theory, techniques, and tools, including Python programming proficiency and experience with machine learning libraries such as NumPy, SciPy, scikit-learn, TensorFlow, and PyTorch. Prior experience in Data Science, Quantitative Model Development, Model Validation, or Technology focused on Data Science, along with excellent writing and communication skills, will be advantageous. A risk and control mindset, with the ability to ask incisive questions, assess materiality, and escalate issues, is also essential for this role.

By staying updated on the latest developments in your coverage area, you will contribute to maintaining the model risk control apparatus of the bank and serve as a key point of contact within the organization. Join our team and be a part of shaping the future of model-related risk management decisions.
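The performance-metrics evaluation this role describes can be as simple as scoring a model's predictions against held-out labels. A minimal, illustrative sketch (the toy data and the 0.7 acceptance floor are invented for the example, not any bank's actual standard):

```python
# Score predictions against held-out labels -- the most basic form of the
# independent performance testing a model validator performs.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the held-out labels."""
    hits = sum(t == p for t, p in zip(y_true, y_pred))
    return hits / len(y_true)

# Held-out labels vs. the model's predictions (toy values).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

acc = accuracy(y_true, y_pred)
print(f"accuracy = {acc:.2f}")  # 6 of 8 correct -> 0.75
assert acc >= 0.7, "model fails the illustrative performance floor"
```

In practice a validation team would compute many such metrics (AUC, calibration, stability over time) and compare them against documented thresholds.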

Posted 1 week ago


2.0 - 4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About ValGenesis
ValGenesis is a leading digital validation platform provider for life sciences companies. The ValGenesis suite of products is used by 30 of the top 50 global pharmaceutical and biotech companies to achieve digital transformation, total compliance, and manufacturing excellence/intelligence across their product lifecycle. Learn more about working for ValGenesis, the de facto standard for paperless validation in Life Sciences: https://www.youtube.com/watch?v=tASq7Ld0JsQ

About The Role
We are seeking a highly skilled AI/ML Engineer to join our dynamic team to build the next-gen applications for our global customers. If you are a technology enthusiast and highly passionate, we are eager to discuss the potential role with you.

Responsibilities
- Implement and deploy Machine Learning solutions that solve complex problems and deliver real business value, i.e. revenue, engagement, and customer satisfaction.
- Collaborate with data product managers, software engineers, and SMEs to identify AI/ML opportunities for improving process efficiency.
- Develop production-grade ML models to enhance customer experience, content recommendation, content generation, and predictive analysis.
- Monitor and improve model performance via data enhancement, feature engineering, experimentation, and online/offline evaluation.
- Stay up-to-date with the latest in machine learning and artificial intelligence, and influence AI/ML for the life science industry.

Requirements
- 2 - 4 years of experience in AI/ML engineering, with a track record of handling increasingly complex projects.
- Strong programming skills in Python and Rust.
- Experience with Pandas, NumPy, SciPy, and OpenCV (for image processing).
- Experience with ML frameworks such as scikit-learn, TensorFlow, and PyTorch.
- Experience with GenAI tools such as LangChain, LlamaIndex, and open-source vector DBs.
- Experience with one or more graph DBs (Neo4j, ArangoDB).
- Experience with MLOps platforms such as Kubeflow or MLflow.
- Expertise in one or more of the following AI/ML domains: Causal AI, Reinforcement Learning, Generative AI, NLP, Dimension Reduction, Computer Vision, Sequential Models.
- Expertise in building, deploying, measuring, and maintaining machine learning models to address real-world problems.
- Thorough understanding of the software product development lifecycle, DevOps (build, continuous integration, deployment tools), and best practices.
- Excellent written and verbal communication skills and interpersonal skills.
- Advanced degree in Computer Science, Machine Learning, or a related field.

We're on a Mission
In 2005, we disrupted the life sciences industry by introducing the world's first digital validation lifecycle management system. ValGenesis VLMS® revolutionized compliance-based corporate validation activities and has remained the industry standard. Today, we continue to push the boundaries of innovation, enhancing and expanding our portfolio beyond validation with an end-to-end digital transformation platform. We combine our purpose-built systems with world-class consulting services to help every facet of GxP meet evolving regulations and quality expectations.

The Team You'll Join
- Our customers' success is our success. We keep the customer experience centered in our decisions, from product to marketing to sales to services to support. Life sciences companies exist to improve humanity's quality of life, and we honor that mission.
- We work together. We communicate openly, support each other without reservation, and never hesitate to wear multiple hats to get the job done.
- We think big. Innovation is the heart of ValGenesis. That spirit drives product development as well as personal growth. We never stop aiming upward.
- We're in it to win it. We're on a path to becoming the number one intelligent validation platform in the market, and we won't settle for anything less than being a market leader.

How We Work
Our Chennai, Hyderabad and Bangalore offices are onsite, 5 days per week. We believe that in-person interaction and collaboration foster creativity and a sense of community, and are critical to our future success as a company.

ValGenesis is an equal-opportunity employer that makes employment decisions on the basis of merit. Our goal is to have the best-qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, religion, sex, sexual orientation, gender identity, national origin, disability, or any other characteristics protected by local law.

Posted 1 week ago


3.0 - 5.0 years

7 - 14 Lacs

Hyderabad

Work from Office

Role & responsibilities
- Strong knowledge of OOP and creating custom Python packages for serverless applications.
- Strong knowledge of SQL querying.
- Hands-on experience with AWS services such as Lambda, EC2, EMR, S3, Athena, Batch, Textract, and Comprehend.
- Strong expertise in extracting text, tables, and logos from low-quality scanned multipage PDFs (80 - 150 dpi) and images.
- Good understanding of probability and statistics concepts, and the ability to find hidden patterns and relevant insights in data.
- Knowledge of applying state-of-the-art NLP models such as BERT, GPT-x, sciSpacy, bidirectional LSTM-CNNs, RNNs, and AWS Comprehend Medical for clinical Named Entity Recognition (NER).
- Strong leadership skills.
- Deployment of custom-trained and prebuilt NER models using AWS SageMaker.
- Knowledge of setting up an AWS Textract pipeline for large-scale text processing using AWS SNS, AWS SQS, Lambda, and EC2.
- Intellectual curiosity to learn new things.
- ISMS responsibilities should be followed as per company policy.

Preferred candidate profile
- 3+ years of hands-on experience in Python and data science tools such as pandas, NumPy, SciPy, and matplotlib, with strong exposure to regular expressions.
- 3+ years of hands-on experience with machine learning algorithms such as SVM, CART, bagging and boosting algorithms, NLP-based ML algorithms, and text mining.
- Hands-on expertise in integrating multiple data sources into a streamlined pipeline.
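For context, the Textract pipeline described above ultimately hands the application a JSON response made of Block objects. A minimal sketch of folding such a response into plain text lines; the `response` dict is a hand-made stand-in for real `get_document_text_detection()` output, with field names following the documented Block schema but illustrative values:

```python
# Fold an AWS Textract-style response into plain text lines.
# LINE blocks carry whole detected lines; WORD blocks duplicate their content.

response = {  # stand-in for a real Textract API response
    "Blocks": [
        {"BlockType": "PAGE"},
        {"BlockType": "LINE", "Text": "Discharge Summary"},
        {"BlockType": "LINE", "Text": "Patient: Jane Doe"},
        {"BlockType": "WORD", "Text": "Discharge"},  # skipped: duplicates LINE text
    ]
}

def extract_lines(resp):
    """Keep only LINE blocks, which carry whole lines of detected text."""
    return [b["Text"] for b in resp.get("Blocks", []) if b["BlockType"] == "LINE"]

print(extract_lines(response))  # ['Discharge Summary', 'Patient: Jane Doe']
```

In the large-scale setup the posting mentions, an SQS consumer would receive the job-completion notification via SNS and then page through the full response before applying a step like this.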

Posted 1 week ago


1.0 - 2.0 years

0 Lacs

Chennai

Remote

Objective
To drive strategic business growth by identifying new client opportunities, nurturing existing relationships, and promoting Scipy Technology's edtech services and client-based solutions. The Business Development Executive will play a key role in building a robust sales pipeline, aligning client needs with our offerings, and contributing to the long-term success of the organization.

Qualities and Responsibilities
- Identify and engage potential clients through calls, emails, networking, and field visits.
- Present, promote, and pitch edtech solutions to clients based on their business needs.
- Manage the complete sales cycle, from lead generation to closing deals.
- Maintain strong client relationships through regular follow-ups and feedback.
- Coordinate with internal teams to ensure timely delivery and client satisfaction.
- Conduct market research to understand competitors, trends, and customer preferences.
- Prepare and present sales reports, forecasts, and performance analysis.
- Achieve monthly/quarterly sales targets set by the company.
- Attend industry events, seminars, or webinars for networking and branding.

Qualifications and Skills
- Bachelor's degree in Business Administration, Marketing, or a related field (MBA preferred).
- 1-2 years of experience in business development, preferably in edtech or IT services.
- Excellent communication in Tamil and English.
- Ability to understand client needs and propose relevant solutions.
- Strong organizational and time management abilities.
- Self-motivated, target-driven, and able to work independently.
- Familiarity with CRM tools and MS Office.

Job Type: Full-time
Benefits: Health insurance, paid sick time, Provident Fund, work from home
Work Location: In person

Posted 1 week ago


0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At Umami Bioworks, we are a leading bioplatform for the development and production of sustainable planetary biosolutions. Through the synthesis of machine learning, multi-omics biomarkers, and digital twins, UMAMI has established market-leading capability for the discovery and development of cultivated bioproducts that can seamlessly transition to manufacturing with UMAMI's modular, automated, plug-and-play production solution. By partnering with market leaders as their biomanufacturing solution provider, UMAMI is democratizing access to sustainable blue bioeconomy solutions that address a wide range of global challenges.

We're a venture-backed biotech startup located in Singapore where some of the world's smartest, most passionate people are pioneering a sustainable food future that is attractive and accessible to people around the world. We are united by our collective drive to ask tough questions, take on challenging problems, and apply cutting-edge science and engineering to create a better future for humanity. At Umami Bioworks, you will be encouraged to dream big and will have the freedom to create, invent, and do the best, most impactful work of your career.

Umami Bioworks is looking to hire an inquisitive, innovative, and independent Machine Learning Engineer to join our R&D team in Bangalore, India, to develop scalable, modular ML infrastructure integrating predictive and optimization models across biological and product domains. The role focuses on orchestrating models for media formulation, bioprocess tuning, metabolic modeling, and sensory analysis to drive data-informed R&D. The ideal candidate combines strong software engineering skills with multi-model system experience, collaborating closely with researchers to abstract biological complexity and enhance predictive accuracy.
Responsibilities
- Design and build the overall architecture for a multi-model ML system that integrates distinct models (e.g., media prediction, bioprocess optimization, sensory profile, GEM-based outputs) into a unified decision pipeline.
- Develop robust interfaces between sub-models to enable modularity, information flow, and cross-validation across stages (e.g., outputs of one model feeding into another).
- Implement model orchestration logic to allow conditional routing, fallback mechanisms, and ensemble strategies across different models.
- Build and maintain pipelines for training, testing, and deploying multiple models across different data domains.
- Optimize inference efficiency and reproducibility by designing clean APIs and containerized deployments.
- Translate conceptual product flow into technical architecture diagrams, integration roadmaps, and modular codebases.
- Implement model monitoring and versioning infrastructure to track performance drift, flag outliers, and allow comparison across iterations.
- Collaborate with data engineers and researchers to abstract away biological complexity and ensure a smooth ML-only engineering focus.
- Lead efforts to refactor and scale ML infrastructure for future integrations (e.g., generative layers, reinforcement learning modules).

Qualifications
- Bachelor's or Master's degree in Computer Science, Machine Learning, Computational Biology, Data Science, or a related field.
- Proven experience developing and deploying multi-model machine learning systems in a scientific or numerical domain.
- Exposure to hybrid modeling approaches and/or reinforcement learning strategies.

Experience
- Experience with multi-model systems.
- Worked with numerical/scientific (multi-modal) datasets.
- Hybrid modeling and/or RL (AI systems).

Core Technical Skills
- Machine Learning Frameworks: PyTorch, TensorFlow, scikit-learn, XGBoost, CatBoost
- Model Orchestration: MLflow, Prefect, Airflow
- Multi-model Systems: Ensemble learning, model stacking, conditional pipelines
- Reinforcement Learning: RLlib, Stable-Baselines3
- Optimization Libraries: Optuna, Hyperopt, GPyOpt
- Numerical & Scientific Computing: NumPy, SciPy, pandas
- Containerization & Deployment: Docker, FastAPI
- Workflow Management: Snakemake, Nextflow
- ETL & Data Pipelines: pandas pipelines, PySpark
- Data Versioning: Git
- API Design for modular ML blocks

You will work directly with other members of our small but growing team to do cutting-edge science and will have the autonomy to test new ideas and identify better ways to do things.
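The conditional-routing and fallback logic named in the responsibilities above can be sketched with stand-in models; the two predictors and the 0.6 confidence threshold here are illustrative placeholders, not Umami's actual system, and a real pipeline would wrap trained estimators behind the same interface:

```python
# Minimal model-orchestration sketch: route to a primary model, fall back
# to a cheap baseline when the primary's confidence is too low.

def primary_model(x):
    # Pretend model: only confident for inputs it has "seen" (here, x >= 0).
    if x >= 0:
        return {"prediction": x * 2.0, "confidence": 0.9}
    return {"prediction": 0.0, "confidence": 0.1}

def fallback_model(x):
    # Cheap, always-available baseline.
    return {"prediction": float(abs(x)), "confidence": 0.5}

def route(x, threshold=0.6):
    """Use the primary model unless its confidence falls below threshold."""
    out = primary_model(x)
    if out["confidence"] < threshold:
        out = fallback_model(x)
        out["source"] = "fallback"
    else:
        out["source"] = "primary"
    return out

print(route(3))   # primary path
print(route(-2))  # fallback path
```

Ensemble strategies extend the same idea: instead of picking one model's output, `route` would combine several sub-model outputs (averaging, stacking) before returning.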

Posted 1 week ago


2.0 - 3.0 years

5 - 9 Lacs

Kochi, Coimbatore, Thiruvananthapuram

Work from Office

Location: Kochi, Coimbatore, Trivandrum
Must-have skills: Python/Scala, PySpark/PyTorch
Good-to-have skills: Redshift

Job Summary
You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries.

Roles and Responsibilities
- Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals.
- Solving complex data problems to deliver insights that help our business achieve its goals.
- Sourcing data (structured and unstructured) from various touchpoints and formatting and organizing it into an analyzable format.
- Creating data products for analytics team members to improve productivity.
- Calling AI services such as vision and translation to generate outcomes that can be used in further steps along the pipeline.
- Fostering a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions.
- Preparing data to create a unified database and building tracking solutions that ensure data quality.
- Creating production-grade analytical assets deployed using the guiding principles of CI/CD.

Professional and Technical Skills
- Expert in at least two of Python, Scala, PySpark, PyTorch, and JavaScript.
- Extensive experience in data analysis (big data - Apache Spark environments), data libraries (e.g. Pandas, SciPy, TensorFlow, Keras), and SQL, with 2-3 years of hands-on experience working on these technologies.
- Experience in one of the many BI tools such as Tableau, Power BI, or Looker.
- Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs.
- Worked extensively in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and Snowflake Cloud Data Warehouse.

Additional Information
- Experience working in cloud data warehouses like Redshift or Synapse.
- Certification in any one of the following or equivalent: AWS Certified Data Analytics - Specialty; Microsoft Certified Azure Data Scientist Associate; SnowPro Core - Data Engineer; Databricks Data Engineering.

Qualification
Experience: 3.5 - 5 years of experience is required.
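The "pipelines that adhere to ETL principles" and the data-quality emphasis above can be sketched in a few lines of pandas; the column names, values, and quality gate are illustrative, not from any specific client pipeline:

```python
# Minimal ETL sketch: extract raw records, transform them into an
# analyzable frame, and enforce a simple data-quality check before load.

import pandas as pd

raw = [  # extract: rows as they might arrive from a source system
    {"id": 1, "amount": "10.5", "region": "south "},
    {"id": 2, "amount": "7.25", "region": "North"},
    {"id": 2, "amount": "7.25", "region": "North"},  # duplicate row
]

def transform(df):
    return (
        df.drop_duplicates(subset="id")  # de-duplicate on the key
          .assign(
              amount=lambda d: d["amount"].astype(float),      # type coercion
              region=lambda d: d["region"].str.strip().str.lower(),  # normalize
          )
    )

df = transform(pd.DataFrame(raw))
assert df["id"].is_unique  # load-time data-quality gate
print(df.to_dict("records"))
```

The same shape scales up: in a Spark environment the `transform` step becomes PySpark DataFrame operations, and the quality gate becomes a pipeline stage that fails the CI/CD deployment on violation.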

Posted 1 week ago
