Jobs
Interviews

957 Dataset Jobs - Page 18

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Company Description
HCL Software is a division of HCL Technologies (HCL) that operates its primary software business. It develops, markets, sells, and supports over 20 product families in the areas of Customer Experience, Digital Solutions, Security & Automation, and DevOps. Its mission is to drive ultimate customer success with IT investments through relentless innovation of its products.

As a Data Scientist/Researcher, you will leverage established architecture disciplines to help ensure that business strategies align with the capabilities of AI/ML to achieve business objectives consistently and cost-effectively.

We are looking for 10 to 15 years of experience only. Please apply only if you match this experience level. Location: Bangalore and Noida. Please share your CV to monica_sharma@hcl-software.com with the below details:
- Total Experience
- Current CTC
- Expected CTC
- Notice Period

Main Responsibilities
- Propose solutions and strategies to tackle business challenges in the AI/ML product
- Operationalize and architect ML/AI solutions

Qualifications, Education and Experience
- PhD, Master's, or Bachelor's degree in Computer Science, Electrical Engineering, Statistics, or Mathematics, with at least 10 years in ML/AI
- Ability to prototype statistical analysis and modeling algorithms and apply them to data-driven solutions in new domains
- Proven ability to rationalize disparate data sources and intuit the big picture within a dataset
- Experience solving clients' analytics problems and effectively communicating results and methodologies
- Software development skills in one or more scripting languages, preferably Python, and common ML tools (Weka, R, RapidMiner, KNIME, scikit-learn, AzureML, SageMaker, Model Builder, etc.)
- Familiarity with ML tools and packages such as OpenNLP, Caffe, TensorFlow, etc.
- Knowledge of MLOps pipelines
- Knowledge of machine learning and data mining techniques in one or more areas of statistical modeling: anomaly detection; regression, classification, and clustering; deep learning; survival analysis; similarity and recommendation; forecasting
- Strong oral and written communication skills, including presentation skills
- Strong problem-solving and troubleshooting skills
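Among the modeling areas the posting lists is anomaly detection. As a purely illustrative sketch (not part of the listing), here is a minimal z-score outlier detector in Python, one of the scripting languages the role names; the data and threshold are made up for the example:

```python
# Minimal z-score anomaly detector: flag points more than `threshold`
# sample standard deviations from the sample mean.
from statistics import mean, stdev

def zscore_anomalies(values, threshold=2.0):
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []  # all points identical: nothing to flag
    return [x for x in values if abs(x - mu) / sigma > threshold]

data = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 55.0]
print(zscore_anomalies(data))  # → [55.0]
```

Note the threshold of 2.0 here: a single large outlier inflates the sample standard deviation (masking), so a stricter cutoff like 3.0 can miss it on small samples.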

Posted 1 month ago

Apply

1.0 - 4.0 years

0 Lacs

Kolkata metropolitan area, West Bengal, India

On-site

Position Overview: We are seeking a talented and motivated AI Engineer with a strong focus on Computer Vision to join our team. The ideal candidate should have a passion for solving complex problems in the field of machine learning and computer vision, as well as the technical expertise required to build and deploy AI solutions. This role is perfect for individuals with 1-4 years of relevant experience, strong programming skills in Python, and a solid understanding of computer vision frameworks and libraries.

Key Responsibilities:
- Design, develop, and deploy computer vision models to solve real-world problems.
- Implement machine learning algorithms for image processing, object detection, segmentation, and recognition tasks.
- Optimize and fine-tune models using frameworks like PyTorch and TensorFlow.
- Collaborate with cross-functional teams to understand requirements and deliver efficient AI-driven solutions.
- Analyze large-scale datasets to derive meaningful insights and improve model performance.
- Create and maintain robust, scalable code for production-level AI systems.
- Research and stay updated with the latest trends and advancements in computer vision and AI.

Required Skills and Qualifications:
- Educational Background: Bachelor's or Master's degree in Computer Science, Data Science, Mathematics, or a related field.
- Experience: 1-4 years of hands-on experience in computer vision and machine learning.
- Programming Skills: Proficient in Python, with a strong understanding of object-oriented programming and scripting.
- Frameworks: Expertise in using PyTorch and TensorFlow for model development and training.
- Libraries: Proficient in popular Python libraries such as:
  - OpenCV-Python for computer vision tasks.
  - Pillow for image processing.
  - scikit-learn for machine learning and data preprocessing.
- Mathematics: Strong foundation in mathematical concepts related to machine learning and computer vision, including:
  - Linear algebra
  - Probability and statistics
  - Calculus
- Computer Vision Algorithms: Practical experience implementing and working with algorithms like:
  - Object detection
  - Image classification
  - Object segmentation
  - Activity recognition (optional but preferred)
  - Keypoint detection (optional but preferred)
- Familiarity with image annotation tools and dataset preparation techniques.
- Experience with version control systems like Git.

Preferred Qualifications:
- Knowledge of deploying models on edge devices or cloud platforms.
- Familiarity with deep learning architectures like CNNs, RNNs, or GANs.
- Experience with optimization techniques for model inference speed and accuracy.
- Contributions to open-source projects in computer vision or machine learning.

If you are passionate about applying AI to solve challenging computer vision problems and have the skills required to excel in this role, we encourage you to apply. Join us in building the next generation of intelligent systems!
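Object detection models like those listed above are commonly evaluated with intersection-over-union (IoU) between predicted and ground-truth bounding boxes. A minimal, dependency-free sketch (illustrative only, not drawn from the posting):

```python
# Intersection-over-Union (IoU) for axis-aligned bounding boxes,
# a standard metric when evaluating object detection models.
# Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2.

def iou(box_a, box_b):
    # Coordinates of the intersection rectangle (may be empty)
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    # Union = sum of areas minus the double-counted overlap
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # → 0.14285714285714285 (1/7)
```

A detection is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.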

Posted 1 month ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

About Gartner IT
Join a world-class team of skilled engineers who build creative digital solutions to support our colleagues and clients. We make a broad organizational impact by delivering cutting-edge technology solutions that power Gartner. Gartner IT values its culture of nonstop innovation, an outcome-driven approach to success, and the notion that great ideas can come from anyone on the team.

About The Role
Gartner is looking for a well-rounded and driven leader to join its Conferences Technology & Insight Analytics team, which creates the reporting and analytics that support its conference reporting operations.

What You Will Do
- Provide technical leadership and guidance to software development teams, ensuring alignment with project objectives and adherence to industry best practices.
- Lead and mentor a team of software engineers, delegating responsibilities, offering support, and promoting a collaborative environment.
- Collaborate with business stakeholders to design and build advanced analytic solutions for Gartner's Conference Technology business.
- Execute our data strategy through the design and development of data platforms that deliver Reporting, BI, and Advanced Analytics solutions.
- Design and develop key analytics capabilities using MS SQL Server, Azure SQL Managed Instance, T-SQL, and ADF on the Azure platform.
- Continuously improve and optimize T-SQL performance across the entire analytics platform.
- Create, build, and implement comprehensive data integration solutions using Azure Data Factory.
- Analyze and solve complex business problems, breaking the work down into actionable tasks.
- Develop, maintain, and document the data dictionary and data flow diagrams.
- Build and enhance the regression test suite to monitor nightly ETL jobs and identify data issues.
- Work alongside project managers and cross-functional teams in a fast-paced Agile/Scrum environment.
- Build optimized solutions and designs to handle big data.
- Follow coding standards; build appropriate unit tests, integration tests, and deployment scripts; and review project artifacts created by peers.
- Contribute to overall growth by suggesting improvements to the existing software architecture or introducing new technologies.

What You Will Need
A strong IT professional with deep knowledge of designing and developing end-to-end BI & analytics projects in a global enterprise environment. The candidate should have strong qualitative and quantitative problem-solving abilities and is expected to demonstrate ownership and accountability.

Must Have
- Strong experience with SQL, including diagnosing and resolving load failures, constructing hierarchical queries, and efficiently analyzing existing SQL code to identify and resolve issues, using Microsoft Azure SQL Database, SQL Server, and Azure SQL Managed Instance.
- Ability to create and modify database objects such as stored procedures, views, tables, triggers, and indexes using Microsoft Azure SQL Database, SQL Server, and Azure SQL Managed Instance.
- Deep understanding of advanced SQL (analytic functions).
- Strong technical experience with database performance tuning, troubleshooting, and query optimization.
- Strong technical experience with Azure Data Factory on the Azure platform: create and manage complex ETL pipelines to extract, transform, and load data from various sources; monitor and troubleshoot data pipeline issues to ensure data integrity and availability; enhance data workflows to improve performance, scalability, and cost-effectiveness; establish best practices for data governance and security within data pipelines.
- Experience with cloud platforms and Azure technologies such as Azure Analysis Services, Azure Blob Storage, Azure Data Lake, Azure Delta Lake, etc.
- Experience with data modelling, database design, data warehousing concepts, and data lakes.
- Thorough documentation of data processes, configurations, and operational procedures.
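The analytic (window) functions this role calls for look roughly like the sketch below. The posting names T-SQL on SQL Server/Azure SQL; for a self-contained illustration this instead uses Python's bundled SQLite (which supports window functions since 3.25), where the `OVER (PARTITION BY ...)` syntax is broadly similar. The table and data are invented for the example:

```python
# Illustrative analytic (window) function query against an in-memory
# SQLite database: rank rows within each region by amount, descending.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount INTEGER);
    INSERT INTO sales VALUES
        ('APAC', 100), ('APAC', 300), ('EMEA', 200), ('EMEA', 50);
""")

rows = conn.execute("""
    SELECT region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
    ORDER BY region, rnk
""").fetchall()

for region, amount, rnk in rows:
    print(region, amount, rnk)
```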
Good To Have
- Experience with dataset ingestion, data model creation, reports, and dashboards using Power BI.
- Experience with Python and Azure Functions for data processing.
- Experience with other reporting tools such as SSRS, Tableau, Power BI, etc.
- Demonstrated ability to use Git, Jenkins, and other change management tools.
- Good knowledge of database performance tuning, troubleshooting, and query optimization.

Who You Are
- Graduate/post-graduate in BE/BTech, ME/MTech, or MCA preferred.
- IT professional with 7-10 years of experience in data analytics, cloud technologies, and ETL development.
- Excellent communication and prioritization skills.
- Able to work proactively, independently or within a team, in a fast-paced Agile/Scrum environment.
- Strong desire to improve their skills in software development, frameworks, and technologies.

Don't meet every single requirement? We encourage you to apply anyway. You might just be the right candidate for this, or other roles.

Who are we?
At Gartner, Inc. (NYSE: IT), we guide the leaders who shape the world. Our mission relies on expert analysis and bold ideas to deliver actionable, objective insight, helping enterprise leaders and their teams succeed with their mission-critical priorities. Since our founding in 1979, we've grown to more than 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories. We do important, interesting and substantive work that matters. That's why we hire associates with the intellectual curiosity, energy and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here.

What makes Gartner a great place to work?
Our sustained success creates limitless opportunities for you to grow professionally and flourish personally. We have a vast, virtually untapped market potential ahead of us, providing you with an exciting trajectory long into the future. How far you go is driven by your passion and performance.
We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients. Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations. We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work.

What do we offer?
Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers. In our hybrid work environment, we provide the flexibility and support for you to thrive — working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring. Ready to grow your career with Gartner? Join us.

The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to seek to advance the principles of equal employment opportunity. Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company's career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at +1 (203) 964-0096 or by sending an email to ApplicantAccommodations@gartner.com.
Job Requisition ID: 101327

By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence. Gartner Applicant Privacy Link: https://jobs.gartner.com/applicant-privacy-policy

For efficient navigation through the application, please use only the back button within the application, not the back arrow within your browser.

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

Posted On: 02 Jul 2025
End Date: 31 Dec 2025
Required Experience: 4-8 Years
Role: Senior Software Engineer
Employment Type: Full Time Employee
Company: NewVision (New Vision Softcom & Consultancy Pvt. Ltd)
Function: Business Units (BU)
Department/Practice: Data Engineering
Region: APAC
Country: India
Base Office Location: Pune
State: Maharashtra
Working Model: Hybrid
Weekly Off: Pune Office Standard
Highest Education: Graduation/equivalent course
Certification: DP-201: Designing an Azure Data Solution; DP-203T00: Data Engineering on Microsoft Azure
Working Language: English

Job Description
Position Summary: We are seeking a talented Data Engineer with a strong background in data engineering to join our team. You will play a key role in designing, building, and maintaining data pipelines using a variety of technologies, with a focus on the Microsoft Azure cloud platform.

Responsibilities:
- Design, develop, and implement data pipelines using Azure Data Factory (ADF) or other orchestration tools.
- Write efficient SQL queries to extract, transform, and load (ETL) data from various sources into Azure Synapse Analytics.
- Utilize PySpark and Python for complex data processing tasks on large datasets within Azure Databricks.
- Collaborate with data analysts to understand data requirements and ensure data quality.
- Hands-on design and development of data lakes and warehouses.
- Implement data governance practices to ensure data security and compliance.
- Monitor and maintain data pipelines for optimal performance and troubleshoot any issues.
- Develop and maintain unit tests for data pipeline code.
- Work collaboratively with other engineers and data professionals in an Agile development environment.
Preferred Skills & Experience:
- Good knowledge of PySpark and working knowledge of Python
- Full-stack Azure data engineering skills (Azure Data Factory, Databricks, and Synapse Analytics)
- Experience handling large datasets
- Hands-on experience designing and developing data lakes and warehouses

New Vision is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Posted 1 month ago

Apply

0.0 - 2.0 years

0 Lacs

India

Remote

About YipitData:
YipitData is the leading market research and analytics firm for the disruptive economy, and recently raised up to $475M from The Carlyle Group at a valuation of over $1B. We analyze billions of alternative data points every day to provide accurate, detailed insights on ridesharing, e-commerce marketplaces, payments, and more. Our on-demand insights team uses proprietary technology to identify, license, clean, and analyze the data many of the world's largest investment funds and corporations depend on. For three years and counting, we have been recognized as one of Inc.'s Best Workplaces. We are a fast-growing technology company backed by The Carlyle Group and Norwest Venture Partners. Our offices are located in NYC, Austin, Miami, Denver, Mountain View, Seattle, Hong Kong, Shanghai, Beijing, Guangzhou, and Singapore. We cultivate a people-centric culture focused on mastery, ownership, and transparency.

About The Role:
Our product offers insights on e-commerce companies and is growing rapidly. We are seeking a Data QA Associate to join our team in India to manage a large portion of the data cleaning and quality assurance processes for this product. As a Data QA Associate, you will be responsible for transforming receipt data into accurate business expense insights. You will work with top-notch engineering and data teams in the U.S. and China. You'll complete your work primarily using data in our proprietary software. This position offers the opportunity to meaningfully contribute to the growth of our company. Exceptional employees may have the opportunity to be promoted and manage a team of other Data QA Specialists.

This is a fully remote role based in India. Working hours: 8 am - 5 pm IST. In general, we expect some overlap with Chinese or U.S. work hours. More details on work-hour expectations will be shared during the recruiting process.
We expect hires to start in the position as soon as possible, and no later than June 2025.

As our Data QA Associate, you will:
- Perform accurate and efficient data extraction, labeling, and cleaning from email panels or AI-generated datasets to ensure high-quality data.
- Monitor and analyze data trends for various merchants and vendors, including email categorization and trend identification.
- Maintain the stability of merchant and vendor data, investigating anomalies such as sudden drops or spikes in data volume.
- Collaborate with teams in both China and the U.S. to ensure data consistency and quality.
- Provide timely responses to analysts' and clients' demands and inquiries.
- Ensure compliance with company data security and confidentiality policies.

You Are Likely To Succeed If you have:
- A Bachelor's degree or above; majors in Computer Science, Statistics, Business Analytics, or a related field are preferred
- 0-2 years of experience as a data or quality assurance analyst
- Experience in data tagging, cleaning, or regex
- Experience managing multiple processes in parallel
- Basic data analysis skills; familiarity with Excel, SQL, or Python is a plus
- Exceptional attention to detail, problem-solving, and critical thinking abilities
- Ability to work independently while coordinating with a remote team
- Excellent written and verbal communication skills in English, with the ability to interact effectively with vendors and internal teams across time zones and cultures

What We Offer:
Our compensation package includes comprehensive benefits, perks, and a competitive salary:
- We care about your personal life and we mean it. We offer vacation time, parental leave, team events, learning reimbursement, and more!
- Your growth at YipitData is determined by the impact that you are making, not by tenure, unnecessary facetime, or office politics.
- Everyone at YipitData is empowered to learn, self-improve, and master their skills in an environment focused on ownership, respect, and trust.
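The regex-based cleaning this role mentions might look like the sketch below. The receipt line format and field are hypothetical; the posting only says "data tagging, cleaning or Regex":

```python
# Illustrative regex-based cleaning: pull a currency amount such as
# "$1,234.56" out of free-form receipt text. The input format is a
# made-up example, not YipitData's actual data.
import re

def extract_amount(line):
    """Return the first dollar amount in `line` as a float, or None."""
    m = re.search(r"\$(\d{1,3}(?:,\d{3})*(?:\.\d{2})?)", line)
    if not m:
        return None
    return float(m.group(1).replace(",", ""))

print(extract_amount("Order #8812 total: $1,234.56 (paid)"))  # → 1234.56
print(extract_amount("no amount here"))                       # → None
```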
We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender, gender identity or expression, or veteran status. We are proud to be an equal opportunity employer.

Posted 1 month ago

Apply

3.0 years

0 Lacs

India

Remote

Company Description
SoftSensor.ai LLC is a data-and-AI innovation studio twice recognized in the DataIQ 100, for Most Innovative Solutions and Best Team to Work With (Vendor Side). Alongside bespoke AI consulting, we operate PRR.AI, our flagship medical AI platform that combines multimodal AI models with a human-in-the-loop workflow for high-volume medical and life-science data. PRR.AI powers:
- Digital Pathology & Radiology – whole-slide or DICOM image ingestion, smart region-of-interest detection, stain normalization, and AI-assisted tumor grading.
- Clinical Document Validation – OCR + LLM pipelines for batch-manufacturing records (BMRs), clinical notes, and regulatory dossiers.
- Multimodal Research Datasets – embryo-quality prediction, IVF outcome modeling, and vision-language damage detection for medical devices & vehicles.

SoftSensor + PRR.AI give clinicians, data scientists, and regulators a shared workspace where AI predictions are auditable, explainable, and continuously improved. Our culture blends clinical depth, data-science rigor, and product craftsmanship, with flexible remote work hubs in Bengaluru, Hyderabad, and Gurgaon.

Role Overview
You will be a clinical domain expert inside our AI product pipeline, splitting time between AI consulting projects and PRR.AI platform operations.

Key Responsibilities
Clinical Data Curation
- Use the PRR.AI interface to label or validate entities across slides, radiographs, lab reports, and BMR pages.
- Define new ontologies and QA rules that drive PRR.AI's auto-validation engine.

Model Evaluation & Feedback Loops
- Run test sets through vision-language models (e.g., LLaMA Vision, GPT-4o) and document false positives/negatives inside PRR.AI's error-tracking boards.
- Recommend prompt tweaks and fine-tuning strategies to ML engineers.
Knowledge Engineering & Prompt Design
- Translate clinical SOPs and regulatory guidelines into structured prompts or rule-based checks used by PRR.AI's "Guardrails" module to flag contraindications or process deviations.

Stakeholder Collaboration
- Act as clinical liaison for pharma clients adopting PRR.AI in GxP settings; conduct onboarding webinars and draft medical validation reports.
- Partner with product & UX teams to refine the PRR.AI reviewer workflow (hot-keys, bulk actions, keyboard navigation) and minimize cognitive load.

Regulatory & Quality Assurance
- Maintain audit-ready documentation (dataset lineage, annotation protocols, validation metrics) to support ISO 13485 / GAMP 5 / CDSCO NDCT & FDA 21 CFR Part 11 compliance.

Minimum Qualifications
- MBBS from an NMC-recognized institution; MBBS + MBA is a bonus.
- 0-3 years of internship, clinical practice, or health-informatics experience.
- Strong grasp of medical terminology, ICD-10/11, drug classes, and basic biostatistics.
- Clear English communication and the ability to explain clinical nuances to non-medical peers.
- Awareness of HIPAA, GDPR, DPDP India, and data-integrity principles (ALCOA+).

What We Offer
- Competitive pay (fixed salary for FTE; hourly rates for part-time) + variable comp
- Group medical insurance
- Sponsored upskilling: Coursera "AI in Healthcare", Azure Health Data Services, etc.
- Direct impact on products used by pathologists, IVF specialists, and pharma QA teams worldwide.

How to Apply
Email your CV (PDF) and a short note answering:
1. A clinical workflow you believe AI + human review (like PRR.AI) could transform.
2. A dataset or paper that excites you, and why.

Send to hr@softsensor.ai with the subject "Medical AI Analyst – [Full-time/Part-time] – Your Name". Rolling interviews begin 15 July 2025. Positions remain open until filled.

Posted 1 month ago

Apply

7.0 years

0 Lacs

India

Remote

Job Title: Power BI Administrator
Location: [Remote]
Job Type: [Full-Time]

What You Will Be Doing
As a Power BI Administrator, you will be responsible for the full lifecycle administration of the Power BI platform across the enterprise. This includes planning, implementing, supporting, and optimizing the Power BI environment to ensure high performance, scalability, and usability. Your role will be critical in supporting both developers and business stakeholders to deliver reliable and secure analytics solutions.

Key Responsibilities:
- Administer and maintain Power BI services, including patches, updates, user/group security (Active Directory), account provisioning, and capacity management.
- Configure and manage data gateways to enable secure access to on-prem and cloud data sources.
- Set up and monitor gateway virtual machines, and manage service principal creation and access.
- Automate environment configurations, deployments, and integrations using Power BI REST APIs and PowerShell scripting.
- Manage and enforce Data Loss Prevention (DLP) policies at both tenant and environment levels.
- Support license management and workspace migrations across Microsoft 365 and Power BI Premium capacities.
- Provide documentation and maintain administrative best practices, procedures, and standards.
- Monitor system performance and usage analytics, and implement proactive alerting and logging mechanisms.
- Offer production and user support, including participation in a rotational on-call schedule.
- Troubleshoot issues related to Azure AD, networking, and firewall configurations.
- Engage in continuous improvement by proposing innovations and enhancements in the Power BI landscape.

What You Will Bring to the Role
Education: Bachelor's degree in Computer Science, Information Technology, Engineering, or a related discipline.
Experience:
- 7+ years of experience in Business Intelligence or Information Management.
- 3+ years of dedicated experience in Power BI administration.
- Strong knowledge of Power BI admin portals and environment management.
- Experience with Power BI workspace, dataset, and report migration.
- Familiarity with Azure services and Microsoft 365 administration.

Technical Skills:
- Proficiency in PowerShell scripting.
- Hands-on experience with Power BI REST APIs.
- Understanding of networking, firewall, and AD integration within Azure environments.
- Competency in optimizing Power BI for performance, governance, and scalability.
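Automation against the Power BI REST API, as described above, centers on authenticated HTTPS calls such as the documented "list workspaces" endpoint (`GET /v1.0/myorg/groups`). The posting mentions PowerShell; as a language-neutral sketch, here it is with Python's stdlib. The request is built but deliberately not sent, and `token` is a placeholder (in practice you would first obtain an Azure AD access token):

```python
# Sketch: build an authenticated request to the Power BI REST API's
# "list workspaces" endpoint. The token value here is fake; a real
# script would acquire one from Azure AD before calling urlopen().
import urllib.request

def list_workspaces_request(token):
    """Build (but do not send) the authenticated HTTP request."""
    url = "https://api.powerbi.com/v1.0/myorg/groups"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"}
    )

req = list_workspaces_request("FAKE_TOKEN")
print(req.full_url)                     # → https://api.powerbi.com/v1.0/myorg/groups
print(req.get_header("Authorization"))  # → Bearer FAKE_TOKEN
```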

Posted 1 month ago

Apply

0.0 - 2.0 years

0 Lacs

Greater Bengaluru Area

On-site

What if the work you did every day could impact the lives of people you know? Or all of humanity? At Illumina, we are expanding access to genomic technology to realize health equity for billions of people around the world. Our efforts enable life-changing discoveries that are transforming human health through the early detection and diagnosis of diseases and new treatment options for patients. Working at Illumina means being part of something bigger than yourself. Every person, in every role, has the opportunity to make a difference. Surrounded by extraordinary people, inspiring leaders, and world-changing projects, you will do more and become more than you ever thought possible.

We are seeking a PLM Analyst to join our talented team in Bangalore. This position is integral to running core operations for the product lifecycle of a dynamic, fast-paced organization, covering both new product development and on-market commercial operations. The person in this role is responsible for creating and submitting change controls, serving as a critical resource in the change management process from inception through end-of-life for Illumina products. They maintain product and process configurations in PLM and SAP ERP, support cross-functional teams on the creation of Change Requests and Change Orders, and ensure configuration and document changes include required information while resolving any issues that arise.
Scope Of Responsibility
- Applies problem-solving skills to analyze the scope of a Change and the underlying business dataset (e.g., items, documents, bills of materials, facilities, EH&S).
- Packages Change scope in the form of Change Requests and Change Orders in the most efficient manner, in order to bring efficiencies to scale.
- Prioritizes processing Changes in full alignment with the defined service level and expected metrics (e.g., turnaround time and quality service level).
- Performs thorough data analysis in light of the Change scope, in order to achieve a higher accuracy level for impacted items. Scope includes, but is not limited to, item and document search by key attributes and descriptions, both within PLM and the PLM ecosystem (e.g., SAP, Camstar, LIMS).
- Verifies the accuracy and completeness of Change packages by other Change Originators where necessary, in full conformance with the underlying procedures, work instructions, or job aids.
- Performs data quality review while processing Change workflows: reviews risk to data integrity, and checks for data completeness and accuracy while advancing PLM workflows through lifecycle stages.

Experience Required
- 0-2 years of prior professional experience in the PLM space of a MedTech company, with working knowledge of enterprise change management, master data management, and enterprise document control.
- Well versed in basic GMP, regulatory, and compliance requirements of a MedTech company, e.g., 21 CFR 820 (Quality System Regulation), 21 CFR Part 11 (Electronic Records and Electronic Signatures), and 21 CFR Part 809 (In Vitro Diagnostic Products).
- Prior experience in a Data Steward role processing item and document master data in a controlled setup is preferred.

We are a company deeply rooted in belonging, promoting an inclusive environment where employees feel valued and empowered to contribute to our mission.
Built on a strong foundation, Illumina has always prioritized openness, collaboration, and seeking alternative perspectives to propel innovation in genomics. We are proud to confirm a zero-net gap in pay, regardless of gender, ethnicity, or race. We also have several Employee Resource Groups (ERG) that deliver career development experiences, increase cultural awareness, and offer opportunities to engage in social responsibility.

We are proud to be an equal opportunity employer committed to providing employment opportunity regardless of sex, race, creed, color, gender, religion, marital status, domestic partner status, age, national origin or ancestry, physical or mental disability, medical condition, sexual orientation, pregnancy, military or veteran status, citizenship status, and genetic information.

Illumina conducts background checks on applicants for whom a conditional offer of employment has been made. Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable local, state, and federal laws. Background check results may potentially result in the withdrawal of a conditional offer of employment. The background check process and any decisions made as a result shall be made in accordance with all applicable local, state, and federal laws.

Illumina prohibits the use of generative artificial intelligence (AI) in the application and interview process. If you require accommodation to complete the application or interview process, please contact accommodations@illumina.com. To learn more, visit: https://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf. The position will be posted until a final candidate is selected or the requisition has a sufficient number of qualified applicants. This role is not eligible for visa sponsorship.

Posted 1 month ago

Apply

0.0 - 2.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

About YipitData: YipitData is the leading market research and analytics firm for the disruptive economy and recently raised up to $475M from The Carlyle Group at a valuation over $1B. We analyze billions of alternative data points every day to provide accurate, detailed insights on ridesharing, e-commerce marketplaces, payments, and more. Our on-demand insights team uses proprietary technology to identify, license, clean, and analyze the data many of the world's largest investment funds and corporations depend on. For three years and counting, we have been recognized as one of Inc.'s Best Workplaces. We are a fast-growing technology company backed by The Carlyle Group and Norwest Venture Partners. Our offices are located in NYC, Austin, Miami, Denver, Mountain View, Seattle, Hong Kong, Shanghai, Beijing, Guangzhou, and Singapore. We cultivate a people-centric culture focused on mastery, ownership, and transparency. About The Role: Our product offers insights on e-commerce companies and is growing rapidly. We are seeking a Data QA Associate to join our team in India to manage a large portion of the data cleaning and quality assurance processes for this product. As a Data QA Associate, you will be responsible for transforming receipt data into accurate business expense insights. You will work with top-notch engineering and data teams in the U.S. and China. You'll complete your work primarily using data in our proprietary software. This position offers the opportunity to meaningfully contribute to the growth of our company. Exceptional employees may have the opportunity to be promoted and manage a team of other Data QA Specialists. This is a fully remote role based in India. Working hours: 8 am - 5 pm IST. In general, we expect some overlap with Chinese or U.S. work hours. More details on work-hour expectations will be shared during the recruiting process. 
We expect hires to start in the position as soon as possible, and no later than June 2025. As our Data QA Associate, you will: Perform accurate and efficient data extraction, labeling, and cleaning from email panels or AI-generated datasets to ensure high-quality data. Monitor and analyze data trends for various merchants and vendors, including email categorization and trend identification. Maintain the stability of merchant and vendor data, investigating anomalies such as sudden drops or spikes in data volume. Collaborate with teams in both China and the U.S. to ensure data consistency and quality. Provide timely responses to analysts' and clients' demands and inquiries. Ensure compliance with company data security and confidentiality policies. You Are Likely To Succeed If you have: Bachelor's degree or above; majors in Computer Science, Statistics, Business Analytics, or a related field are preferred; 0-2 years of experience as a data or quality assurance analyst; Experience in data tagging, cleaning, or regex; Experience managing multiple processes in parallel; Basic data analysis skills and familiarity with Excel, SQL, or Python are a plus; Exceptional attention to detail, problem-solving, and critical-thinking abilities; Ability to work independently while coordinating with a remote team; Excellent written and verbal communication skills in English, with the ability to interact effectively with vendors and internal teams across time zones and cultures. What We Offer: Our compensation package includes comprehensive benefits, perks, and a competitive salary: We care about your personal life and we mean it. We offer vacation time, parental leave, team events, learning reimbursement, and more! Your growth at YipitData is determined by the impact that you are making, not by tenure, unnecessary facetime, or office politics. Everyone at YipitData is empowered to learn, self-improve, and master their skills in an environment focused on ownership, respect, and trust. 
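The regex-based tagging and cleaning mentioned in the requirements can be illustrated with a minimal sketch; the merchant strings and cleaning rules below are hypothetical examples, not YipitData's actual pipeline:

```python
import re

def clean_merchant_name(raw: str) -> str:
    """Normalize a raw merchant string scraped from an email receipt."""
    name = raw.strip().lower()
    name = re.sub(r"order\s*#?\d+", "", name)   # drop order numbers
    name = re.sub(r"[^a-z0-9&\s]", " ", name)   # strip punctuation
    name = re.sub(r"\s+", " ", name).strip()    # collapse whitespace
    return name

print(clean_merchant_name("AMAZON.COM Order #12345"))  # → amazon com
```

Real cleaning rules would of course be driven by the observed data, but this is the general shape of regex-based normalization work.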
We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender, gender identity or expression, or veteran status. We are proud to be an equal opportunity employer.

Posted 1 month ago

Apply

0.0 - 2.0 years

0 Lacs

Delhi, India

Remote

About YipitData: YipitData is the leading market research and analytics firm for the disruptive economy and recently raised up to $475M from The Carlyle Group at a valuation over $1B. We analyze billions of alternative data points every day to provide accurate, detailed insights on ridesharing, e-commerce marketplaces, payments, and more. Our on-demand insights team uses proprietary technology to identify, license, clean, and analyze the data many of the world's largest investment funds and corporations depend on. For three years and counting, we have been recognized as one of Inc.'s Best Workplaces. We are a fast-growing technology company backed by The Carlyle Group and Norwest Venture Partners. Our offices are located in NYC, Austin, Miami, Denver, Mountain View, Seattle, Hong Kong, Shanghai, Beijing, Guangzhou, and Singapore. We cultivate a people-centric culture focused on mastery, ownership, and transparency. About The Role: Our product offers insights on e-commerce companies and is growing rapidly. We are seeking a Data QA Associate to join our team in India to manage a large portion of the data cleaning and quality assurance processes for this product. As a Data QA Associate, you will be responsible for transforming receipt data into accurate business expense insights. You will work with top-notch engineering and data teams in the U.S. and China. You'll complete your work primarily using data in our proprietary software. This position offers the opportunity to meaningfully contribute to the growth of our company. Exceptional employees may have the opportunity to be promoted and manage a team of other Data QA Specialists. This is a fully remote role based in India. Working hours: 8 am - 5 pm IST. In general, we expect some overlap with Chinese or U.S. work hours. More details on work-hour expectations will be shared during the recruiting process. 
We expect hires to start in the position as soon as possible, and no later than June 2025. As our Data QA Associate, you will: Perform accurate and efficient data extraction, labeling, and cleaning from email panels or AI-generated datasets to ensure high-quality data. Monitor and analyze data trends for various merchants and vendors, including email categorization and trend identification. Maintain the stability of merchant and vendor data, investigating anomalies such as sudden drops or spikes in data volume. Collaborate with teams in both China and the U.S. to ensure data consistency and quality. Provide timely responses to analysts' and clients' demands and inquiries. Ensure compliance with company data security and confidentiality policies. You Are Likely To Succeed If you have: Bachelor's degree or above; majors in Computer Science, Statistics, Business Analytics, or a related field are preferred; 0-2 years of experience as a data or quality assurance analyst; Experience in data tagging, cleaning, or regex; Experience managing multiple processes in parallel; Basic data analysis skills and familiarity with Excel, SQL, or Python are a plus; Exceptional attention to detail, problem-solving, and critical-thinking abilities; Ability to work independently while coordinating with a remote team; Excellent written and verbal communication skills in English, with the ability to interact effectively with vendors and internal teams across time zones and cultures. What We Offer: Our compensation package includes comprehensive benefits, perks, and a competitive salary: We care about your personal life and we mean it. We offer vacation time, parental leave, team events, learning reimbursement, and more! Your growth at YipitData is determined by the impact that you are making, not by tenure, unnecessary facetime, or office politics. Everyone at YipitData is empowered to learn, self-improve, and master their skills in an environment focused on ownership, respect, and trust. 
We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender, gender identity or expression, or veteran status. We are proud to be an equal opportunity employer.

Posted 1 month ago

Apply

3.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

Remote

Job Title: Python Backend Developer - FastAPI, PostgreSQL, Pattern Recognition Location: Remote / Hybrid Type: Full-time Experience: 3 to 7 Years Compensation: USDT only | Based on skill and performance About Us: We are a cutting-edge fintech startup building an AI-powered trading intelligence platform that integrates technical analysis, machine learning, and real-time data processing. Our systems analyze financial markets using custom algorithms to detect patterns, backtest strategies, and deploy automated insights. We're seeking a skilled Python Backend Developer experienced in FastAPI, PostgreSQL, pattern recognition, and financial data workflows. Key Responsibilities Implement detection systems for chart patterns, candlestick patterns, and technical indicators (e.g., RSI, MACD, EMA) Build and scale high-performance REST APIs using FastAPI for real-time analytics and model communication Develop semi-automated pipelines to label financial datasets for supervised/unsupervised ML models Implement and maintain backtesting engines for trading strategies using Python and custom simulation logic Design and maintain efficient PostgreSQL schemas for storing candle data, trades, indicators, and model metadata (Optional) Contribute to frontend integration using Next.js/React for analytics dashboards and visualizations Key Requirements Python (3-7 years): Strong programming fundamentals, algorithmic thinking, and deep Python ecosystem knowledge FastAPI: Proven experience building scalable REST APIs PostgreSQL: Schema design, indexing, complex queries, and performance optimization Pattern Recognition: Experience in chart/candlestick detection, TA-Lib, rule-based or ML-based identification systems Technical Indicators: Familiarity with RSI, Bollinger Bands, Moving Averages, and other common indicators Backtesting Frameworks: Hands-on experience with custom backtesting engines or libraries like Backtrader, PyAlgoTrade Data Handling: Proficiency in NumPy, Pandas, and dataset preprocessing/labeling techniques Version Control: Git/GitHub - comfortable with collaborative workflows Bonus Skills Experience in building dashboards with Next.js / React Familiarity with Docker, Celery, Redis, Plotly, or TradingView Charting Library Previous work with financial datasets or real-world trading systems Exposure to AI/ML model training, SHAP/LIME explainability tools, or reinforcement learning strategies Ideal Candidate Passionate about financial markets and algorithmic trading systems Thrives in fast-paced, iterative development environments Strong debugging, data validation, and model accuracy improvement skills Collaborative mindset - able to work closely with quants, frontend developers, and ML engineers What You'll Get Opportunity to work on next-gen fintech systems with real trading applications Exposure to advanced AI/ML models and live market environments Competitive salary + performance-based bonuses Flexible working hours in a remote-first team
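As a rough illustration of the indicator work described above, here is a minimal SMA-based RSI in pandas. This is an assumption for illustration only; production systems typically use Wilder's smoothing or TA-Lib rather than a plain rolling mean:

```python
import pandas as pd

def rsi(close: pd.Series, period: int = 14) -> pd.Series:
    """Simple (SMA-based) Relative Strength Index in [0, 100]."""
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(period).mean()   # average up-move
    loss = (-delta.clip(upper=0)).rolling(period).mean()  # average down-move
    rs = gain / loss
    return 100 - 100 / (1 + rs)

prices = pd.Series([100, 101, 102, 101, 103, 104, 103, 105, 106, 105,
                    107, 108, 107, 109, 110], dtype=float)
print(rsi(prices).round(1).iloc[-1])
```

The first `period` values are NaN by construction, since the rolling windows are not yet full.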

Posted 1 month ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Company Description We are looking for a Data Analyst with 3+ years of experience. BlueOptima is on a mission to maximize the economic and social value that software engineering organizations are capable of delivering. Our vision is to become the global reference for the optimization of the performance of Software Engineering. Our technology is used by some of the world’s largest organizations, including nine of the world’s top twelve Universal Banks, and a number of large corporates. We are a global organization with headquarters in London and additional offices in India, Mexico, and the US. We are made up of 100+ individuals from more than 20 different countries. We promote an open-minded environment and encourage our employees to create their own success story in this high-performance environment. Location: Bangalore Department: Data Engineering Job Description Job summary: Our ground-breaking technology is built on top of billions of data points that are representative of a developer’s interaction with Source Code and Task Tracking systems. The enormous amount of data BlueOptima processes daily requires specialists to dive into the dataset, identify insights from the data points, and devise solutions to extend and enhance BlueOptima’s product suite. We are looking for talented data analysts who are critical of data and curious to determine the story it narrates, explore vast datasets, and are aptly able to use any and all tools available at their disposal to interrogate the data. A successful candidate will turn data into information, information into insight, and insight into valuable product features. Responsibilities and tasks: Collaborate with the marketing team to produce impactful technical whitepapers by conducting thorough data collection and analysis and contributing to content development. 
Partner with the Machine Learning and Data Engineering team to develop and implement innovative solutions for our Developer Analytics and Team Lead Dashboard products. Provide insightful data analysis and build actionable dashboards to empower data-driven decision-making across business teams (Sales, Customer Success, Marketing). Deliver compelling data visualizations and reports using tools like Tableau and Grafana to communicate key insights to internal and external stakeholders. Identify and implement opportunities to automate data analysis, reporting, and dashboard creation processes to improve efficiency. Qualifications Technical Must have a minimum of 3 years of relevant Work Experience in Data Science, Data Analytics, or Business Intelligence Demonstrate advanced SQL expertise, including performance tuning (indexing, query optimization) and complex data transformation (window functions, CTEs) for extracting insights from large datasets Demonstrate intermediate-level Python skills for data analysis, including statistical modeling, machine learning (scikit-learn, statsmodels), and creating impactful visualizations, with a focus on writing well-documented, reusable code Possess strong data visualization skills with proficiency in at least one enterprise-level tool (e.g., Tableau, Grafana, Power BI), including dashboard creation, interactive visualizations, and data storytelling. 
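The SQL expertise called out above (CTEs plus window functions for running aggregates) can be sketched with Python's built-in sqlite3 driver; the table name and data below are hypothetical, not BlueOptima's schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE commits(author TEXT, day TEXT, loc INTEGER);
INSERT INTO commits VALUES
  ('a', '2024-01-01', 120), ('a', '2024-01-02', 80),
  ('b', '2024-01-01', 50),  ('b', '2024-01-02', 200);
""")

# CTE aggregates per author/day; the window function adds a running total.
rows = conn.execute("""
WITH daily AS (
  SELECT author, day, SUM(loc) AS loc FROM commits GROUP BY author, day
)
SELECT author, day, loc,
       SUM(loc) OVER (PARTITION BY author ORDER BY day) AS running_loc
FROM daily ORDER BY author, day
""").fetchall()

for r in rows:
    print(r)
```

Window functions require SQLite 3.25+, which ships with all recent Python builds; the same query shape carries over to PostgreSQL or BigQuery.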
Behavioral Good communication and is able to express ideas clearly and in a thoughtful manner Comes up with a range of possible directions to analyse data when presented with ill-defined / open-ended problem statements Provides rationale to each analytical direction with pros and cons without any support Showcases lateral thinking ability by approaching a problem in creative directions Demonstrate strong analytical project management skills, with the ability to break down complex data analysis initiatives into well-defined phases (planning, data acquisition, EDA, modeling, visualization, communication), ensuring efficient execution and impactful outcomes Your career progression: In BlueOptima, we strive to strengthen your skills, widen your scope of work and develop your career fast. For this role, you can expect to become more autonomous and start working on your own individual projects. This will also lead to supporting or managing a specific area of metrics for the business (e.g. revenue metrics) and potentially growth to a mentor or Team Lead position. Additional Information Why join our team? Culture and Growth: Global team with a creative, innovative and welcoming mindset. Rapid career growth and opportunity to be an outstanding and visible contributor to the company's success. Freedom to create your own success story in a high performance environment. 
Training programs and Personal Development Plans for each employee Benefits: 33 days of holidays - 18 Annual leave + 7 sick leaves + 8 public and religious holidays Contributions to your Provident Fund which can be matched by the company above the statutory minimum as agreed Private Medical Insurance provided by the company Gratuity payments Claim Mobile/Internet expenses and Professional Development costs Leave Travel Allowance Flexible Work from Home policy - 2 days home p/w Free drinks and snacks in the office International travel opportunities Global annual meet up (most recent meetups have been held in Thailand and Cancun) High-quality equipment (ergonomic chairs and 32" screens) Stay connected with us on LinkedIn or keep an eye on our career page for future opportunities!

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Role KarbonWise is on a mission to simplify carbon accounting and ESG reporting for forward-thinking businesses. We’re looking for an entry-level team member to support our product and GTM teams with custom data analysis, demo preparation, and template creation. This is an ideal role for someone who is analytical, detail-oriented, and excited about climate tech. You’ll help make our product more powerful and easier to use — one dataset, benchmark, and template at a time. What You’ll Do Prepare client-specific demo datasets that reflect realistic scenarios Build and maintain benchmarking datasets (e.g., sectoral emissions, GHG factors) Create and improve Excel/Google Sheets templates used in the product Work with the product team to improve demo usability and ease of data entry Support custom requests for dashboards, mock data, or tool enhancements Research ESG-related data sources and help translate them into usable product insights What We’re Looking For Freshers and recent grads welcome — especially from backgrounds in engineering, economics, environmental sciences, or business Strong skills in Excel/Google Sheets (basic formulas, lookups, tables) Comfortable working with numbers, data, and structure Interest in sustainability, ESG, or climate tech Attention to detail, willingness to learn, and proactive problem-solving Bonus: Familiarity with PowerPoint or Canva for client-facing materials Why Join Us? Work in a fast-growing sustainability tech company Learn from cross-functional teams across product, GTM, and sustainability Build a strong foundation in ESG data, carbon reporting, and product thinking Be part of a mission-driven team tackling real climate challenges PREREQUISITE EXERCISE As a prerequisite, please submit this exercise to divya.sukumar@karbonwise.com Imagine KarbonWise is building a benchmarking dataset for energy intensity (kWh/m²/year) in commercial real estate. 
List 3 segments you would want to include (e.g., malls, offices, warehouses) What kind of data points would you collect for each? Suggest 2–3 sources or ways you might find this data (even if not exact) If you had to make assumptions to fill gaps, how would you document them?
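As a worked example of the metric in this exercise: energy intensity is simply annual consumption divided by floor area. The building figures below are hypothetical:

```python
def energy_intensity(annual_kwh: float, floor_area_m2: float) -> float:
    """Energy use intensity (EUI) in kWh/m²/year."""
    if floor_area_m2 <= 0:
        raise ValueError("floor area must be positive")
    return annual_kwh / floor_area_m2

# A hypothetical mall: 1.2 GWh/year over 8,000 m² of gross leasable area
print(energy_intensity(1_200_000, 8_000))  # → 150.0 kWh/m²/year
```

A benchmarking dataset would record, per segment, which area definition (gross vs. leasable) each intensity figure assumes, since mixing them skews comparisons.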

Posted 1 month ago

Apply

5.0 years

0 Lacs

India

On-site

About Ellucian Ellucian is a global market leader in education technology. We power innovation for higher education, partnering with more than 2,900 customers across 50 countries and serving over 20 million students. Ellucian's AI-powered platform, trained on the richest dataset available in higher education, drives efficiency, personalized experiences, and strengthened engagement for all students, faculty and staff. Fueled by decades of experience with a singular focus on the unique needs of learning institutions, the Ellucian platform features best-in-class SaaS capabilities and delivers insights needed now and into the future. These solutions and services span the entire student lifecycle, from student recruitment, enrollment, and retention to workforce analytics, fundraising, and alumni engagement. Ellucian's innovative solutions, vast ecosystem of partners, and user community of more than 45,000 provide best practices leading to greater institutional success and achieving better student outcomes. Values Rooted in Purpose We embrace the power to lead, the courage to innovate, and the determination to grow. At our core, we believe in humanizing our approach, recognizing that our people are our greatest strength. With a shared vision of transformation, we endeavor to shape a brighter future for higher education. About The Opportunity Ellucian is seeking to hire a Senior SAP Business Analyst to join its Information Technology department. As a Senior Analyst, you will be responsible for managing the Order to Cash processes using the SAP SD module on the ECC platform and will be part of the transformation initiative to migrate to the SAP S/4 HANA solution. This role will also focus on bridging the gap between business operations and IT to drive technological solutions that optimize financial and operational processes in our software business. 
Where you will make an impact Configure, implement and maintain SAP S/4 HANA SD/Subscription Billing module for Order to Cash (OTC) processes, including Sales Order Management, Billing, Delivery Processing, Revenue Recognition, Pricing, and Credit Management. Expertise in configuring SAP RAR (Revenue Accounting and Reporting) module to ensure accurate revenue recognition, especially in complex scenarios. Support migration activities from SAP ECC SD to S/4 HANA SD, including data migration, validation, and testing. Coordinate user acceptance testing (UAT) and support business users in testing. Support end-to-end integration flows, train end-users and provide post-implementation support, ensuring a smooth transition to new systems. Problem-solving skills and the ability to provide day-to-day support in production environments adhering to the SLAs. Provide ongoing support to the business users on day-to-day issues related to the Order to Cash process in SAP ECC SD and RAR module. Troubleshoot and resolve issues across SD modules with a focus on Revenue Recognition (RAR), invoicing, and sales order management. What You Will Bring 5 years of hands-on experience in SAP ECC and S/4 HANA SD module, with focus on the Order to Cash (OTC) processes with at least 2 full implementation cycles. Strong knowledge of SD module configuration, including Sales Order Management, Billing, Revenue Recognition, Pricing, Credit Management, Delivery Processing, and Invoicing. Experience with RAR (Revenue Accounting and Reporting), including configuration and troubleshooting. Good experience in SAP S/4 HANA SD migration, including configuration and post-migration support. A good understanding of integration points between SD and FI Strong communication skills, with the ability to work with both technical teams and business stakeholders. Ability to work in a team-oriented environment and manage multiple priorities effectively. 
What makes #Ellucianlife 22 days annual leave plus 11 public holidays Competitive gratuity policy Group insurance and Annual health checkup plan with a variety of family and wellness benefits. Thrive Flex Lifestyle Account (LSA) that allows you to contribute towards your health, financial or learning interests 5 charitable days to support the community that supports us Wellness Headspace (mental health) Wellbeats (virtual fitness classes) RethinkCare - caregiver support Diversity and inclusion programs that promote employee resource groups such as: Buzzinga and Lean In Team to name a few. Parental leave Employee referral bonuses to encourage the addition of great new people to the team We Foster a learning culture with: Education Assistance Program Professional development opportunities

Posted 1 month ago

Apply

4.0 years

0 Lacs

India

Remote

Full Time Role at EssentiallySports for Growth Product Manager About Us: EssentiallySports is the home for the underserved fan, delivering storytelling that goes beyond the headlines. As a top-10 sports media platform in the U.S., we generate over a billion pageviews a year and engage more than 30 million monthly active users. This massive organic reach fuels our data-driven culture, where deep audience insights meet cultural trends to serve fandom where it lives—and where it’s going next. With zero CAC, we’ve built owned audiences at scale, making organic growth a core part of our identity. The next phase of ES growth centers on our newsletter initiative: in less than 9 months, we’ve built a robust newsletter brand with 700,000+ highly engaged readers and impressive performance metrics: 5 newsletter brands 700k+ subscribers Open rates of 40%-46%. Our Values at ES Focus on the user and all else will follow Hire for intent and not for experience Bootstrapping gives you the freedom to serve the customer and the team instead of investors Internet and technology untap the niches Action-oriented, integrity, freedom, strong communicators, and responsibility All things equal, the one with high agency wins Role Overview: EssentiallySports is seeking a Growth Product Manager who can scale our web platform’s reach, engagement, and impact. This is not a traditional marketing role—your job is to engineer growth through product innovation, user journey optimization, and experimentation. You’ll be the bridge between editorial, tech, and analytics—turning insights into actions that drive sustainable audience and revenue growth. Key Responsibilities Own the entire web user journey from page discovery to conversion to retention. Identify product-led growth opportunities using scroll depth, CTRs, bounce rates, and cohort behavior. Optimize high-traffic areas of the site—landing pages, article CTAs, newsletter modules—for conversion and time-on-page. 
Set up and scale A/B testing and experimentation pipelines for UI/UX, headlines, engagement surfaces, and signup flows. Collaborate with SEO and Performance Marketing teams to translate high-ranking traffic into engaged, loyal users. Partner with content and tech teams to develop recommendation engines, personalization strategies, and feedback loops. Monitor analytics pipelines from GA4 → Athena → dashboards to derive insights and drive decision-making. Introduce AI-driven features (LLM prompts, content auto-summaries, etc.) that personalize or simplify the user experience. Use tools like Jupyter, Google Analytics, Glue, and others to synthesize data into growth opportunities. Who you are 4+ years of experience in product growth, web engagement, or analytics-heavy roles. Deep understanding of web traffic behavior , engagement funnels, bounce/exit analysis, and retention loops. Hands-on experience running product experiments, growth sprints, and interpreting funnel analytics. Strong proficiency in SQL, GA4, marketing analytics, and campaign management Understand customer segmentation, LTV analysis, cohort behavior, and user funnel optimization Thrive in ambiguity and love building things from scratch Passionate about AI, automation, and building sustainable growth engines Thinks like a founder: drives initiatives independently, hunts for insights, moves fast A team player who collaborates across engineering, growth, and editorial teams. Proactive and solution-oriented, always spotting opportunities for real growth. Thrive in a fast-moving environment, taking ownership and driving impact. “Very flexible and open, can ask anything anytime. Real mentorship and coaching. Superb energy in whole ES team.” - when team was asked how would you describe ES to a friend after a round of beers What do you get? 
Fully Remote Job Flexible Working Hours Freedom to own the Problem Statements, and make your own solutions Play with the biggest dataset, as a function of working in a media org. Work directly with the founding team Release features at scale to Millions of Users on Day 1 Freedom to work on multiple and new technologies Bi-annual offsites which are coined “epic” by the team! The permissionless growth team is small, which means your impact on the company’s success will be huge. You’ll have the chance to work with experienced leaders/founders who have built and led multiple product, tech, data and design teams, and grown EssentiallySports to a global scale.
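Where the role above mentions A/B testing and experimentation pipelines, the core significance check is often a two-proportion z-test on conversion rates. A stdlib-only sketch with hypothetical signup counts (not ES data):

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Z-statistic for the difference in conversion rates between variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical: variant B's newsletter module converts 5.6% vs. 4.8% for A
z = two_proportion_z(480, 10_000, 560, 10_000)
print(round(z, 2))
```

A |z| above roughly 1.96 corresponds to significance at the 5% level for a two-sided test; real experimentation platforms also account for peeking and multiple comparisons.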

Posted 1 month ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you. What You’ll Do Apply knowledge of data characteristics and data supply patterns to develop rules and tracking processes that support the data quality model. Prepare data for analytical use by building data pipelines to gather data from multiple sources and systems. Integrate, consolidate, cleanse, and structure data for use by our clients in our solutions. Perform design, creation, and interpretation of large and highly complex datasets. Stay up-to-date with the latest trends and advancements in GCP and related technologies, actively proposing and evaluating new solutions. Understand best practices for data management, maintenance, reporting, and security and use that knowledge to implement improvements in our solutions. Implement security best practices in pipelines and infrastructure. Develop and implement data quality checks and troubleshoot data anomalies. Provide guidance and mentorship to junior data engineers. Review dataset implementations performed by junior data engineers. 
What Experience You Need BS degree in a STEM major or equivalent discipline; Master’s Degree strongly preferred 5+ years of experience as a data engineer or related role Cloud certification strongly preferred Advanced skills using programming languages such as Python or SQL and intermediate-level experience with scripting languages Intermediate-level understanding and experience with Google Cloud Platform and overall cloud computing concepts, as well as basic knowledge of other cloud environments Experience building and maintaining moderately complex data pipelines, troubleshooting issues, transforming and entering data into a data pipeline in order for the content to be digested and usable for future projects Experience designing and implementing moderately complex data models and experience enabling optimization to improve performance Demonstrates advanced Git usage and CI/CD integration skills What Could Set You Apart Experience in AI/ML engineering. Knowledgeable in cloud security and compliance. Experience mentoring engineers or leading training. Proficient with data visualization tools. Cloud certification such as GCP strongly preferred Self-starter Excellent communicator / client-facing Ability to work in a fast-paced environment Flexibility to work across A/NZ time zones based on project needs We offer a hybrid work setting, comprehensive compensation and healthcare packages, attractive paid time off, and organizational growth potential through our online learning platform with guided career tracks. Are you ready to power your possible? Apply today, and get started on a path toward an exciting new career at Equifax, where you can make a difference! Who is Equifax? At Equifax, we believe knowledge drives progress. As a global data, analytics and technology company, we play an essential role in the global economy by helping employers, employees, financial institutions and government agencies make critical decisions with greater confidence. 
We work to help create seamless and positive experiences during life’s pivotal moments: applying for jobs or a mortgage, financing an education or buying a car. Our impact is real and to accomplish our goals we focus on nurturing our people for career advancement and their learning and development, supporting our next generation of leaders, maintaining an inclusive and diverse work environment, and regularly engaging and recognizing our employees. Regardless of location or role, the individual and collective work of our employees makes a difference and we are looking for talented team players to join us as we help people live their financial best. Equifax is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
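The data quality checks described in the responsibilities above can be sketched as a small pandas report; the column names and rules below are hypothetical examples, not Equifax's actual checks:

```python
import pandas as pd

def quality_report(df: pd.DataFrame) -> dict:
    """Basic data-quality checks for one pipeline stage."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "null_counts": df.isna().sum().to_dict(),
        # Domain rule (hypothetical): transaction amounts must be >= 0
        "negative_amounts": int((df["amount"] < 0).sum()),
    }

df = pd.DataFrame({"id": [1, 2, 2, 4, 5],
                   "amount": [10.0, 5.0, 5.0, -3.0, None]})
print(quality_report(df))
```

In production such checks usually run as pipeline gates, failing or quarantining a batch when counts exceed agreed thresholds rather than just printing a report.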

Posted 1 month ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: AI/ML Engineer, Data Platform Location: Bangalore, Pune, India Employment Type: Full Time, Permanent Industry: Software Product Department: Engineering Software & QA Job Description We are looking for a talented and passionate AI/ML Engineer to join our growing development team in Gandhinagar. You will be responsible for building scalable and efficient machine learning pipelines and deploying intelligent solutions for real-world problems. The ideal candidate is hands-on with modern ML/DL frameworks and is comfortable working in a DevOps environment with containerized applications. Roles & Responsibilities Core Responsibilities: Architect and implement ML models using containerized infrastructure (Docker, Kubernetes) with CI/CD integration. Develop intelligent solutions for search engines, text classification, entity recognition, and generative tasks using LLMs. Build end-to-end pipelines: dataset processing, model training, deployment, prediction, and retraining. Design APIs for ML operations: dataset preparation, training, inference, and monitoring. Optimize performance and accuracy of NLP models including NER, NLU, and BERT-based architectures. Collaborate with cross-functional teams to enhance products with intelligent features. Expected Skills & Experience Technical Proficiency: Strong hands-on experience in Python and ML frameworks: PyTorch, TensorFlow, SpaCy, SKLearn, RasaNLU. Solid understanding of NLP techniques, including NER, BERT, transformer models, and LLMs. Experience with FastAPI, Flask, or Django for building APIs. Experience working with databases like MariaDB, MongoDB. Familiarity with Unix/Linux systems and version control systems. DevOps & Deployment CI/CD experience with tools like Jenkins, Maven, etc. Proficient in Docker image creation, container orchestration (Kubernetes / Docker Swarm). Skilled in deploying ML models in production with monitoring and versioning. 
Search & Indexing Experience with Elasticsearch for building intelligent search engines. Bonus Skills (Preferred But Not Mandatory) Experience with Computer Vision, Azure DevOps, or Neo4j. Hands-on with HuggingFace models, Minio, Knative, ArgoCD/Argo Workflows. Experience in document parsing with libraries like Publaynet, PDFTron, PDFjs, PSPDFKit. Worked with Azure Cognitive Services (OCR, Entity Recognition). Knowledge of Azure Blob Storage or Amazon S3. Education UG: Any Graduate PG: Any Postgraduate (ref:hirist.tech)
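The end-to-end pipeline stages listed above (dataset processing, model training, prediction, and retraining) can be sketched as plain functions. This is purely an illustration with invented names and a dependency-free one-dimensional least-squares fit, not code from the employer's stack:

```python
# Hypothetical sketch of the pipeline stages named above (dataset
# processing -> training -> prediction); retraining would simply
# loop back with fresh data. All names here are illustrative.

def process_dataset(rows):
    """Drop incomplete records and split into features/targets."""
    clean = [(x, y) for x, y in rows if x is not None and y is not None]
    xs = [x for x, _ in clean]
    ys = [y for _, y in clean]
    return xs, ys

def train(xs, ys):
    """Fit y = w*x + b by ordinary least squares (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - w * mx
    return {"w": w, "b": b}

def predict(model, x):
    return model["w"] * x + model["b"]

# End-to-end run: process -> train -> predict.
xs, ys = process_dataset([(1, 2), (2, 4), (None, 9), (3, 6)])
model = train(xs, ys)
print(round(predict(model, 4), 6))  # -> 8.0
```

In a production version, each stage would typically sit behind one of the ML-operations APIs the posting mentions, with the model registry and monitoring filled in by real infrastructure.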

Posted 1 month ago

Apply

3.0 years

0 Lacs

India

Remote

🚀 About the Role We're looking for a relentlessly curious Data Engineer to join our team as a Marketing Platform Specialist. This role is for someone who will dedicate themselves to extracting every possible byte of data from marketing platforms - the kind of person who gets excited about discovering hidden API endpoints and undocumented features. You'll craft pristine, high-resolution datasets in Snowflake that fuel advanced analytics and machine learning across the business. 🎯 The Mission Your singular focus: extract every drop of value from the world’s most powerful marketing platforms. Where others use 10% of a tool’s capabilities, you’ll uncover the hidden 90% — from granular auction insights to ephemeral algorithm data. You’ll build the intelligence layer that enables others to scale smarter, faster, and with precision. 🧪 What You’ll Actually Do Platform Data Extraction & Monitoring Reverse-engineer APIs across Meta, Google, TikTok, and others to extract hidden attribution data, auction signals, and algorithmic behavior Exploit beta features and undocumented endpoints to unlock advanced data streams Capture ephemeral data before it disappears — attribution snapshots, pacing drift, algorithm shifts Build real-time monitoring datasets to detect anomalies, pacing issues, and creative decay Scrape competitor websites and dissect tracking logic to reveal platform strategies Business Scaling & Optimization Datasets Build granular spend and performance datasets with dayparting, marginal ROAS, and cost efficiency metrics Develop lookalike audience models enriched with seed performance, overlap scores, and fatigue indicators Create auction intelligence datasets with hour/geo/placement granularity, bid behaviors, and overlap tracking Design optimization datasets for portfolio performance, campaign cannibalization, and creative lifecycle decay Extract machine learning signals from Meta Advantage+, Smart Bidding configs, and TikTok optimization logs Build competitive 
intelligence datasets with SOV trends, auction pressure, and creative benchmarks Advanced Feature & AI Data Engineering Extract structured data from advanced features like predictive analytics, customer match, and A/B testing tools Build multimodal datasets (ad copy, landing pages, video) ready for machine learning workflows Enrich historical marketing data using automated pipelines and AI-powered data cleaning Unified Customer & Attribution Data Build comprehensive GA4 datasets using precise tagging logic and event architecture Unify data from Firebase, Klaviyo, Tealium, and offline sources into full-funnel CDP datasets Engineer identity resolution pipelines using hashed emails, device IDs, and privacy-safe matching Map cross-device customer journeys with detailed attribution paths and timestamp precision Snowflake Architecture & Analytics Enablement Design and maintain scalable, semantic-layered Snowflake datasets with historical tracking and versioning Use S3 and data lakes to manage large-scale, unstructured data across channels Implement architectures suited for exploration, BI, real-time streaming, and ML modeling — including star schema, medallion, and Data Vault patterns Build dashboards and tools that reveal inefficiencies, scaling triggers, and creative performance decay 🎓 Skills & Experience Required 3+ years as a Data Engineer with deep experience in MarTech or growth/performance marketing Advanced Python for API extraction, automation, and orchestration JavaScript proficiency for tracking and GTM customization Expert-level experience with GA4 implementation and data handling Hands-on experience with Snowflake, including performance tuning and scaling Comfortable working with S3 and data lakes for semi/unstructured data Strong SQL and understanding of scalable data models 🤞Bonus Points if You Have Experience with dbt for transformation and modeling CI/CD pipelines using GitHub Actions or similar Knowledge of vector databases (e.g., Pinecone, Weaviate) 
for AI/ML readiness Familiarity with GPU computing for high-performance data workflows Data app development using R Shiny or Streamlit 🚀 You'll Excel Here If You Are: Relentlessly curious : You don’t wait for someone to show you the API docs; you find the endpoints yourself Detail-obsessed : You notice the subtle changes, the disappearing fields, the data drift others overlook Self-directed: You don’t need micromanagement. Just give you the mission, and you’ll reverse-engineer the map Comfortable with ambiguity: You can navigate undocumented features, partial datasets, and platform quirks without panic Great communicator: You explain technical decisions clearly, with just enough detail for analysts, marketers, and fellow engineers to move forward Product-minded: You think in terms of impact, not just pipelines. Every dataset you build is a stepping stone to better strategy, smarter automation, or faster growth 🔥Why the Conqueror: ⭐️Shape the data infrastructure powering real business growth 💡Join a purpose-driven, fast-moving team 📈 Work with fast-scaling e-commerce brands 🧠Personal development budget to continuously sharpen your skills 🏠Work remotely from anywhere 💼 3000 - 4200 euros Gross Salary/month 💛 About us We are a growing team of passionate, performance-driven individuals on a mission to be the best at growing multiple e-commerce businesses with great products. For the past 7 years, we’ve gathered a powerful community of over 1 million people globally, empowering people to build and sustain healthy habits in an enjoyable way. Our digital and physical products, the Conqueror Challenges App and epic medals, have helped users walk, run, cycle, swim, and move through the virtual equivalents of iconic routes. We’ve partnered with Warner Bros., Disney, and others to launch global hits like THE LORD OF THE RINGS™, HARRY POTTER™, and STAR WARS™ virtual challenges. Now, we’re stepping into an exciting new chapter! 
While continuing to grow our core business, we’re actively acquiring and building new e-commerce brands. We focus on using our marketing and operational strengths to scale these businesses, striving to always be outstanding in everything we do and delivering more value to more people.
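The identity-resolution bullet above mentions privacy-safe matching on hashed emails. A minimal sketch of that idea (data and source labels invented for illustration): both sides normalize and SHA-256-hash addresses, then join on the digest, so raw emails never need to be exchanged.

```python
# Illustrative only, not the company's actual pipeline: join two
# datasets on a hash of the normalized email address.
import hashlib

def email_key(email: str) -> str:
    """Normalize (trim, lowercase) then hash, so formatting noise
    does not break the join and the raw address is never shared."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

crm = {email_key(e): "crm" for e in ["Jane@Example.com", "bob@shop.io"]}
platform = {email_key(e): "ads" for e in ["jane@example.com ", "eve@mail.net"]}

# Jane matches despite case/whitespace differences in the raw records.
matched = set(crm) & set(platform)
print(len(matched))  # -> 1
```

Real deployments layer salting, device IDs, and probabilistic matching on top, but the normalize-hash-join core is the same.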

Posted 1 month ago

Apply

0.0 - 2.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

About YipitData: YipitData is the leading market research and analytics firm for the disruptive economy and recently raised up to $475M from The Carlyle Group at a valuation over $1B. We analyze billions of alternative data points every day to provide accurate, detailed insights on ridesharing, e-commerce marketplaces, payments, and more. Our on-demand insights team uses proprietary technology to identify, license, clean, and analyze the data many of the world's largest investment funds and corporations depend on. For three years and counting, we have been recognized as one of Inc's Best Workplaces. We are a fast-growing technology company backed by The Carlyle Group and Norwest Venture Partners. Our offices are located in NYC, Austin, Miami, Denver, Mountain View, Seattle, Hong Kong, Shanghai, Beijing, Guangzhou, and Singapore. We cultivate a people-centric culture focused on mastery, ownership, and transparency. About The Role: Our product offers insights on E-commerce companies and is growing rapidly. We are seeking a Data QA Associate to join our team in India to manage a large portion of the data cleaning and quality assurance processes for this product. As a Data QA Associate, you will be responsible for transforming the receipt data into accurate business expense insights. You will work with top-notch engineering and data teams in the U.S. and China. You'll complete your work primarily using data in our proprietary software. This position offers the opportunity to meaningfully contribute to the growth of our company. Exceptional employees may have the opportunity to be promoted and manage a team of other Data QA Specialists. This is a fully remote role based in India. Working hours: 8 am - 5 pm IST. In general, we expect some overlap with Chinese or U.S. work hours. More details on work-hour expectations will be shared during the recruiting process. 
We expect hires to start in the position as soon as possible, and no later than June 2025. As our Data QA Associate, you will: Perform accurate and efficient data extraction, labeling, and cleaning from email panels or AI-generated datasets to ensure high-quality data. Monitor and analyze data trends for various merchants and vendors, including email categorization and trend identification. Maintain the stability of merchant and vendor data, investigating anomalies such as sudden drops or spikes in data volume. Collaborate with teams in both China and the U.S. to ensure data consistency and quality. Provide timely responses to analysts' and clients' demands and inquiries. Ensure compliance with company data security and confidentiality policies. You Are Likely To Succeed If you have: Bachelor's degree or above. Majors in Computer Science, Statistics, Business Analytics, or a related field are preferred; 0 - 2 years of experience as a data or quality assurance analyst; Experience in data tagging, cleaning, or regex Experience managing multiple processes in parallel Basic data analysis skills and familiarity with Excel, SQL, or Python are a plus. Exceptional attention to detail, problem-solving and critical thinking abilities Ability to work independently while coordinating with a remote team. Excellent written and verbal communication skills in English, with the ability to interact effectively with vendors and internal teams across time zones and cultures What We Offer: Our compensation package includes comprehensive benefits, perks, and a competitive salary: We care about your personal life and we mean it. We offer vacation time, parental leave, team events, learning reimbursement, and more! Your growth at YipitData is determined by the impact that you are making, not by tenure, unnecessary facetime, or office politics. Everyone at YipitData is empowered to learn, self-improve, and master their skills in an environment focused on ownership, respect, and trust. 
We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender, gender identity or expression, or veteran status. We are proud to be an equal opportunity employer.
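For a flavor of the regex-based tagging and cleaning this role mentions (turning raw receipt emails into structured fields), here is a minimal sketch; the email subject and patterns are invented for illustration, not YipitData's actual rules:

```python
# Hypothetical receipt-tagging sketch: pull an order id and an
# amount out of an email subject line with regular expressions.
import re

ORDER_RE = re.compile(r"order\s*#?\s*(\d+)", re.IGNORECASE)
AMOUNT_RE = re.compile(r"(?:Rs\.?|INR|\$)\s*([\d,]+\.?\d*)")

def tag_receipt(subject: str) -> dict:
    """Return structured fields, or None where nothing matched."""
    order = ORDER_RE.search(subject)
    amount = AMOUNT_RE.search(subject)
    return {
        "order_id": order.group(1) if order else None,
        "amount": float(amount.group(1).replace(",", "")) if amount else None,
    }

print(tag_receipt("Your order #12345 shipped - total $1,299.50"))
# -> {'order_id': '12345', 'amount': 1299.5}
```

In practice QA work like this is mostly about maintaining and auditing such patterns as merchant email formats drift over time.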

Posted 1 month ago

Apply

4.0 - 6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Company Description About Sopra Steria Sopra Steria, a major Tech player in Europe with 56,000 employees in nearly 30 countries, is recognized for its consulting, digital services and software development. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organizations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a fully collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2023, the Group generated revenues of €5.8 billion. Job Description The world is how we shape it. Good functional knowledge of standard SAP Production modules along with their integration with other SAP modules. The relevant solution capabilities for the product are: Master Planning (PP-MP), Demand Management (PP-MP-DEM) & Long Term Planning (PP-MP-LTP), Capacity Planning (PP-CRP), Material Requirements Planning (PP-MRP), Repetitive Manufacturing (PP-REM), Production lot planning/individual project planning, Assembly to Order (LO-ASM), Production Planning for Process Industries (PP-PI). Familiarity with manufacturing processes. Comfortable with components of SAP PP such as BOM (PP-BD-BOM), Production Version, Work Center, Routings, Production Planning Cycle, and the concerned dataset (table structure). The candidate should have worked on integrated systems and should be comfortable with monitoring of interfaces and applications. The candidate must be familiar with working on heavily customized objects. A basic understanding of SAP ABAP along with debugging is a plus point. 
Experience working in Project and Application Support Services in an end-user-facing position. Familiarity with Incident Management and Problem Management, along with an understanding of business priority and criticality. The candidate should also have worked on Change Management processes. At least one prior experience in support or customer service. Total Experience Expected: 04-06 years Qualifications BE/B.Tech/MCA Additional Information At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.

Posted 1 month ago

Apply

12.0 years

6 - 9 Lacs

Hyderābād

On-site

Our vision is to transform how the world uses information to enrich life for all . Micron Technology is a world leader in innovating memory and storage solutions that accelerate the transformation of information into intelligence, inspiring the world to learn, communicate and advance faster than ever. Principal / Senior Systems Performance Engineer Micron Data Center and Client Workload Engineering in Hyderabad, India, is seeking a senior/principal engineer to join our dynamic team. The successful candidate will primarily contribute to the ML development, ML DevOps, HBM program in the data center by analyzing how AI/ML workloads perform on the latest MU-HBM, Micron main memory, expansion memory and near memory (HBM/LP) solutions, conduct competitive analysis, showcase the benefits that workloads see with MU-HBM’s capacity / bandwidth / thermals, contribute to marketing collateral, and extract AI/ML workload traces to help optimize future HBM designs. Job Responsibilities: The Job Responsibilities include but are not limited to the following: Design, implement, and maintain scalable & reliable ML infrastructure and pipelines. Collaborate with data scientists and ML engineers to deploy machine learning models into production environments. Automate and optimize ML workflows, including data preprocessing, model training, evaluation, and deployment. Monitor and manage the performance, reliability, and scalability of ML systems. Troubleshoot and resolve issues related to ML infrastructure and deployments. Implement and manage distributed training and inference solutions to enhance model performance and scalability. Utilize DeepSpeed, TensorRT, vLLM for optimizing and accelerating AI inference and training processes. Understand key considerations for ML models, such as: transformer architectures, precision, quantization, distillation, attention span & KV cache, MoE, etc. Build workload memory access traces from AI models. 
Study system balance ratios for DRAM to HBM in terms of capacity and bandwidth to understand and model TCO. Study data movement between CPU, GPU and the associated memory subsystems (DDR, HBM) in heterogeneous system architectures via connectivity such as PCIe/NVLINK/Infinity Fabric to understand the bottlenecks in data movement for different workloads. Develop an automated testing framework through scripting. Customer engagements and conference presentations to showcase findings and develop whitepapers. Requirements: Strong programming skills in Python and familiarity with ML frameworks such as TensorFlow, PyTorch, or scikit-learn. Experience in data preparation: cleaning, splitting, and transforming data for training, validation, and testing. Proficiency in model training and development: creating and training machine learning models. Expertise in model evaluation: testing models to assess their performance. Skills in model deployment: launching server, live inference, batched inference Experience with AI inference and distributed training techniques. Strong foundation in GPU and CPU processor architecture Familiarity with and knowledge of server system memory (DRAM) Strong experience with benchmarking and performance analysis Strong software development skills using leading scripting, programming languages and technologies (Python, CUDA, C, C++) Familiarity with PCIe and NVLINK connectivity Preferred Qualifications: Experience in quickly building AI workflows: building pipelines and model workflows to design, deploy, and manage consistent model delivery. Ability to easily deploy models anywhere: using managed endpoints to deploy models and workflows across accessible CPU and GPU machines. Understanding of MLOps: the overarching concept covering the core tools, processes, and best practices for end-to-end machine learning system development and operations in production. 
Knowledge of GenAIOps: extending MLOps to develop and operationalize generative AI solutions, including the management of and interaction with a foundation model. Familiarity with LLMOps: focused specifically on developing and productionizing LLM-based solutions. Experience with RAGOps: focusing on the delivery and operation of RAGs, considered the ultimate reference architecture for generative AI and LLMs. Data management: collect, ingest, store, process, and label data for training and evaluation. Configure role-based access control; dataset search, browsing, and exploration; data provenance tracking, data logging, dataset versioning, metadata indexing, data quality validation, dataset cards, and dashboards for data visualization. Workflow and pipeline management: work with cloud resources or a local workstation; connect data preparation, model training, model evaluation, model optimization, and model deployment steps into an end-to-end automated and scalable workflow combining data and compute. Model management: train, evaluate, and optimize models for production; store and version models along with their model cards in a centralized model registry; assess model risks, and ensure compliance with standards. Experiment management and observability: track and compare different machine learning model experiments, including changes in training data, models, and hyperparameters. Automatically search the space of possible model architectures and hyperparameters for a given model architecture; analyze model performance during inference, monitor model inputs and outputs for concept drift. Synthetic data management: extend data management with a new native generative AI capability. Generate synthetic training data through domain randomization to increase transfer learning capabilities. Declaratively define and generate edge cases to evaluate, validate, and certify model accuracy and robustness. 
Embedding management: represent data samples of any modality as dense multi-dimensional embedding vectors; generate, store, and version embeddings in a vector database. Visualize embeddings for improved exploration. Find relevant contextual information through vector similarity search for RAGs. Education: Bachelor’s or higher (with 12+ years of experience) in Computer Science or related field. About Micron Technology, Inc. We are an industry leader in innovative memory and storage solutions transforming how the world uses information to enrich life for all . With a relentless focus on our customers, technology leadership, and manufacturing and operational excellence, Micron delivers a rich portfolio of high-performance DRAM, NAND, and NOR memory and storage products through our Micron® and Crucial® brands. Every day, the innovations that our people create fuel the data economy, enabling advances in artificial intelligence and 5G applications that unleash opportunities — from the data center to the intelligent edge and across the client and mobile user experience. To learn more, please visit micron.com/careers All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status. To request assistance with the application process and/or for reasonable accommodations, please contact hrsupport_india@micron.com Micron prohibits the use of child labor and complies with all applicable laws, rules, regulations, and other international and industry labor standards. Micron does not charge candidates any recruitment fees or unlawfully collect any other payment from candidates as consideration for their employment with Micron. AI alert : Candidates are encouraged to use AI tools to enhance their resume and/or application materials. However, all information provided must be accurate and reflect the candidate's true skills and experiences. 
Misuse of AI to fabricate or misrepresent qualifications will result in immediate disqualification. Fraud alert: Micron advises job seekers to be cautious of unsolicited job offers and to verify the authenticity of any communication claiming to be from Micron by checking the official Micron careers website.
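The vector-similarity search described under embedding management can be illustrated with a dependency-free cosine-similarity sketch; a real vector database would do this at scale with approximate nearest-neighbor indexes, and the vectors and document names below are invented:

```python
# Minimal sketch of vector similarity search for RAG-style retrieval:
# score each stored embedding against the query and return the best.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "vector store": document id -> embedding (invented values).
corpus = {
    "doc_hbm": [0.9, 0.1, 0.0],
    "doc_nand": [0.1, 0.9, 0.2],
}
query = [0.8, 0.2, 0.1]

# Retrieve the most similar document to the query embedding.
best = max(corpus, key=lambda k: cosine(query, corpus[k]))
print(best)  # -> doc_hbm
```

The retrieved documents then become the contextual information fed to the generator, which is the core of the RAG pattern the listing references.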

Posted 1 month ago

Apply

4.0 - 6.0 years

5 - 7 Lacs

Bengaluru

On-site

SAP PP Functional Analyst Full-time Company Description About Sopra Steria Sopra Steria, a major Tech player in Europe with 56,000 employees in nearly 30 countries, is recognized for its consulting, digital services and software development. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organizations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a fully collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2023, the Group generated revenues of €5.8 billion. The world is how we shape it. Job Description Good functional knowledge of standard SAP Production modules along with their integration with other SAP modules. The relevant solution capabilities for the product are: Master Planning (PP-MP), Demand Management (PP-MP-DEM) & Long Term Planning (PP-MP-LTP), Capacity Planning (PP-CRP), Material Requirements Planning (PP-MRP), Repetitive Manufacturing (PP-REM), Production lot planning/individual project planning, Assembly to Order (LO-ASM), Production Planning for Process Industries (PP-PI). Familiarity with manufacturing processes. Comfortable with components of SAP PP such as BOM (PP-BD-BOM), Production Version, Work Center, Routings, Production Planning Cycle, and the concerned dataset (table structure). The candidate should have worked on integrated systems and should be comfortable with monitoring of interfaces and applications. The candidate must be familiar with working on heavily customized objects. A basic understanding of SAP ABAP along with debugging is a plus point. 
Experience working in Project and Application Support Services in an end-user-facing position. Familiarity with Incident Management and Problem Management, along with an understanding of business priority and criticality. The candidate should also have worked on Change Management processes. At least one prior experience in support or customer service. Total Experience Expected: 04-06 years Qualifications BE/B.Tech/MCA Additional Information At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.

Posted 1 month ago

Apply

0.0 - 2.0 years

5 - 7 Lacs

Bengaluru

Remote

About YipitData: YipitData is the leading market research and analytics firm for the disruptive economy and recently raised up to $475M from The Carlyle Group at a valuation over $1B. We analyze billions of alternative data points every day to provide accurate, detailed insights on ridesharing, e-commerce marketplaces, payments, and more. Our on-demand insights team uses proprietary technology to identify, license, clean, and analyze the data many of the world's largest investment funds and corporations depend on. For three years and counting, we have been recognized as one of Inc's Best Workplaces. We are a fast-growing technology company backed by The Carlyle Group and Norwest Venture Partners. Our offices are located in NYC, Austin, Miami, Denver, Mountain View, Seattle, Hong Kong, Shanghai, Beijing, Guangzhou, and Singapore. We cultivate a people-centric culture focused on mastery, ownership, and transparency. About The Role: Our product offers insights on E-commerce companies and is growing rapidly. We are seeking a Data QA Associate to join our team in India to manage a large portion of the data cleaning and quality assurance processes for this product. As a Data QA Associate, you will be responsible for transforming the receipt data into accurate business expense insights. You will work with top-notch engineering and data teams in the U.S. and China. You'll complete your work primarily using data in our proprietary software. This position offers the opportunity to meaningfully contribute to the growth of our company. Exceptional employees may have the opportunity to be promoted and manage a team of other Data QA Specialists. This is a fully remote role based in India. Working hours: 8 am - 5 pm IST. In general, we expect some overlap with Chinese or U.S. work hours. More details on work-hour expectations will be shared during the recruiting process. 
We expect hires to start in the position as soon as possible, and no later than June 2025. As our Data QA Associate, you will: Perform accurate and efficient data extraction, labeling, and cleaning from email panels or AI-generated datasets to ensure high-quality data. Monitor and analyze data trends for various merchants and vendors, including email categorization and trend identification. Maintain the stability of merchant and vendor data, investigating anomalies such as sudden drops or spikes in data volume. Collaborate with teams in both China and the U.S. to ensure data consistency and quality. Provide timely responses to analysts' and clients' demands and inquiries. Ensure compliance with company data security and confidentiality policies. You Are Likely To Succeed If you have: Bachelor's degree or above. Majors in Computer Science, Statistics, Business Analytics, or a related field are preferred; 0 - 2 years of experience as a data or quality assurance analyst; Experience in data tagging, cleaning, or regex Experience managing multiple processes in parallel Basic data analysis skills and familiarity with Excel, SQL, or Python are a plus. Exceptional attention to detail, problem-solving and critical thinking abilities Ability to work independently while coordinating with a remote team. Excellent written and verbal communication skills in English, with the ability to interact effectively with vendors and internal teams across time zones and cultures What We Offer: Our compensation package includes comprehensive benefits, perks, and a competitive salary: We care about your personal life and we mean it. We offer vacation time, parental leave, team events, learning reimbursement, and more! Your growth at YipitData is determined by the impact that you are making, not by tenure, unnecessary facetime, or office politics. Everyone at YipitData is empowered to learn, self-improve, and master their skills in an environment focused on ownership, respect, and trust. 
We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender, gender identity or expression, or veteran status. We are proud to be an equal opportunity employer.

Posted 1 month ago

Apply

0 years

0 Lacs

India

Remote

Company Description DeepLLMData focuses on coding SFT, RLHF, STEM, PhD, Maths, Image and Video data annotators and provisioning. Role Description This is a part-time remote role for LLM model training: agentic dataset curation. We are looking for somebody with good Python experience and a solid understanding of agentic flows. The role involves curating agentic dataset tasks on a day-to-day basis. Please note that you should have worked on data curation for agentic AI. Qualifications Good understanding of agentic flows. Expertise in Python programming Experience in dataset curation and LLM model training Ability to work independently and remotely Excellent problem-solving and critical thinking skills

Posted 1 month ago

Apply

0 years

0 Lacs

India

Remote

🚨 Hiring: AI/NLP Intern – Clinical Note Generation & LLM Evaluation (Remote) 📅 Start: ASAP | 🕒 Duration: 3–6 months | 💼 Unpaid (Potential for paid extension) We're a healthcare AI startup building agentic tools for medical documentation, patient monitoring, and insurance workflows. Our stack combines LLMs, multimodal AI, and real-time speech processing, backed by talent from NVIDIA, JP Morgan Chase, EY, and UC Health. We’re looking for a sharp AI/NLP intern to help us evaluate and fine-tune language models for SOAP note generation. You’ll work on real-world clinical data, build benchmarks, and integrate models into live systems. Perfect for students or early-career researchers passionate about applied AI in healthcare. 🔍 What You'll Work On: Fine-tune & benchmark LLMs (LLaMA, Mistral, MedAlpaca) Evaluate models using BLEU, ROUGE, BERTScore, and factuality Build NER + entity linking pipelines Deploy APIs (FastAPI) and work with front-end engineers Run error analyses, iterate prompts, explore new ideas ✅ Must-Have Skills: Python + HuggingFace Transformers + basic ML/NLP Understanding of prompting, attention, tokenization Jupyter or Colab for experimentation 💡 Bonus: FastAPI, React, full-stack experience Knowledge of clinical NLP or biomedical informatics 🔬 Evaluation Task: We shortlist candidates via a hands-on task using the ACI-Bench dataset (doctor–patient conversations → SOAP notes). You'll benchmark open-source LLMs and submit a short report or notebook. Details will be sent after initial screening. However, proactive candidates can already: Understand the Data: https://github.com/wyim/aci-bench/blob/main/data/challenge_data/train.csv Analyze the format of the doctor-patient encounters and how they map to the structured clinical notes. Explore the narrative and SOAP-style elements in the provided notes. Select and Apply LLMs - Choose one or more open-source LLMs (e.g., LLaMA, Mistral, MedAlpaca) that can be used to generate notes from the conversations. 
Decide whether to use zero-shot prompting, few-shot learning, or fine-tuning approaches. Define Evaluation Metrics - focus on hallucination and faithfulness 🎁 Perks: Work on real AI-for-healthcare systems Mentorship from experts High-impact portfolio project Strong LoR + possible transition to paid contract 📩 Apply by sending your resume + GitHub (if any) + a short blurb on your NLP experience to [founder@saans.ai]. Let’s build the future of clinical care together. 🚀 #Hiring #AIInternship #NLP #LLM #HealthcareAI #GenerativeAI #RemoteJobs #InternshipOpportunity #MedTech #MachineLearning #TechForGood #DigitalHealth #AIInHealthcare
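As a rough starting point for the evaluation task above, unigram-overlap scoring in the spirit of ROUGE-1 recall can be sketched in a few lines; real benchmarking should use a maintained package such as rouge-score, and the reference/generated notes below are invented for illustration:

```python
# Toy ROUGE-1-recall-style score: fraction of reference unigrams
# that also appear in the candidate (clipped by count).
from collections import Counter

def rouge1_recall(reference: str, candidate: str) -> float:
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum(min(ref[w], cand[w]) for w in ref)
    return overlap / sum(ref.values())

ref_note = "patient reports mild chest pain for two days"
gen_note = "patient reports chest pain lasting two days"
print(round(rouge1_recall(ref_note, gen_note), 3))  # -> 0.75
```

Note that overlap metrics like this say nothing about hallucination: a fluent note can score well while inventing findings, which is why the task also asks for faithfulness-focused metrics.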

Posted 1 month ago

Apply