90.0 years
0 Lacs
Delhi
Remote
Newsweek is the global media organization that has earned audience time and trust for more than 90 years. Newsweek reaches 100 million people each month with thought-provoking news, opinion, images, graphics, and video delivered across a dozen print and digital platforms. Headquartered in New York City, Newsweek also publishes international editions in EMEA and Asia.

Newsweek is hiring a Researcher to join a growing team in India. The Researcher conducts desk research, engages with experts, and provides contextual insights to supplement quantitative data. This role supports methodology development and ensures the project reflects current trends and expert perspectives. The Researcher also identifies and collects individual (non-dataset) data elements.

Key Responsibilities:
- Conduct desk research to identify relevant data sources, data, and contextual information.
- Identify and engage with experts, decision-makers, and stakeholders for surveys and interviews.
- Support the Project Manager and Data Analyst with qualitative insights and background research.
- Contribute to the development and validation of the ranking methodology.
- Document research findings and expert input for project transparency.

Main Deliverables:
- Desk research summaries and source documentation.
- Methodology documentation and validation reports.
- Design and execution of surveys.
- Analysis of survey and expert panel results.

Collaboration & Integration:
- Collaborates with the Data Analyst to ensure data is contextualized and validated.
- Provides the Project Manager with research updates and expert feedback.
- Supports the integration of research findings into editorial and ranking outputs.

Required skills:
- Experience in a research environment
- Familiarity with the design, execution, and analysis of surveys
- Familiarity with relevant survey tools

General Expectations for All Roles:
- Demonstrate adaptability and proactive problem-solving.
- Maintain clear and timely communication within the team and with stakeholders.
- Uphold high standards of data integrity, research rigor, and project transparency.
- Contribute to a collaborative and inclusive project culture.
- Ability to work remotely

Newsweek is an equal opportunity employer. We seek employees of diverse backgrounds and are committed to providing an inclusive, equitable and respectful workplace.
Posted 12 hours ago
1.0 years
2 - 3 Lacs
Gurgaon
On-site
DESCRIPTION
Alexa Shopping Operations strives to become the most reliable source for dataset generation and annotations. We work in collaboration with Shopping feature teams to enhance customer experience (CX) quality across shopping features, devices, and locales. Our primary focus lies in handling annotations for training, measuring, and improving Artificial Intelligence (AI) and Large Language Models (LLMs), enabling Amazon to deliver a superior shopping experience to customers worldwide. Our mission is to empower Amazon's LLMs through Reinforcement Learning from Human Feedback (RLHF) across various categories at high speed. We aspire to provide an end-to-end data solution for the LLM lifecycle, leveraging the latest technology alongside our operational excellence. By joining us, you will play a pivotal role in shaping the future of the shopping experience for customers worldwide.

Key job responsibilities
- Process annotation and data analysis tasks with high efficiency and quality in a fast-paced environment
- Provide floor support to the Operations Manager in running day-to-day operations, working closely with Data Associates
- Handle work prioritization and deliver based on business needs
- Track and report ops metrics and ensure delivery on all KPIs and SLAs
- Work closely with your team members and managers to drive process efficiencies and explore opportunities for automation
- Strive to enhance the productivity and effectiveness of the data generation and annotation processes
The tasks will be primarily repetitive in nature and will require the individual to make judgment-based decisions, keeping in mind the guidelines provided in the SOP.

BASIC QUALIFICATIONS
- Graduate or equivalent (up to 1 year of experience)
- Candidates must demonstrate French language proficiency in verbal, writing, reading, and comprehension. Required language level: B2.2/BA/Advanced Diploma
- Good English language proficiency: verbal, writing, reading, and comprehension
- Strong analytical and communication skills
- Passion for delivering a positive customer experience, and the ability to maintain composure in difficult situations
- Ability to effectively and efficiently complete challenging goals or assignments within defined SLAs

PREFERRED QUALIFICATIONS
- Basic level of Excel knowledge
- Familiarity with online retail (e-commerce industry)
- Previous experience as an AI trainer; knowledge of AI and NLP
- Experience with Artificial Intelligence interaction, such as prompt generation with OpenAI's models
- Experience in content or editorial writing

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 12 hours ago
5.0 years
4 - 8 Lacs
Chennai
On-site
DESCRIPTION
Key Responsibilities:
- Own and develop advanced substitutability analysis frameworks combining text-based and visual matching capabilities
- Drive technical improvements to product matching models to enhance accuracy beyond the current 79% in structured categories
- Design category-specific matching criteria, particularly for complex categories like fashion, where accuracy is currently at 20%
- Develop and implement advanced image matching techniques including pattern recognition, style segmentation, and texture analysis
- Create performance measurement frameworks to evaluate product matching accuracy across different product categories
- Partner with multiple data and analytics teams to integrate various data signals
- Provide technical expertise in scaling substitutability analysis across 2000 different product types in multiple markets

Technical Requirements:
- Deep expertise in developing hierarchical matching systems
- Strong background in image processing and visual similarity algorithms
- Experience with large-scale data analysis and model performance optimization
- Ability to work with multiple data sources and complex matching criteria

Key job responsibilities
Success Metrics:
- Drive improvement in substitutability accuracy to >70% across all categories
- Reduce manual analysis time for product matching identification
- Successfully implement enhanced visual matching capabilities
- Create scalable solutions for multi-market implementation

A day in the life
Design, develop, implement, test, document, and operate large-scale, high-volume, high-performance data structures for business intelligence analytics. Implement data structures using best practices in data modeling, ETL/ELT processes, SQL, Oracle, and OLAP technologies. Provide on-line reporting and analysis using OBIEE business intelligence tools and a logical abstraction layer against large, multi-dimensional datasets and multiple sources. Gather business and functional requirements and translate these requirements into robust, scalable, operable solutions that work well within the overall data architecture. Analyze source data systems and drive best practices in source teams. Participate in the full development life cycle, end-to-end, from design, implementation, and testing to documentation, delivery, support, and maintenance. Produce comprehensive, usable dataset documentation and metadata. Evaluate and make decisions around dataset implementations designed and proposed by peer data engineers. Evaluate and make decisions around the use of new or existing software products and tools. Mentor junior Business Research Analysts.

About the team
The RBS-Availability program includes Selection Addition (where new Head-Selections are added based on gaps identified by Selection Monitoring, SM), Buyability (ensuring new HS additions are buyable and recovering established ASINs that became non-buyable), SoROOS (rectifying defects for sourceable out-of-stock ASINs), Glance View Speed (offering ASINs with the best promise speed based on Store/Channel/FC-level nuances), Emerging MPs, and ASIN Productivity (ensuring every ASIN's actual contribution profit meets or exceeds the estimate). The North Star of the Availability program is to "Ensure all customer-relevant (HS) ASINs are available in Amazon Stores with guaranteed delivery promise at an optimal speed." To achieve this, we collaborate with SM, SCOT, Retail Selection, Category, and US-ACES to identify overall opportunities, defect drivers, and ingress across forecasting, sourcing, procurability, and availability systems, fixing them through UDE/Tech-based solutions.

BASIC QUALIFICATIONS
- 5+ years of SQL experience
- Experience programming to extract, transform, and clean large (multi-TB) data sets
- Experience with the theory and practice of design of experiments and statistical analysis of results
- Experience with AWS technologies
- Experience in scripting for automation (e.g., Python) and advanced SQL skills
- Experience with the theory and practice of information retrieval, data science, machine learning, and data mining

PREFERRED QUALIFICATIONS
- Experience working directly with business stakeholders to translate between data and business needs
- Experience managing, analyzing, and communicating results to senior leadership

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
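The text-based side of product matching described in this role is often prototyped with a simple token-overlap similarity before moving to learned models. A minimal, hypothetical sketch in Python; the tokenizer and 0.5 threshold are illustrative assumptions, not the team's actual system:

```python
def tokenize(title: str) -> set[str]:
    # Lowercase and split a product title into a set of word tokens.
    return set(title.lower().split())

def jaccard_similarity(a: str, b: str) -> float:
    # Jaccard similarity: |intersection| / |union| of the token sets.
    ta, tb = tokenize(a), tokenize(b)
    if not ta and not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def is_substitutable(a: str, b: str, threshold: float = 0.5) -> bool:
    # Flag two products as candidate substitutes when title overlap clears
    # an (illustrative) threshold; a real system would combine this with
    # visual and hierarchical signals, as the posting describes.
    return jaccard_similarity(a, b) >= threshold
```

For example, "red cotton shirt" and "blue cotton shirt" share two of four distinct tokens, giving a similarity of 0.5.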
Posted 12 hours ago
1.0 - 3.0 years
8 - 9 Lacs
Chennai
On-site
Job requisition ID: 85384
Date: Jul 31, 2025
Location: Chennai
Designation: Assistant Manager
Entity: Deloitte Touche Tohmatsu India LLP

Your potential, unleashed.
India’s impact on the global economy has increased at an exponential rate, and Deloitte presents an opportunity to unleash and realise your potential amongst cutting-edge leaders and organisations shaping the future of the region, and indeed, the world beyond. At Deloitte, you can bring your whole self to work, every day. Combine that with our drive to propel with purpose and you have the perfect playground to collaborate, innovate, grow, and make an impact that matters.

The team
Innovation, transformation and leadership occur in many ways. At Deloitte, our ability to help solve clients’ most complex issues is distinct. We deliver strategy and implementation, from a business and technology view, to help you lead in the markets where you compete.

Your work profile
The Economic Sanctions and Screening team is responsible for developing, enhancing, and validating sanctions screening frameworks and models. This role plays a pivotal part in fortifying the firm's financial crime risk posture by ensuring our screening capabilities align with global regulatory expectations and industry best practices.

Responsibilities
- Design and Development: Develop, test, and optimize sanctions screening frameworks, including name and payment screening methodologies.
- Model Enhancement: Evaluate existing sanctions models (fuzzy matching algorithms, rules logic, thresholds) and propose enhancements based on regulatory guidance and operational feedback.
- Framework Review: Conduct periodic and event-driven reviews of sanctions screening models to ensure continued relevance and compliance with OFAC, EU, UN, and other regulatory standards.
- Scenario Calibration and Tuning: Support tuning and threshold analysis for match scoring, leveraging historical alert data and false positive metrics.
- Data Analytics and Insights: Analyze screening outcomes to identify gaps, trends, and patterns that inform risk mitigation strategies and enhance effectiveness.
- Documentation and Governance: Prepare and maintain comprehensive model documentation, validation reports, and technical specifications in line with model risk governance frameworks.
- Stakeholder Engagement: Collaborate with compliance officers, technology teams, and business stakeholders to gather requirements, explain model logic, and support audits and regulatory reviews.

Desired qualifications
Required Experience and Skills:
- Domain Expertise: 1-3 years of hands-on experience in sanctions screening framework development, tuning, and validation. Familiarity with leading screening platforms (e.g., FircoSoft, Bridger, Actimize, Oracle Watchlist Screening) and list management practices. In-depth understanding of global sanctions regimes (OFAC, EU, UN, HMT) and related regulatory expectations. Experience in integrating sanctions screening models with broader AML/CFT frameworks. Exposure to AI/ML techniques for entity resolution or fuzzy matching optimization. Prior involvement in regulatory examinations or independent validations of screening tools.
- Technical Proficiency: Strong programming and scripting skills (Python, R, SQL, SAS). Experience in data modeling, scoring logic calibration, and large-scale dataset analysis.
- Analytical Thinking: Ability to conduct root cause analysis on alert quality issues. Strong quantitative and qualitative problem-solving capabilities.
- Communication: Strong written and verbal communication skills, including the ability to explain technical models to non-technical stakeholders. Ability to craft data-backed narratives and present recommendations with clarity.

Education
Bachelor's Degree / Master's Degree

Location and way of working
Base location: Chennai. This profile involves occasional travelling to client locations. Hybrid is our default way of working; each domain has customized the hybrid approach to its unique needs.

Your role as an Assistant Manager
We expect our people to embrace and live our purpose by challenging themselves to identify issues that are most important for our clients, our people, and for society. In addition to living our purpose, Assistant Managers across our organization must strive to be:
- Inspiring - Leading with integrity to build inclusion and motivation
- Committed to creating purpose - Creating a sense of vision and purpose
- Agile - Achieving high-quality results through collaboration and team unity
- Skilled at building diverse capability - Developing diverse capabilities for the future
- Persuasive / Influencing - Persuading and influencing stakeholders
- Collaborating - Partnering to build new solutions
- Delivering value - Showing commercial acumen
- Committed to expanding business - Leveraging new business opportunities
- Analytical Acumen - Leveraging data to recommend impactful approaches and solutions through the power of analysis and visualization
- Effective communication - Holding well-structured and well-articulated conversations to achieve win-win outcomes
- Engagement Management / Delivery Excellence - Effectively managing engagements to ensure timely and proactive execution as well as course correction for the success of the engagement
- Managing change - Responding to a changing environment with resilience
- Managing Quality & Risk - Delivering high-quality results and mitigating risks with utmost integrity and precision
- Strategic Thinking & Problem Solving - Applying a strategic mindset to solve business issues and complex problems
- Tech Savvy - Leveraging ethical technology practices to deliver high impact for clients and for Deloitte
- Empathetic leadership and inclusivity - Creating a safe and thriving environment where everyone is valued for who they are, using empathy to understand others and adapting our behaviours and attitudes to become more inclusive

How you’ll grow
Connect for impact - Our exceptional team of professionals across the globe are solving some of the world’s most complex business problems, as well as directly supporting our communities, the planet, and each other. Know more in our Global Impact Report and our India Impact Report.
Empower to lead - You can be a leader irrespective of your career level. Our colleagues are characterised by their ability to inspire, support, and provide opportunities for people to deliver their best and grow both as professionals and human beings. Know more about Deloitte and our One Young World partnership.
Inclusion for all - At Deloitte, people are valued and respected for who they are and are trusted to add value to their clients, teams and communities in a way that reflects their own unique capabilities. Know more about everyday steps that you can take to be more inclusive. At Deloitte, we believe in the unique skills, attitude and potential each and every one of us brings to the table to make an impact that matters.
Drive your career - At Deloitte, you are encouraged to take ownership of your career. We recognise there is no one-size-fits-all career path, and global, cross-business mobility and up-/re-skilling are all within the range of possibilities to shape a unique and fulfilling career. Know more about Life at Deloitte.
Everyone’s welcome… entrust your happiness to us - Our workspaces and initiatives are geared towards your 360-degree happiness. This includes specific needs you may have in terms of accessibility, flexibility, safety and security, and caregiving. Here’s a glimpse of things that are in store for you.
Interview tips - We want job seekers exploring opportunities at Deloitte to feel prepared, confident and comfortable. To help you with your interview, we suggest that you do your research and know some background about the organisation and the business area you’re applying to. Check out recruiting tips from Deloitte professionals.
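The threshold-based fuzzy name matching this role tunes can be sketched with Python's standard-library difflib. A minimal illustration only: the watchlist entries and the 0.85 threshold are made-up assumptions, and real screening platforms layer normalization, token reordering, and phonetic rules on top of a raw similarity ratio:

```python
from difflib import SequenceMatcher

def match_score(name: str, listed: str) -> float:
    # Similarity ratio in [0, 1] between a candidate name and a list entry.
    return SequenceMatcher(None, name.lower(), listed.lower()).ratio()

def screen(name: str, watchlist: list[str], threshold: float = 0.85) -> list[tuple[str, float]]:
    # Return watchlist entries whose score clears the threshold.
    # Threshold tuning trades false positives (too low) against
    # missed matches (too high) -- the calibration work described above.
    scored = [(entry, match_score(name, entry)) for entry in watchlist]
    return [(entry, round(s, 3)) for entry, s in scored if s >= threshold]
```

With this sketch, "John Smith" alerts against a listed "Jon Smith" (ratio roughly 0.95), while an unrelated name produces no hits.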
Posted 12 hours ago
1.0 years
0 Lacs
Chennai
On-site
DESCRIPTION
Alexa Shopping Operations strives to become the most reliable source for dataset generation and annotations. We work in collaboration with Shopping feature teams to enhance customer experience (CX) quality across shopping features, devices, and locales. Our primary focus lies in handling annotations for training, measuring, and improving Artificial Intelligence (AI) and Large Language Models (LLMs), enabling Amazon to deliver a superior shopping experience to customers worldwide. Our mission is to empower Amazon's LLMs through Reinforcement Learning from Human Feedback (RLHF) across various categories at high speed. We aspire to provide an end-to-end data solution for the LLM lifecycle, leveraging the latest technology alongside our operational excellence. By joining us, you will play a pivotal role in shaping the future of the shopping experience for customers worldwide.

Key job responsibilities
- Process annotation and data analysis tasks with high efficiency and quality in a fast-paced environment
- Provide floor support to the Operations Manager in running day-to-day operations, working closely with Data Associates
- Handle work prioritization and deliver based on business needs
- Track and report ops metrics and ensure delivery on all KPIs and SLAs
- Work closely with your team members and managers to drive process efficiencies and explore opportunities for automation
- Strive to enhance the productivity and effectiveness of the data generation and annotation processes
The tasks will be primarily repetitive in nature and will require the individual to make judgment-based decisions, keeping in mind the guidelines provided in the SOP.

BASIC QUALIFICATIONS
- Graduate or equivalent (up to 1 year of experience)
- Candidates must demonstrate French language proficiency in verbal, writing, reading, and comprehension. Required language level: B2.2/BA/Advanced Diploma
- Good English language proficiency: verbal, writing, reading, and comprehension
- Strong analytical and communication skills
- Passion for delivering a positive customer experience, and the ability to maintain composure in difficult situations
- Ability to effectively and efficiently complete challenging goals or assignments within defined SLAs

PREFERRED QUALIFICATIONS
- Basic level of Excel knowledge
- Familiarity with online retail (e-commerce industry)
- Previous experience as an AI trainer; knowledge of AI and NLP
- Experience with Artificial Intelligence interaction, such as prompt generation with OpenAI's models
- Experience in content or editorial writing

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 12 hours ago
3.0 - 8.0 years
5 - 8 Lacs
Noida
On-site
Years of Experience: 3-8 Years
Location: Noida (Work from Office)

Job Description
The role involves managing and organizing large-scale video datasets, building automation tools, and ensuring accurate performance evaluations of models. The ideal candidate should be proactive, hands-on, and capable of handling a small team.

Key Responsibilities
- Data Cataloging & Management: Maintain structured catalogs of video data with consistent labeling and metadata. Organize datasets for efficient access, versioning, and reuse across model development cycles.
- Tool Development & Automation: Build or assist in developing internal tools to automate data handling, quality checks, and reporting. Streamline data pipelines to support rapid model development and testing.
- Accuracy Computation & Reporting: Implement evaluation pipelines to compute model metrics such as accuracy, precision, and recall. Generate regular performance reports to support model tuning and validation efforts.
- Team Collaboration & Coordination: Lead a small team (up to 3 members) in daily data-related activities, ensuring quality and timely delivery. Coordinate with ML engineers, QA teams, and product stakeholders for end-to-end data lifecycle management.

Qualifications & Required Skills
- B.Tech
- Experience in data analysis, preferably in video/image-based domains
- Desirable knowledge of data handling tools like Python (pandas, NumPy), SQL, and Excel
- Familiarity with video annotation workflows, dataset versioning, and evaluation techniques
- Experience in building or using automation tools for data processing
- Ability to manage tasks and mentor junior team members effectively
- Good communication and documentation skills
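At its core, the evaluation pipeline described under Accuracy Computation & Reporting compares predicted labels against ground truth and aggregates counts. A minimal, dependency-free Python sketch of the standard binary-classification metrics (the example labels are made up for illustration):

```python
def binary_metrics(y_true: list[int], y_pred: list[int]) -> dict[str, float]:
    # Count true/false positives and negatives for a binary labeling task.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),            # fraction correct overall
        "precision": tp / (tp + fp) if tp + fp else 0.0,  # of predicted positives, how many were right
        "recall": tp / (tp + fn) if tp + fn else 0.0,     # of actual positives, how many were found
    }
```

For instance, with truth [1, 1, 0, 0, 1] and predictions [1, 0, 0, 1, 1], accuracy is 0.6 and both precision and recall are 2/3; scikit-learn offers the same metrics at scale.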
Posted 12 hours ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
What You’ll Do
- Apply knowledge of data characteristics and data supply patterns to develop rules and tracking processes that support the data quality model.
- Prepare data for analytical use by building data pipelines to gather data from multiple sources and systems.
- Integrate, consolidate, cleanse, and structure data for use by our clients in our solutions.
- Perform design, creation, and interpretation of large and highly complex datasets.
- Stay up-to-date with the latest trends and advancements in GCP and related technologies, actively proposing and evaluating new solutions.
- Understand best practices for data management, maintenance, reporting, and security, and use that knowledge to implement improvements in our solutions.
- Implement security best practices in pipelines and infrastructure.
- Develop and implement data quality checks and troubleshoot data anomalies.
- Provide guidance and mentorship to junior data engineers.
- Review dataset implementations performed by junior data engineers.

What Experience You Need
- BS degree in a STEM major or equivalent discipline; Master’s degree strongly preferred
- 5+ years of experience as a data engineer or in a related role
- Cloud certification strongly preferred
- Advanced skills using programming languages such as Python or SQL, and intermediate-level experience with scripting languages
- Intermediate-level understanding of and experience with Google Cloud Platform and overall cloud computing concepts, as well as basic knowledge of other cloud environments
- Experience building and maintaining moderately complex data pipelines, troubleshooting issues, and transforming and loading data into a pipeline so that the content can be digested and reused in future projects
- Experience designing and implementing moderately complex data models, and experience enabling optimization to improve performance
- Advanced Git usage and CI/CD integration skills

What Could Set You Apart
- Experience in AI/ML engineering
- Knowledge of cloud security and compliance
- Experience mentoring engineers or leading training
- Proficiency with data visualization tools
- Cloud certification such as GCP strongly preferred
- Self-starter
- Excellent communicator / client-facing
- Ability to work in a fast-paced environment
- Flexibility to work across A/NZ time zones based on project needs
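Data quality checks of the kind this role develops are often expressed as small rule functions run over each batch before it is loaded downstream. A hypothetical Python sketch; the column name "score", the 5% null threshold, and the [0, 1] range are illustrative assumptions, not any real pipeline's rules:

```python
def check_nulls(rows: list[dict], column: str, max_null_rate: float = 0.05) -> bool:
    # Pass when the fraction of missing values stays under the threshold.
    nulls = sum(1 for r in rows if r.get(column) is None)
    return nulls / len(rows) <= max_null_rate

def check_range(rows: list[dict], column: str, lo: float, hi: float) -> bool:
    # Pass when every non-null value falls inside [lo, hi].
    return all(lo <= r[column] <= hi for r in rows if r.get(column) is not None)

def run_quality_checks(rows: list[dict]) -> dict[str, bool]:
    # Collect named rule results; a real pipeline would log failures
    # and quarantine the offending batch rather than just report.
    return {
        "score_not_null": check_nulls(rows, "score"),
        "score_in_range": check_range(rows, "score", 0.0, 1.0),
    }
```

A batch with one null out of three rows fails the null-rate rule while still passing the range rule, which is the kind of per-rule signal that makes anomaly troubleshooting tractable.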
Posted 15 hours ago
0 years
0 Lacs
India
Remote
Summary
We are seeking experienced Engineering experts to develop a set of original, high-level question & answer pairs, which must be multimodal, containing essential visual components. This project supports the evaluation and training of advanced AI systems through the creation of reasoning-intensive, graduate-level problems that are resistant to surface-level AI solutions.

Project Overview
Our goal is to build a challenging dataset that pushes the boundaries of current AI capabilities in question answering. The questions must test deep conceptual understanding, multi-step reasoning, and problem-solving skills. The questions must incorporate visuals (e.g., graphs, diagrams, models) that are essential to solving the problem. Each question will be tested against the latest AI model and must include a conversation log demonstrating the model’s performance with and without hints.

Challenges
Relevant fields include but are not limited to:
- Electrical Engineering – Circuit design, electromagnetics, signal processing, control systems
- Mechanical Engineering – Thermodynamics, fluid mechanics, dynamics, material science, system design
- Civil Engineering – Structural analysis, geotechnical engineering, transportation systems, construction planning
- Chemical Engineering – Process engineering, chemical reactions, thermodynamics, materials and safety design
- Biomedical Engineering – Medical device design, biomechanics, bioinstrumentation, tissue engineering
- Environmental Engineering – Wastewater treatment, air quality control, environmental impact assessment, sustainability solutions
- Computer Engineering – Embedded systems, hardware design, computer architecture, digital systems

Questions must be challenging for current AI models, requiring expert-level knowledge and reasoning. Questions should be "Google-proof", meaning they cannot be easily answered through simple web searches. Answers must be unambiguous and verifiable to ensure consistency in AI evaluation.

Responsibilities
- Create high-difficulty questions (graduate-level or upper undergraduate level) in your area of expertise.
- Write a detailed explanation and/or solution for each question.
- Ensure questions are original and not copied from textbooks or previous tests.
- Provide concise, informative hints designed to help advanced AI models reason through and answer the question correctly.

Qualifications
We are seeking individuals with the following profile:
- Education: A Master’s degree (or higher) is nice to have; Ph.D. holders are strongly preferred.
- Expertise: Deep academic or professional knowledge in a related field.
- Writing Skills: Excellent written English; ability to write clear, rigorous questions and explanations.
- Reasoning Skills: Ability to craft problems requiring advanced, multi-step reasoning rather than rote recall.
- Image Design: Comfortable identifying or creating visuals (e.g., graphs, diagrams) that are crucial to solving problems.

Compensation
Competitive rates based on the number and quality of questions submitted. Opportunity for ongoing collaboration as we expand the dataset and undertake future projects.

Why Join Us
- Help shape the next generation of AI by providing expert-level, original content.
- Contribute to a globally impactful project in AI research and STEM education.
- Collaborate remotely with a professional, motivated team.
- Enjoy flexibility and independence in your working hours.

Application Instructions
To apply via LinkedIn, please submit:
- Your resume or CV, emphasizing your academic background and subject expertise.
- A sample of prior academic work, such as original questions, assessments, or published content (if available).

Additional Notes
This is a remote freelance role with flexible scheduling. All content will be reviewed and must pass originality and quality checks. Questions will undergo a review process to ensure they meet the dataset’s standards.
Posted 15 hours ago
7.0 - 10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: LCNC Developer & Solution Designer – Power Maps & Dataverse Specialist
Location: Gurgaon

About the Role
We are looking for a proactive professional to design and develop innovative low-code/no-code business solutions using Microsoft Power Platform. The key focus will be on leveraging Microsoft Dataverse and creating engaging Power Maps visualizations to solve complex, data-driven business challenges.

Key Responsibilities
Solution Design & Technical Architecture
- Translate business requirements into robust LCNC solution architectures involving Dataverse and Power Maps
- Design scalable data models (tables, relationships, security roles) in Dataverse to support advanced visualizations
- Develop visual mapping solutions using Power Maps (3D Maps/Power Map) linked to Dataverse databases

Development & Automation
- Create Canvas and Model-driven Power Apps integrated with Dataverse
- Build Power Automate flows to import external data, update Dataverse, and trigger dynamic Power Maps updates

Integration & Customization
- Implement custom connectors, APIs, or Azure-based integrations to sync data with Dataverse or external systems
- Use Power Fx, DAX, Power Query (M), JavaScript, TypeScript, or C# where custom logic or plugins are required

Governance, Performance & Best Practices
- Advise on and apply governance controls (e.g., DLP, security roles, ALM) and CI/CD practices for Power Platform environments
- Optimize solution performance and scalability (including API thresholds, dataset sizes, and user licenses)

Stakeholder Collaboration & Mentorship
- Collaborate with business stakeholders, analysts, and IT teams to define requirements and translate them into technical solutions
- Conduct training sessions or knowledge-sharing workshops for citizen developers or junior team members

Qualifications & Experience
- Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent experience)
- 7-10 years of hands-on experience in Power Platform development, including Dataverse, Power Apps, Power Automate, and Power BI
- Demonstrable experience with low-code/no-code application design and solution architecture
- Proficiency in data transformation (Power Query, DAX), formula language (Power Fx), and basics of JavaScript/TypeScript/C# for customization
- Experience creating visual data representations such as Power Maps or BI visual dashboards
- Understanding of integration patterns with external systems or databases
- Knowledge of or certification in Microsoft Power Platform Developer (PL‑400) or Functional Consultant (PL‑200) preferred

Interested candidates can share their resume at hr@trailytics.com
Posted 16 hours ago
2.0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
Argus is where smart people belong and where they can grow. We answer the challenge of illuminating markets and shaping new futures. What We’re Looking For Join our Generative AI team as a Senior Data Scientist, reporting directly to the Lead Data Scientist in India. You will play a crucial role in building, optimizing, and maintaining AI-ready data infrastructure for advanced Generative AI applications. Your focus will be on hands-on implementation of cutting-edge data extraction, curation, and metadata enhancement techniques for both text and numerical data. You will be a key contributor to the development of innovative solutions, ensuring rapid iteration and deployment, and supporting the Lead in achieving the team's strategic goals. What Will You Be Doing AI-Ready Data Development: Design, develop, and maintain high-quality AI-ready datasets, ensuring data integrity, usability, and scalability to support advanced generative AI models. Advanced Data Processing: Drive hands-on efforts in complex data extraction, cleansing, and curation for diverse text and numerical datasets. Implement sophisticated metadata enrichment strategies to enhance data utility and accessibility for AI systems. Algorithm Implementation & Optimization: Implement and optimize state-of-the-art algorithms and pipelines for efficient data processing, feature engineering, and data transformation tailored for LLM and GenAI applications. GenAI Application Development: Apply and integrate frameworks like LangChain and Hugging Face Transformers to build modular, scalable, and robust Generative AI data pipelines and applications. Prompt Engineering Application: Apply advanced prompt engineering techniques to optimize LLM performance for specific data extraction, summarization, and generation tasks, working closely with the Lead's guidance. 
LLM Evaluation Support: Contribute to the systematic evaluation of Large Language Model (LLM) outputs, analysing quality, relevance, and accuracy, and supporting the implementation of LLM-as-a-judge frameworks. Retrieval-Augmented Generation (RAG) Contribution: Actively contribute to the implementation and optimization of RAG systems, including working with embedding models, vector databases, and, where applicable, knowledge graphs, to enhance data retrieval for GenAI. Technical Mentorship: Act as a technical mentor and subject matter expert for junior data scientists, providing guidance on best practices in coding and PR reviews, data handling, and GenAI methodologies. Cross-Functional Collaboration: Collaborate effectively with global data science teams, engineering, and product stakeholders to integrate data solutions and ensure alignment with broader company objectives. Operational Excellence: Troubleshoot and resolve data-related issues promptly to minimize potential disruptions, ensuring high operational efficiency and responsiveness. Documentation & Code Quality: Produce clean, well-documented, production-grade code, adhering to best practices for version control and software engineering. Skills And Experience Academic Background: Advanced degree in AI, statistics, mathematics, computer science, or a related field. Programming and Frameworks: 2+ years of hands-on experience with Python, TensorFlow or PyTorch, and NLP libraries such as spaCy and Hugging Face. GenAI Tools: 1+ years of practical experience with LangChain, Hugging Face Transformers, and embedding models for building GenAI applications. Prompt Engineering: Deep expertise in prompt engineering, including prompt tuning, chaining, and optimization techniques. LLM Evaluation: Experience evaluating LLM outputs, including using LLM-as-a-judge methodologies to assess quality and alignment. RAG and Knowledge Graphs: Practical understanding and experience using vector databases.
In addition, familiarity with graph-based RAG architectures and the use of knowledge graphs to enhance retrieval and reasoning would be a strong plus. Cloud: 2+ years of experience with Gemini/OpenAI models and cloud platforms such as AWS, Google Cloud, or Azure. Proficient with Docker for containerization. Data Engineering: Strong understanding of data extraction, curation, metadata enrichment, and AI-ready dataset creation. Collaboration and Communication: Excellent communication skills and a collaborative mindset, with experience working across global teams. What’s In It For You Our rapidly growing, award-winning business offers a dynamic environment for talented, entrepreneurial professionals to achieve results and grow their careers. Argus recognizes and rewards successful performance and, as an Investor in People, we promote professional development and retain a high-performing team committed to building our success. Competitive salary Hybrid Working Policy (3 days in Mumbai office / 2 days WFH once fully inducted) Group healthcare scheme 18 days annual leave 8 days of casual leave Extensive internal and external training Hours This is a full-time position operating under a hybrid model, with three days in the office and up to two days working remotely. The team supports Argus’ key business processes every day; as such, you will be required to work on a shift-based rota with other members of the team supporting the business until 8pm. Support hours typically run from 11am to 8pm, with each member of the team participating 2–3 times a week. Argus is the leading independent provider of market intelligence to the global energy and commodity markets. We offer essential price assessments, news, analytics, consulting services, data science tools and industry conferences to illuminate complex and opaque commodity markets.
Headquartered in London with 1,500 staff, Argus is an independent media organisation with 30 offices in the world’s principal commodity trading hubs. Companies, trading firms and governments in 160 countries around the world trust Argus data to make decisions, analyse situations, manage risk, facilitate trading and for long-term planning. Argus prices are used as trusted benchmarks around the world for pricing transportation, commodities and energy. Founded in 1970, Argus remains a privately held UK-registered company owned by employee shareholders and global growth equity firm General Atlantic.
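The RAG contribution this role describes centers on embedding-based retrieval: rank stored documents by similarity to a query embedding, then feed the top hits to the generator. A minimal sketch with hand-written toy vectors standing in for a real embedding model and vector database (the documents and vectors below are purely illustrative):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, store, k=2):
    """Return the texts of the k documents closest to the query embedding."""
    ranked = sorted(store, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in ranked[:k]]

# Toy corpus; a real pipeline would embed with a model and use a vector DB.
store = [
    {"text": "crude oil price assessment", "vec": [0.9, 0.1, 0.0]},
    {"text": "LNG freight rates",          "vec": [0.7, 0.3, 0.1]},
    {"text": "annual leave policy",        "vec": [0.0, 0.1, 0.9]},
]
top = retrieve([1.0, 0.0, 0.0], store, k=2)
```

In production the retrieved passages would be inserted into the LLM prompt; graph-based RAG variants additionally traverse a knowledge graph to expand the candidate set before ranking.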
Posted 17 hours ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
Argus is where smart people belong and where they can grow. We answer the challenge of illuminating markets and shaping new futures. What We’re Looking For Join our Generative AI team to lead a new group in India, focused on creating and maintaining AI-ready data. As the point of contact in Mumbai, you will guide the local team and ensure seamless collaboration with our global counterparts. Your contributions will directly impact the development of innovative solutions used by industry leaders worldwide, supporting text and numerical data extraction, curation, and metadata enhancements to accelerate development and ensure rapid response times. You will play a pivotal role in transforming how our data are seamlessly integrated with AI systems, paving the way for the next generation of customer interactions. What Will You Be Doing Lead and Develop the Team: Oversee a team of data scientists in Mumbai, mentoring and guiding junior team members and fostering their professional growth and development. Strategic Planning: Develop and implement strategic plans for data science projects, ensuring alignment with the company's goals and objectives. AI-Ready Data Development: Design, develop, and maintain high-quality AI-ready datasets, ensuring data integrity, usability, and scalability to support advanced Generative AI models. Advanced Data Processing: Drive hands-on efforts in complex data extraction, cleansing, and curation for diverse text and numerical datasets. Implement sophisticated metadata enrichment strategies to enhance data utility and accessibility for AI systems. Algorithm Implementation & Optimization: Implement and optimize state-of-the-art algorithms and pipelines for efficient data processing, feature engineering, and data transformation tailored for LLM and GenAI applications.
GenAI Application Development: Apply and integrate frameworks like LangChain and Hugging Face Transformers to build modular, scalable, and robust Generative AI data pipelines and applications. Prompt Engineering Application: Apply advanced prompt engineering techniques to optimize LLM performance for specific data extraction, summarization, and generation tasks, working closely with the Lead's guidance. LLM Evaluation Support: Contribute to the systematic evaluation of Large Language Model (LLM) outputs, analysing quality, relevance, and accuracy, and supporting the implementation of LLM-as-a-judge frameworks. Retrieval-Augmented Generation (RAG) Contribution: Actively contribute to the implementation and optimization of RAG systems, including working with embedding models, vector databases, and, where applicable, knowledge graphs, to enhance data retrieval for GenAI. Technical Leadership: Act as a technical leader and subject matter expert for junior data scientists, providing guidance on best practices in coding and PR reviews, data handling, and GenAI methodologies. Cross-Functional Collaboration: Collaborate effectively with global data science teams, engineering, and product stakeholders to integrate data solutions and ensure alignment with broader company objectives. Operational Excellence: Troubleshoot and resolve data-related issues promptly to minimize potential disruptions, ensuring high operational efficiency and responsiveness. Documentation & Code Quality: Produce clean, well-documented, production-grade code, adhering to best practices for version control and software engineering. Skills And Experience Leadership Experience: Proven track record in leading and mentoring data science teams, with a focus on strategic planning and operational excellence. Academic Background: Advanced degree in AI, statistics, mathematics, computer science, or a related field.
Programming and Frameworks: 5+ years of hands-on experience with Python, TensorFlow or PyTorch, and NLP libraries such as spaCy and Hugging Face. GenAI Tools: 2+ years of practical experience with LangChain, Hugging Face Transformers, and embedding models for building GenAI applications. Prompt Engineering: Deep expertise in prompt engineering, including prompt tuning, chaining, and optimization techniques. LLM Evaluation: Experience evaluating LLM outputs, including using LLM-as-a-judge methodologies to assess quality and alignment. RAG and Knowledge Graphs: Practical understanding and experience using vector databases. In addition, familiarity with graph-based RAG architectures and the use of knowledge graphs to enhance retrieval and reasoning would be a strong plus. Cloud: 2+ years of experience with Gemini/OpenAI models and cloud platforms such as AWS, Google Cloud, or Azure. Proficient with Docker for containerization. Data Engineering: Strong understanding of data extraction, curation, metadata enrichment, and AI-ready dataset creation. Collaboration and Communication: Excellent communication skills and a collaborative mindset, with experience working across global teams. What’s In It For You Our rapidly growing, award-winning business offers a dynamic environment for talented, entrepreneurial professionals to achieve results and grow their careers. Argus recognizes and rewards successful performance and, as an Investor in People, we promote professional development and retain a high-performing team committed to building our success. Competitive salary Hybrid Working Policy (3 days in Mumbai office / 2 days WFH once fully inducted) Group healthcare scheme 18 days annual leave 8 days of casual leave Extensive internal and external training Hours This is a full-time position operating under a hybrid model, with three days in the office and up to two days working remotely.
The team supports Argus’ key business processes every day; as such, you will be required to work on a shift-based rota with other members of the team supporting the business until 8pm. Support hours typically run from 11am to 8pm, with each member of the team participating 2–3 times a week. Argus is the leading independent provider of market intelligence to the global energy and commodity markets. We offer essential price assessments, news, analytics, consulting services, data science tools and industry conferences to illuminate complex and opaque commodity markets. Headquartered in London with 1,500 staff, Argus is an independent media organisation with 30 offices in the world’s principal commodity trading hubs. Companies, trading firms and governments in 160 countries around the world trust Argus data to make decisions, analyse situations, manage risk, facilitate trading and for long-term planning. Argus prices are used as trusted benchmarks around the world for pricing transportation, commodities and energy. Founded in 1970, Argus remains a privately held UK-registered company owned by employee shareholders and global growth equity firm General Atlantic.
Posted 17 hours ago
5.0 years
0 Lacs
Bengaluru, Karnataka
Remote
Senior Applied Scientist Bangalore, Karnataka, India Date posted Aug 01, 2025 Job number 1854651 Work site Up to 50% work from home Travel 0-25% Role type Individual Contributor Profession Research, Applied, & Data Sciences Discipline Applied Sciences Employment type Full-Time Overview Do you want to be part of a team that delivers innovative products and machine learning solutions across Microsoft to hundreds of millions of users every month? The Microsoft Turing team is an innovative engineering and applied research team working on state-of-the-art deep learning models, large language models and pioneering conversational search experiences. The team spearheads the platform and innovation for conversational search and the core copilot experiences across Microsoft’s ecosystem including BizChat, Office and Windows. As a Senior Applied Scientist on the Turing team, you will do hands-on data science work on tight timelines, including training models, creating evaluation sets, building infrastructure for training and evaluation, and more. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Qualifications Required Qualifications: Bachelor's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 5+ years related experience (e.g., statistics, predictive analytics, research) OR Master's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 3+ years related experience (e.g., statistics, predictive analytics, research) OR Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 1+ year(s) related experience (e.g., statistics, predictive analytics, research) OR equivalent experience. 3+ years of industrial experience coding in C++, C#, C, Java or Python. Prior experience with data analysis and understanding, examining data from large-scale systems to identify patterns or create evaluation datasets. Familiarity with common machine learning and deep learning frameworks and concepts, including the use of LLMs and prompting. Experience in PyTorch or TensorFlow is a bonus. Ability to communicate technical details clearly across organizational boundaries. Other Requirements: Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include but are not limited to the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter. Preferred Qualifications: Solid ability and effectiveness working end-to-end in a challenging technical problem domain (plan, design, execution, continuous release, and service operation). Prior experience applying deep learning techniques and driving end-to-end AI product development (Search, Recommendation, NLP, Document Understanding, etc.). Prior experience with Azure or any other cloud pipelines or execution graphs.
Self-driven, results-oriented, high integrity, with the ability to work collaboratively, solve problems with groups, find win/win solutions and celebrate successes. Customer/end-result/metrics driven in design and development. Keen ability and motivation to learn, enter new domains, and manage through ambiguity. A solid publication track record at top conferences like ACL, EMNLP, SIGKDD, AAAI, WSDM, COLING, WWW, NIPS, ICASSP, etc. #M365Core Responsibilities As an Applied Scientist on our team, you'll be responsible for and will engage in: Driving projects from design through implementation and experimentation to finally shipping to our users. This requires deep dives into data to identify gaps, coming up with heuristics and possible solutions, using LLMs to create the right model or evaluation prompts, and setting up the engineering pipeline or infrastructure to run them. Devising evaluation techniques, datasets, criteria and metrics for model evaluation, often for SOTA models or novel metrics and datasets. Hands-on ownership of fine-tuning and use of language models, including dataset creation, filtering, review, and continuous iteration. This requires working in a diverse, geographically distributed team environment where collaboration and innovation are valued. You will have an opportunity for direct impact on the design, functionality, security, performance, scalability, manageability, and supportability of Microsoft products that use our deep learning technology. Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work. Industry leading healthcare Educational resources Discounts on products and services Savings and investments Maternity and paternity leave Generous time away Giving programs Opportunities to network and connect Microsoft is an equal opportunity employer.
All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
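Evaluation-set work like that described in the Turing role usually starts with simple reference-based metrics before moving to LLM-based judges. A hedged sketch of exact-match and token-level F1 scoring; the metric definitions follow common QA-evaluation practice, and the example strings are purely illustrative:

```python
from collections import Counter

def exact_match(pred: str, ref: str) -> int:
    """1 if prediction equals the reference after trivial normalization, else 0."""
    return int(pred.strip().lower() == ref.strip().lower())

def token_f1(pred: str, ref: str) -> float:
    """Harmonic mean of token-level precision and recall against the reference."""
    p, r = pred.lower().split(), ref.lower().split()
    common = Counter(p) & Counter(r)       # multiset intersection of tokens
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p)
    recall = overlap / len(r)
    return 2 * precision * recall / (precision + recall)

em = exact_match("Paris", "paris")
f1 = token_f1("the capital is Paris", "Paris is the capital")
```

Scores like these are computed over a whole evaluation set and averaged; an LLM-as-a-judge setup replaces the reference comparison with a grading prompt but reports aggregate scores the same way.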
Posted 20 hours ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Description Alexa Shopping Operations strives to become the most reliable source for dataset generation and annotations. We work in collaboration with Shopping feature teams to enhance customer experience (CX) quality across shopping features, devices, and locales. Our primary focus lies in handling annotations for training, measuring, and improving Artificial Intelligence (AI) and Large Language Models (LLMs), enabling Amazon to deliver a superior shopping experience to customers worldwide. Our mission is to empower Amazon's LLMs through Reinforcement Learning from Human Feedback (RLHF) across various categories at high speed. We aspire to provide an end-to-end data solution for the LLM lifecycle, leveraging cutting-edge technology alongside our operational excellence. By joining us, you will play a pivotal role in shaping the future of the shopping experience for customers worldwide. Key job responsibilities The candidate actively seeks to understand Amazon’s core business values and initiatives, and translates those into everyday practices.
Some of the key result areas include, but are not limited to: Experience in managing process and operational escalations Driving appropriate data-oriented analysis, adoption of technology solutions and process improvement projects to achieve operational and business goals Managing stakeholder communication across multiple lines of business on operational milestones, process changes and escalations Communicate and take the lead role in identifying gaps in process areas and work with all stakeholders to resolve the gaps Be an SME for the process and a referral point for peers and junior team members Has the ability to drive business/operational metrics through quantitative decision making, and adoption of different tools and resources Ability to meet deadlines in a fast-paced work environment driven by complex software systems and processes Ability to perform deep dives in the process and come up with process improvement solutions Shall collaborate effectively with other teams and subject matter experts (SMEs) and Language Engineers (LaEs) to support launches of new processes and services Basic Qualifications A Bachelor’s Degree and relevant work experience of 3+ years. Excellent level of English and one of Spanish/French/Italian/Portuguese (C1 level). Candidate must demonstrate ability to analyze and interpret complex SOPs. Excellent problem-solving skills with a proactive approach to identifying and implementing process improvements. Strong communication and interpersonal skills to effectively guide and mentor associates. Ability to work collaboratively with cross-functional teams. Thoroughly understand multiple SOPs and ensure adherence to established processes. Identify areas for process improvement and SOP enhancement, and develop actionable plans for implementation. Lead and participate in process improvement initiatives.
Comfortable working in a fast-paced, highly collaborative, dynamic work environment Willingness to support several projects at one time, and to accept re-prioritization as necessary. Adaptive to change and able to work in a fast-paced environment. Preferred Qualifications Experience with Artificial Intelligence interaction, such as prompt generation. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - Haryana - D50 Job ID: A3049130
Posted 1 day ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Reference # 321767BR Job Type Full Time Your role Do you have a curious mind, want to be involved in the latest technology trends and like to solve problems that have a meaningful benefit to hundreds of users across the bank? Join our Tech Services – Group Chief Technology Office team and become a core contributor for the execution of the bank's global AI Strategy, particularly to help the bank deploy AI models quickly and efficiently! We are looking for an experienced Data Engineer or ML Engineer to drive the delivery of an innovative ecosystem of tools and services. In this AI-focused role, you will contribute to the development of an SDK for Data Producers across the firm to build high-quality autonomous Data Products for cross-divisional consumption and Data Consumers (e.g. Data Scientists, Quantitative Analysts, Model Developers, Model Validators and AI agents) to easily discover, access data and build AI use-cases. Responsibilities include: direct interaction with product owners and internal users to identify requirements, development of technical solutions and execution develop an SDK (Software Development Kit) to automatically capture Data Product, Dataset and AI / ML model metadata. Also, leverage LLMs to generate descriptive information about assets integration and publication of metadata into UBS's AI Use-case inventory, model artifact registry and Enterprise Data Mesh data product and dataset catalogue for discovery and regulatory compliance purposes design and implementation of services that seamlessly collect runtime evidence and operational information about a data product or model and publish it to appropriate visualization tools creation of a collection of starters/templates that accelerate the creation of new data products by leveraging a collection of the latest tools and services and providing diverse and rich experiences to the Devpod ecosystem.
design and implementation of data contract and fine-grained access mechanisms to enable data consumption on a 'need to know' basis Your team You will be part of the Data Product Framework team, which is a newly established function within Group Chief Technology Office. We provide solutions to help the firm embrace Artificial Intelligence and Machine Learning. We work with the divisions and functions of the firm to provide innovative solutions that integrate with their existing platforms to provide new and enhanced capabilities. One of our current aims is to help a data scientist get a model into production in an accelerated timeframe with the appropriate controls and security. We offer a number of key capabilities: data discovery that uses AI/ML to help users find data and obtain access in a secure and controlled manner, an AI Inventory that describes the models that have been built to help users build their own use cases and validate them with Model Risk Management, a containerized model development environment for a user to experiment and produce their models, and a streamlined MLOps process that helps them track their experiments and promote their models. Your expertise PhD or Master’s degree in Computer Science or any related advanced quantitative discipline 5+ years of industry experience with Python / Pandas, SQL / Spark, Azure fundamentals / Kubernetes and Gitlab additional experience in data engineering frameworks (Databricks / Kedro / Flyte), ML frameworks (MLFlow / DVC) and Agentic Frameworks (Langchain, Langgraph, CrewAI) is a plus ability to produce secure and clean code that is stable, scalable, operational, and well-performing. Be up to date with the latest IT standards (security, best practices).
An understanding of security principles in banking systems is a plus ability to work independently, manage individual project priorities, deadlines and deliverables willingness to quickly learn and adopt various technologies excellent English language written and verbal communication skills About Us UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries. How We Hire We may request you to complete one or more assessments during the application process. Learn more Join us At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us. Disclaimer / Policy Statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
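The SDK described in this role captures Data Product and model metadata automatically at the point where an asset is produced. A minimal sketch of that pattern using a decorator; the registry, field names, and example data product are all hypothetical, standing in for publication to a real catalogue or inventory:

```python
import datetime
import functools

# Stand-in for the firm's data product catalogue / AI inventory.
METADATA_REGISTRY = []

def register_data_product(name: str, owner: str):
    """Decorator that records metadata each time a data product is built."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            METADATA_REGISTRY.append({
                "name": name,
                "owner": owner,
                "produced_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "rows": len(result),
            })
            return result
        return inner
    return wrap

@register_data_product(name="trade_summary", owner="data-product-team")
def build_trade_summary():
    # Hypothetical data product body; a real one would read curated sources.
    return [{"desk": "rates", "pnl": 1.2}, {"desk": "fx", "pnl": -0.4}]

rows = build_trade_summary()
```

The same hook point is where runtime evidence (lineage, row counts, timings) can be published to visualization tools without the producer writing any extra code.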
Posted 1 day ago
15.0 years
0 Lacs
India
Remote
Who we are E2C International is a US-based company committed to providing a variety of cost-efficient, professionally trained, motivated remote workers for individuals and their organizations so they can focus on growing their businesses. We pride ourselves in assisting companies in reducing costs and increasing efficiency, leading to massive growth. Our strength comes from our global community and our power is driven by leveraging that to connect our clients with top talent worldwide. Job Purpose and Role We are looking for an AI Project Manager to lead the end-to-end execution of the Genetic AI Teacher: an autonomous, real-time, personalized virtual educator trained using deep learning, multimodal LLMs, and real classroom interactions. You will harness the power of our client's unprecedented video dataset to train the world’s most effective AI educator and drive systemic impact across the U.S. and beyond. Key Responsibilities: Project Ownership: Architect and execute the entire project lifecycle, from scoping to delivery, for the Genetic AI Teacher system. Team Collaboration: Work closely with our existing cross-functional team of developers, video engineers, and curriculum specialists, spread across the U.S. and international time zones. Data Engineering: Design pipelines for processing, properly annotating, and training on hundreds of terabytes of classroom video, including teacher-student interactions, whiteboard content, and live assessments. Model Strategy: Guide development and selection of foundational models (LLMs, vision-language, speech, etc.) and lead multi-phase training/fine-tuning. Cross-functional Alignment: Coordinate with product, engineering, legal, and district-facing teams to ensure ethical, scalable deployment aligned with FERPA and education compliance. Vendor Management: Manage partnerships with AI infrastructure providers. MVP: Work to implement a product quickly to test in our classrooms with approval from current clients.
AI Tool selection: Work to develop the delivery tool for our AI Agent. Qualifications: 8–15 years of experience in AI, ML, computer vision, video, or speech projects. Proven success managing complex, multi-stakeholder AI projects in fast-paced environments. Hands-on experience with large-scale video or multimodal training data pipelines. Fluency in LLMs, generative AI, multimodal learning, NLP/NLU, and reinforcement learning approaches. Strong leadership and collaborative skills. Passion for transforming education through ethical and impactful AI. Nice-to-Have Qualifications: Advanced degree (PhD or Master’s) in AI, ML, Cognitive Science, or a related discipline. Experience developing AI tutors, intelligent agents, or persona-driven educational systems. Familiarity with U.S. K–12 compliance (FERPA, COPPA) and AI ethics frameworks. Multilingual abilities or international deployment experience. How to Apply: Your resume or CV; a brief cover letter explaining your vision for this role; (optional) links to projects, GitHub, publications, or relevant AI work
Posted 1 day ago
90.0 years
0 Lacs
India
Remote
Newsweek is the global media organization that has earned audience time and trust for more than 90 years. Newsweek reaches 100 million people each month with thought-provoking news, opinion, images, graphics, and video delivered across a dozen print and digital platforms. Headquartered in New York City, Newsweek also publishes international editions in EMEA and Asia.

Newsweek is hiring a Researcher to join a growing team in India. The Researcher conducts desk research, engages with experts, and provides contextual insights to supplement quantitative data. This role supports methodology development and ensures the project reflects current trends and expert perspectives. The Researcher also identifies and collects individual (non-dataset) data elements.

Key Responsibilities:
- Conduct desk research to identify relevant data sources, data, and contextual information.
- Identify and engage with experts, decision-makers, and stakeholders for surveys and interviews.
- Support the Project Manager and Data Analyst with qualitative insights and background research.
- Contribute to the development and validation of the ranking methodology.
- Document research findings and expert input for project transparency.

Main Deliverables:
- Desk research summaries and source documentation.
- Methodology documentation and validation reports.
- Design and execution of surveys.
- Analysis of survey and expert panel results.

Collaboration & Integration:
- Collaborates with the Data Analyst to ensure data is contextualized and validated.
- Provides the Project Manager with research updates and expert feedback.
- Supports the integration of research findings into editorial and ranking outputs.

Required Skills:
- Experience in a research environment
- Familiarity with the design, execution, and analysis of surveys
- Familiarity with relevant survey tools

General Expectations for All Roles:
- Demonstrate adaptability and proactive problem-solving.
- Maintain clear and timely communication within the team and with stakeholders.
- Uphold high standards of data integrity, research rigor, and project transparency.
- Contribute to a collaborative and inclusive project culture.
- Ability to work remotely

Newsweek is an equal opportunity employer. We seek employees of diverse backgrounds and are committed to providing an inclusive, equitable and respectful workplace.
Posted 1 day ago
15.0 years
0 Lacs
Delhi, India
On-site
Project: Lead the end-to-end execution of the AI Teacher Agent: an autonomous, real-time, personalized virtual educator trained using deep learning, multimodal LLMs, and real classroom interactions. You will harness the power of our client's unprecedented video dataset to train the world's most effective AI educator and drive systemic impact across the U.S. and beyond.

What You'll Do:
- Project Ownership: Architect and execute the entire project lifecycle, from scoping to delivery, for the AI Teacher Agent system.
- Team Collaboration: Work closely with our existing cross-functional team of developers, video engineers, and curriculum specialists, spread across the U.S. and international time zones.
- Data Engineering: Design pipelines for processing, annotating, and training on hundreds of terabytes of classroom video, including teacher-student interactions, whiteboard content, and live assessments.
- Model Strategy: Guide development and selection of foundational models (LLMs, vision-language, speech, etc.) and lead multi-phase training/fine-tuning.
- Cross-functional Alignment: Coordinate with product, engineering, legal, and district-facing teams to ensure ethical, scalable deployment aligned with FERPA and education compliance.
- Vendor Management: Manage partnerships with AI infrastructure providers.
- MVP: Work to implement a product quickly to test in our classrooms, with approval from current clients.
- AI Tool Selection: Work to develop the delivery tool for our AI Agent.

What You Bring:
- 8–15 years of experience in AI, ML, computer vision, video, or speech projects.
- Proven success managing complex, multi-stakeholder AI projects in fast-paced environments.
- Hands-on experience with large-scale video or multimodal training data pipelines.
- Fluency in LLMs, generative AI, multimodal learning, NLP/NLU, and reinforcement learning approaches.
- Strong leadership and collaboration skills.
- Passion for transforming education through ethical and impactful AI.

Bonus Points For:
- Advanced degree (PhD or Master's) in AI, ML, Cognitive Science, or a related discipline.
- Experience developing AI tutors, intelligent agents, or persona-driven educational systems.
- Familiarity with U.S. K–12 compliance (FERPA, COPPA) and AI ethics frameworks.
- Multilingual abilities or international deployment experience.

Why Join Us?
- Shape a flagship AI product designed to revolutionize education access.
- Collaborate with a mission-driven, globally distributed team.
- Operate with startup agility supported by an established, growth-stage company.
- Help create tools that empower teachers and inspire millions of students.

How to Apply: Please include
- Your resume or CV
- A brief cover letter explaining your vision for this role
- (Optional) Links to projects, GitHub, publications, or relevant AI work
Posted 1 day ago
6.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Company Description

About Sopra Steria: Sopra Steria, a major Tech player in Europe with 51,000 employees in nearly 30 countries, is recognized for its consulting, digital services and software development. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organizations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a fully collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion.

Job Description

The world is how we shape it.

Good functional knowledge of standard SAP Production modules, along with their integration with other SAP modules, is required. The relevant solution capabilities for the product are:
- Master Planning (PP-MP)
- Demand Management (PP-MP-DEM) & Long-Term Planning (PP-MP-LTP)
- Capacity Planning (PP-CRP)
- Material Requirements Planning (PP-MRP)
- Repetitive Manufacturing (PP-REM)
- Production lot planning / individual project planning
- Assembly to Order (LO-ASM)
- Production Planning for Process Industries (PP-PI)

Familiarity with manufacturing processes is expected. The candidate should be comfortable with components of SAP PP such as BOM (PP-BD-BOM), production versions, work centers, routings, the production planning cycle, and the concerned dataset (table structure). The candidate should have worked on integrated systems and should be comfortable with monitoring of interfaces and applications. The candidate must be familiar with working on heavily customized objects. A basic understanding of SAP ABAP, along with debugging, is a plus.

Experience working in Project and Application Support Services in an end-user-facing position is required, with familiarity with Incident Management and Problem Management and an understanding of business priority and criticality. He/she should also have worked on Change Management processes. Minimum one year of experience in Support or customer service.

Total Experience Expected: 06-10 years

Qualifications: BE/B.Tech/MCA

Additional Information: At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.
Posted 1 day ago
3.0 years
7 - 15 Lacs
Hyderābād
On-site
We are seeking an experienced Generative AI Developer with strong Python skills and proven hands-on experience building and deploying AI/ML models. In this role, you will work on designing, developing, and deploying innovative Generative AI solutions that create real business impact. You'll contribute at every stage, from research and prototyping to production deployment.

Key Responsibilities:
- Design, build, and deploy AI/ML models, with a strong focus on Generative AI (e.g., LLMs, diffusion models).
- Develop robust Python backend services to support AI applications in production.
- Integrate AI/ML models with scalable APIs and pipelines for real-world use cases.
- Optimize model performance, accuracy, and inference speed for production workloads.
- Experiment with model fine-tuning, prompt engineering, and dataset curation.
- Monitor deployed models and ensure smooth operation and updates.
- Collaborate with product managers, data scientists, and engineers to translate business needs into AI solutions.
- Document processes, maintain code quality, and follow best practices for reproducibility and scalability.

Required Skills & Qualifications:
- Bachelor's/Master's degree in Computer Science, Data Science, AI/ML, or a related field.
- Strong proficiency in Python and relevant frameworks (FastAPI, Flask, Django).
- Hands-on experience building, training, and deploying AI/ML models into production.
- Familiarity with Generative AI frameworks (Hugging Face Transformers, LangChain, OpenAI APIs, etc.).
- Good understanding of NLP, LLMs, and modern AI techniques.
- Experience with RESTful APIs, cloud services (AWS/GCP/Azure), and CI/CD workflows.
- Excellent problem-solving, debugging, and troubleshooting skills.
- Ability to work independently and deliver reliable results on time.
- Excellent communication and teamwork skills.
Job Types: Full-time, Permanent
Pay: ₹700,000.00 - ₹1,500,000.00 per year
Benefits: Cell phone reimbursement, health insurance, internet reimbursement, leave encashment, life insurance, paid sick time, paid time off, Provident Fund
Schedule: Day shift, Monday to Friday
Ability to commute/relocate: Hyderabad, Telangana: Reliably commute or plan to relocate before starting work (Required)
Application Question(s): Are you an immediate joiner?
Experience: Python: 3 years (Required); Gen AI: 1 year (Required)
Work Location: In person
Posted 1 day ago
5.0 years
4 - 8 Lacs
Chennai
On-site
- 5+ years of SQL experience
- Experience programming to extract, transform and clean large (multi-TB) data sets
- Experience with theory and practice of design of experiments and statistical analysis of results
- Experience with AWS technologies
- Experience in scripting for automation (e.g. Python) and advanced SQL skills
- Experience with theory and practice of information retrieval, data science, machine learning and data mining

Key Responsibilities:
- Own and develop advanced substitutability analysis frameworks combining text-based and visual matching capabilities
- Drive technical improvements to product matching models to enhance accuracy beyond the current 79% in structured categories
- Design category-specific matching criteria, particularly for complex categories like fashion, where accuracy is currently at 20%
- Develop and implement advanced image matching techniques including pattern recognition, style segmentation, and texture analysis
- Create performance measurement frameworks to evaluate product matching accuracy across different product categories
- Partner with multiple data and analytics teams to integrate various data signals
- Provide technical expertise in scaling substitutability analysis across 2000 different product types in multiple markets

Technical Requirements:
- Deep expertise in developing hierarchical matching systems
- Strong background in image processing and visual similarity algorithms
- Experience with large-scale data analysis and model performance optimization
- Ability to work with multiple data sources and complex matching criteria

Key job responsibilities

Success Metrics:
- Drive improvement in substitutability accuracy to >70% across all categories
- Reduce manual analysis time for product matching identification
- Successfully implement enhanced visual matching capabilities
- Create scalable solutions for multi-market implementation

A day in the life
Design, develop, implement, test, document, and operate large-scale, high-volume, high-performance data structures for business intelligence analytics. Implement data structures using best practices in data modeling, ETL/ELT processes, SQL, Oracle, and OLAP technologies. Provide on-line reporting and analysis using OBIEE business intelligence tools and a logical abstraction layer against large, multi-dimensional datasets and multiple sources. Gather business and functional requirements and translate these requirements into robust, scalable, operable solutions that work well within the overall data architecture. Analyze source data systems and drive best practices in source teams. Participate in the full development life cycle, end-to-end, from design, implementation and testing, to documentation, delivery, support, and maintenance. Produce comprehensive, usable dataset documentation and metadata. Evaluate and make decisions around dataset implementations designed and proposed by peer data engineers. Evaluate and make decisions around the use of new or existing software products and tools. Mentor junior Business Research Analysts.

About the team
The RBS-Availability program includes Selection Addition (where new Head-Selections are added based on gaps identified by Selection Monitoring, SM), Buyability (ensuring new HS additions are buyable and recovering established ASINs that became non-buyable), SoROOS (rectifying defects for sourceable out-of-stock ASINs), Glance View Speed (offering ASINs with the best promise speed based on Store/Channel/FC-level nuances), Emerging MPs, and ASIN Productivity (having every ASIN's actual contribution profit meet or exceed the estimate). The North Star of the Availability program is to "ensure all customer-relevant (HS) ASINs are available in Amazon Stores with guaranteed delivery promise at an optimal speed."

To achieve this, we collaborate with SM, SCOT, Retail Selection, Category, and US-ACES to identify overall opportunities, defect drivers, and ingress across forecasting, sourcing, procurability, and availability systems, fixing them through UDE/Tech-based solutions.

- Experience working directly with business stakeholders to translate between data and business needs
- Experience managing, analyzing and communicating results to senior leadership

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Posted 1 day ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
AlgoAnalytics is looking for a Data Scientist with a passion for AI/ML solution development and cutting-edge work in Gen AI. A minimum of 3 years of experience in AI/ML, including Deep Learning, and exposure to Gen AI/LLMs is required, along with tools/techs/libraries such as Agentic AI, Dify, AutoGen, LangGraph, etc. A detailed understanding of Cloud technologies and deployment, with good hands-on experience, is an added advantage.

Key Responsibilities:

AI/ML & Gen AI Development:
- Develop Gen AI/LLM models using OpenAI, Llama, or other open-source/offline models.
- Utilize Agentic AI frameworks and tools like Dify, AutoGen, and LangGraph to build intelligent systems.
- Perform prompt engineering, fine-tuning, and dataset-specific model optimization.
- Implement cutting-edge research from AI/ML and Gen AI domains.
- Work on cloud-based deployments of AI/ML models for scalability and production readiness.

Leadership & Mentoring:
- Troubleshoot AI/ML, Gen AI, and cloud deployment challenges.
- Mentor junior team members and contribute to their skill development.
- Identify team members for mentoring, hiring, and technical initiatives.
- Ensure smooth project execution, deadline management, and client interactions.

Research & Innovation:
- Explore and implement recent AI/ML research in practical applications.
- Contribute to research publications, internal knowledge-sharing, and AI innovation.

Qualifications:
- 2-3 years of hands-on experience in Machine Learning, Deep Learning, and Gen AI/LLMs.
- Bachelor's/Master's degree in Computer Science, Engineering, Mathematics, or Statistics (with strong programming knowledge).

Skills:
- Programming: Strong expertise in Python and relevant ML/AI libraries.
- AI/ML & Gen AI: Deep understanding of Machine Learning, Deep Learning, and Generative AI/LLMs.
- Agentic AI & Automation: Experience with Dify, AutoGen, LangGraph, and similar tools.
- Cloud & Deployment: Knowledge of cloud platforms (AWS, GCP, Azure) and MLOps deployment pipelines.
- Communication & Leadership: Strong ability to manage teams, meet project deadlines, and collaborate with clients.

Note: We would prefer Pune-based candidates. Candidates joining immediately will be preferred.
Posted 1 day ago
0 years
0 Lacs
India
Remote
📊 Remote Internship: Data Analyst Intern – Decode Data & Drive Action

📍 Location: Remote / Virtual
💼 Job Type: Internship (Unpaid)
🕒 Schedule: Flexible working hours

🔍 Are you curious about how numbers shape decisions? Step into the world of data with a remote internship designed to sharpen your analytical skills and give you exposure to real-world business challenges!

🌟 What This Internship Offers:
We're looking for an enthusiastic and analytical Data Analyst Intern to join our remote team. In this role, you'll get the opportunity to work on live datasets, dig into patterns, and contribute to data-backed strategies. Whether you're a student, a fresh graduate, or switching to a data-driven career, this internship will help you build core analytical skills and confidence working with data.

💼 What You'll Experience:
✅ Virtual Setup – Work from anywhere, at your pace
✅ Time Flexibility – You decide your working hours
✅ Real Data Exposure – Analyze real-time information
✅ Skill Building – Strengthen your analytical toolkit
✅ Professional Growth – Gain mentorship and practical experience

🎯 Who We're Looking For:
🎓 Currently studying or recently graduated in Data Analytics, IT, Statistics, or a similar field
📊 Comfortable working with spreadsheets (Excel/Sheets); basic knowledge of SQL/Python is a bonus
🧠 Critical thinker who enjoys solving problems through data
📈 Interested in converting raw numbers into useful stories
💬 Independent and proactive in a remote work setting

📅 Application Deadline: August 5th

🚀 Ready to translate data into decisions? Apply today and be part of meaningful projects where your analysis makes a real impact. Let's decode the future one dataset at a time! 💡📊📌
Posted 1 day ago
2.0 - 3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
The Nonclinical Data Associate I is an entry-level position in the global data management solutions group, learning the tasks required for drafting and finalizing nonclinical datasets with supervision.

General Duties:
- Learns appropriate levels and methods of direct contact with clients
- Learns to prepare form letters and communication text
- Shadows experienced data associates and joins other client-facing staff (e.g., study director) to attend client site visits and client conference calls
- Learns to use software tools to efficiently and accurately complete job duties. Software types include: word processing, spreadsheet, dataset/table generation, collaboration/sharing, and database supportive applications.

Dataset Preparation (85%): Trains on duties required to complete dataset preparation. Tasks to be learned may include, but are not limited to:
- Learning how to obtain and review study documents (e.g., protocol, report) to gather information to prepare datasets
- Learning to generate and quality check (QC) dataset files to ensure compliance with regulatory requirements
- Learning to address client inquiries on content of study datasets

(5%) May learn to compile metadata to populate databases supporting nonclinical safety assessment.

(5%) May train on software testing and validation activities. Initial training will include executing test scripts and maintaining documentation in accordance with Systems Life Cycle methodology, which complies with General Principles of Software Validation issued by regulatory agencies.

(5%) Performs other duties as assigned, which may include shadowing or attending company and/or industry initiatives related to dataset specifications and/or production.

Experience: 2-3 years of relevant experience required

Labcorp Is Proud To Be An Equal Opportunity Employer
Labcorp strives for inclusion and belonging in the workforce and does not tolerate harassment or discrimination of any kind.
We make employment decisions based on the needs of our business and the qualifications and merit of the individual. Qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), family or parental status, marital, civil union or domestic partnership status, sexual orientation, gender identity, gender expression, personal appearance, age, veteran status, disability, genetic information, or any other legally protected characteristic. Additionally, all qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law. We encourage all to apply If you are an individual with a disability who needs assistance using our online tools to search and apply for jobs, or needs an accommodation, please visit our accessibility site or contact us at Labcorp Accessibility. For more information about how we collect and store your personal data, please see our Privacy Statement.
Posted 1 day ago
0.0 - 1.0 years
7 - 15 Lacs
Hyderabad, Telangana
On-site
We are seeking an experienced Generative AI Developer with strong Python skills and proven hands-on experience building and deploying AI/ML models. In this role, you will work on designing, developing, and deploying innovative Generative AI solutions that create real business impact. You'll contribute at every stage, from research and prototyping to production deployment.

Key Responsibilities:
- Design, build, and deploy AI/ML models, with a strong focus on Generative AI (e.g., LLMs, diffusion models).
- Develop robust Python backend services to support AI applications in production.
- Integrate AI/ML models with scalable APIs and pipelines for real-world use cases.
- Optimize model performance, accuracy, and inference speed for production workloads.
- Experiment with model fine-tuning, prompt engineering, and dataset curation.
- Monitor deployed models and ensure smooth operation and updates.
- Collaborate with product managers, data scientists, and engineers to translate business needs into AI solutions.
- Document processes, maintain code quality, and follow best practices for reproducibility and scalability.

Required Skills & Qualifications:
- Bachelor's/Master's degree in Computer Science, Data Science, AI/ML, or a related field.
- Strong proficiency in Python and relevant frameworks (FastAPI, Flask, Django).
- Hands-on experience building, training, and deploying AI/ML models into production.
- Familiarity with Generative AI frameworks (Hugging Face Transformers, LangChain, OpenAI APIs, etc.).
- Good understanding of NLP, LLMs, and modern AI techniques.
- Experience with RESTful APIs, cloud services (AWS/GCP/Azure), and CI/CD workflows.
- Excellent problem-solving, debugging, and troubleshooting skills.
- Ability to work independently and deliver reliable results on time.
- Excellent communication and teamwork skills.
Job Types: Full-time, Permanent
Pay: ₹700,000.00 - ₹1,500,000.00 per year
Benefits: Cell phone reimbursement, health insurance, internet reimbursement, leave encashment, life insurance, paid sick time, paid time off, Provident Fund
Schedule: Day shift, Monday to Friday
Ability to commute/relocate: Hyderabad, Telangana: Reliably commute or plan to relocate before starting work (Required)
Application Question(s): Are you an immediate joiner?
Experience: Python: 3 years (Required); Gen AI: 1 year (Required)
Work Location: In person
Posted 1 day ago
7.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Description:
- Analyse relevant internally and externally sourced data (raw data) to generate BI and Advanced Analytics datasets based on your stakeholders' requirements
- Design data pipelines to curate sourced data into the inhouse data warehouse
- Design data marts to facilitate dataset consumption out of the inhouse data warehouse by business and IT internal stakeholders
- Design data model changes that align with the inhouse data warehouse standards
- Define migration execution activities to move data from existing database solutions to the inhouse data warehouse

Engineer:
- Regular housekeeping of raw data and data stored in the inhouse data warehouse
- Build and maintenance of data pipelines and data platforms
- Build data solution prototypes
- Explore ways to enhance data quality and reliability
- Identify and realize opportunities to acquire better data (raw data)
- Develop analytical tooling to better support BI and Advanced Data Analytics activities
- Execute data migration from existing databases to the inhouse data warehouse
- Promote and champion data engineering standards and best-in-class methodology

Required Skills:
- Bachelor's or master's degree in Computer Science, Information Technology, Engineering or a related quantitative discipline from a top-tier university
- Certified in AWS Data Engineer Specialty or AWS Solution Architect Associate
- Snowflake SnowPro Core Certification
- 7+ years of experience in data engineering or relevant working experience in a similar role, preferably in the financial industry
- Strong understanding or practical experience of at least one common Enterprise Agile Framework, e.g., Kanban, SAFe, SCRUM, etc.
- Strong understanding of ETL, data warehouse, BI (Qlik), and Advanced Data Analytics concepts
- Deep knowledge of cloud-enabled technologies: AWS RDS, AWS Fargate, etc.
- Experience with databases and data warehouses: Snowflake, PostgreSQL, MS SQL
- Strong programming skills with advanced knowledge of Java and/or Python
- Practical experience with ETL tools such as AWS Glue, etc.
- Strong critical-thinking, analytical and problem-solving skills
- Excellent communicator with a team-oriented approach
Posted 1 day ago
The dataset job market in India is booming with opportunities for talented individuals who are skilled in working with data. From data analysts to data scientists, there is a wide range of roles available for job seekers interested in this field. In this article, we will explore the dataset job market in India and provide valuable insights for those looking to kickstart or advance their career in this domain.
Cities such as Bengaluru, Hyderabad, Pune, Chennai, Mumbai, and Delhi are known for their thriving tech industries and are hotspots for dataset job opportunities in India.
The average salary range for dataset professionals in India varies based on experience level:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-20 lakhs per annum
A typical career path in the dataset field may look like this:
- Junior Data Analyst
- Data Analyst
- Senior Data Analyst
- Data Scientist
- Data Architect
- Chief Data Officer
In addition to dataset expertise, professionals in this field are often expected to have skills in:
- Advanced Excel
- SQL
- Data visualization tools (e.g., Tableau, Power BI)
- Machine learning
- Python/R programming
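As a concrete illustration of the SQL and Python skills above, here is a minimal, hypothetical screening-style exercise using Python's built-in sqlite3 module. The table name and figures are invented for the example (loosely mirroring the salary ranges discussed earlier); interview tasks will vary by employer.

```python
import sqlite3

# Hypothetical example: aggregate salaries (in lakhs per annum)
# by experience level -- a typical entry-level SQL screening task.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE salaries (level TEXT, lpa REAL)")
conn.executemany(
    "INSERT INTO salaries VALUES (?, ?)",
    [("entry", 4), ("entry", 6), ("mid", 8), ("mid", 12),
     ("senior", 15), ("senior", 20)],
)

# GROUP BY collapses the rows for each level; AVG computes the mean.
for level, avg_lpa in conn.execute(
    "SELECT level, AVG(lpa) FROM salaries GROUP BY level ORDER BY level"
):
    print(level, avg_lpa)
```

Being able to explain each clause of a query like this, and to reproduce the same aggregation in a spreadsheet or with a dataframe library, is the kind of fluency interviewers look for.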
As you navigate the dataset job market in India, remember to hone your skills, stay updated with industry trends, and prepare well for interviews. With determination and dedication, you can land your dream job in this exciting field. Good luck!