4.0 - 8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About the Role
We are looking for a Senior AI and Machine Learning Engineer who wants to help shape the future of our Financial Services clients and our company. As part of the team, you will:
· Work directly with our founding team as a core member.
· Apply the latest AI techniques to solve real problems faced by Financial Services clients.
· Design, build, and refine datasets to evaluate and continuously improve our solutions.
· Participate in strategy and product ideation sessions, influencing our product and solution roadmap.
Key Responsibilities
· Agentic AI Development: Build scalable multi-modal Large Language Model (LLM) based AI agents, leveraging frameworks such as LangGraph, Microsoft AutoGen, or CrewAI.
· AI Research and Innovation: Research and build innovative solutions to relevant AI problems, including Retrieval-Augmented Generation (RAG), semantic search, knowledge representation, tool usage, fine-tuning, and reasoning in LLMs.
· Technical Expertise: Proficiency in a technology stack that includes Python, LlamaIndex/LangChain, PyTorch, Hugging Face, FastAPI, Postgres, SQLAlchemy, Alembic, OpenAI, Docker, Azure, TypeScript, and React.
· LLM and NLP Experience: Hands-on experience working with LLMs, RAG architectures, Natural Language Processing (NLP), or applying Machine Learning to solve real-world problems.
· Dataset Development: Strong track record of building datasets for training and/or evaluating machine learning models.
· Customer Focus: Enjoy diving deep into the domain, understanding the problem, and focusing on delivering value to the customer.
· Adaptability: Thrive in a fast-paced environment and be excited about joining an early-stage venture.
· Model Deployment and Management: Automate model deployment, monitoring, and retraining processes.
· Collaboration and Optimization: Collaborate with data scientists to review, refactor, and optimize machine learning code.
· Version Control and Governance: Implement version control and governance for models and data.
Required Qualifications:
· Bachelor's degree in Computer Science, Software Engineering, or a related field
· 4-8 years of experience in MLOps, DevOps, or related roles
· Strong programming experience and familiarity with Python-based deep learning frameworks such as PyTorch, JAX, and TensorFlow
· Strong familiarity with and knowledge of machine learning concepts
· Proficiency in cloud platforms (AWS, Azure, or GCP) and infrastructure-as-code tools like Terraform
Desired Skills:
· Experience with experiment tracking and model versioning tools
· Experience with the technology stack: Python, LlamaIndex/LangChain, PyTorch, Hugging Face, FastAPI, Postgres, SQLAlchemy, Alembic, OpenAI, Docker, Azure, TypeScript, React
· Knowledge of data pipeline orchestration tools like Apache Airflow or Prefect
· Familiarity with software testing and test automation practices
· Understanding of ethical considerations in machine learning deployments
· Strong problem-solving skills and ability to work in a fast-paced environment
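The Retrieval-Augmented Generation (RAG) pattern named in this posting boils down to retrieving the documents most relevant to a query and passing them to an LLM as context. A minimal, illustrative sketch (not any particular framework's API): the bag-of-words "embedding" is a stand-in for a real embedding model, and the documents are made up.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query; keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Credit risk models for retail lending portfolios",
    "Sanctions screening and name matching for payments",
    "Quarterly revenue forecasting for asset managers",
]
context = retrieve("screening payments for sanctioned names", docs)
# In a real RAG system, `context` would be prepended to the LLM prompt.
```

A production system would swap in a learned embedding model and a vector store, but the retrieve-then-generate shape is the same.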
Posted 3 weeks ago
2.0 - 3.0 years
0 Lacs
Delhi, India
On-site
Overview
The Quality Control (QC) Specialist will play a critical role in ensuring the accuracy, consistency, and quality of annotated legal documents. This role involves reviewing annotations created by annotators, validating metadata, and ensuring that each document meets the defined standards for tagging and categorization. The QC Specialist will collaborate closely with annotators, the tech team, and project leads to maintain high standards across annotations for judgments, legal provisions, and opinions.
Key Expectations from Role
Quality Assurance and Accuracy Control
Develop and implement quality metrics to ensure high standards for each annotated document, measuring accuracy rates, consistency, and adherence to guidelines.
Identify and document any deviations from annotation standards, ensuring that each error is flagged, documented, and tracked to facilitate corrective actions.
Perform random sampling and focused audits to verify annotation quality and reduce error rates across the project, providing a continuous check on data integrity.
Annotation Review and Verification
Conduct thorough reviews of annotations of judgments, legal provisions, and opinions to verify that annotations adhere to established standards.
Cross-check each annotation for core legal elements, including case name, date, court name, statutory references, legal principles, and issues, to ensure completeness.
Ensure that metadata such as case identifiers, statutes cited, and legal doctrines are accurately and consistently tagged according to project guidelines.
Validate that complex legal relationships are accurately identified in each document and recorded in a standardized manner for ease of cross-linking (e.g., judgments overruled, followed, referred, distinguished).
Validate that documents are timestamped correctly by annotators (e.g., date of pronouncement, date of publication of a notification in the official gazette, date when provisions from the document came into effect).
Holistic Metadata Validation and Standardization
Validate the accuracy and relevance of metadata extracted by annotators or automated tools, ensuring all extracted information meets required standards.
Ensure uniformity across metadata fields such as case numbers, legal provisions, key legal outcomes, and cross-references.
Confirm that all metadata follows the standardized structure and is formatted correctly for integration into the AI tool.
Segmentation and Categorization Review
Verify that judgments and legal provisions are segmented into appropriate categories, aligning with project requirements for legal classification (e.g., civil, criminal, appeal, petition).
Ensure accurate tagging for legal relationships, including overruled, upheld, distinguished, or followed cases, to improve search functionality within the AI tool.
Assess each document for proper categorization, ensuring uniform application of legal classifications across the dataset to enhance the AI tool's contextual understanding.
Feedback and Annotator Support
Provide timely, constructive feedback to annotators on identified errors, suggesting corrective actions and reinforcing annotation standards.
Document recurring errors or misunderstandings to guide annotator training, working with project leads to enhance guidelines and training materials.
Quality Metrics Tracking and Reporting
Maintain detailed records of quality metrics, documenting error rates, accuracy scores, and the frequency of specific annotation issues.
Prepare regular quality reports summarizing review outcomes, highlighting any recurring issues and proposing corrective actions.
Develop insights from quality control data to recommend improvements in annotation processes and guidelines, enhancing overall project quality.
Collaboration and Process Improvement
Collaborate with the Project Manager, Annotation Team Lead, and Technical Support teams to continually improve annotation standards and quality control processes.
Share findings with cross-functional teams, contributing to the refinement of annotation guidelines, project workflows, and validation protocols.
Actively participate in project meetings, providing insights on annotation quality, challenges, and recommendations for improvement.
Documentation and Compliance
Maintain meticulous records of all quality checks, ensuring that each document review is traceable and that feedback is consistently documented.
Adhere to data handling and confidentiality protocols, ensuring that all judgments, legal provisions, and opinions are reviewed in compliance with the firm's standards for data security and privacy.
Contribute to the development and refinement of QC documentation, including quality checklists, review protocols, and feedback guidelines.
Recommended Qualifications
Education: Bachelor's degree in law or a related field. Additional certifications in legal research or legal analytics are advantageous.
Experience: Minimum of 2-3 years in a quality control, legal research, or data management role. Preferred: experience in a legal knowledge management team with document tagging, in a legal publishing house, or as a legal editor. Proven familiarity with legal document management, annotation standards, and legal taxonomy.
Knowledge: Strong understanding of legal terminology and principles. Familiarity with annotation standards for legal texts. Knowledge of metadata standards and legal research databases is a plus.
Skills: Exceptional attention to detail and analytical skills. Familiarity with legal research tools and annotation software. Holistic understanding of how legal data points interconnect and form part of larger datasets. Strong communication skills for providing feedback and training to annotators. Ability to work independently and collaboratively in a fast-paced project environment.
Attributes: High degree of accuracy, ability to meet deadlines, and a commitment to maintaining confidentiality of legal documents.
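The quality metrics this role centers on (accuracy rates and error rates over randomly sampled annotations) follow a simple audit pattern. A minimal, illustrative sketch; the field names, sampling scheme, and example records are assumptions, not this project's actual schema:

```python
import random

def audit_sample(annotations: list[dict], gold: list[dict],
                 sample_size: int, seed: int = 0) -> dict:
    """Randomly sample annotated documents and score them against a gold standard."""
    rng = random.Random(seed)
    idx = rng.sample(range(len(annotations)), min(sample_size, len(annotations)))
    fields = ["case_name", "court", "date", "statutes"]  # hypothetical field set
    checked = errors = 0
    for i in idx:
        for f in fields:
            checked += 1
            if annotations[i].get(f) != gold[i].get(f):
                errors += 1  # flagged deviation, to be tracked for corrective action
    return {"checked": checked, "errors": errors,
            "accuracy": 1 - errors / checked if checked else 1.0}

ann = [{"case_name": "A v. B", "court": "SC", "date": "2020-01-01", "statutes": ["s. 34"]},
       {"case_name": "C v. D", "court": "HC", "date": "2021-05-05", "statutes": ["s. 12"]}]
gold = [{"case_name": "A v. B", "court": "SC", "date": "2020-01-01", "statutes": ["s. 34"]},
        {"case_name": "C v. D", "court": "HC", "date": "2021-05-06", "statutes": ["s. 12"]}]
report = audit_sample(ann, gold, sample_size=2)
```

Here one of eight checked fields (a date) deviates from the gold standard, giving an accuracy of 0.875 for the sample.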
Posted 3 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Hi All, we are looking for a Power BI Developer.
Job Title: Power BI Developer
Location: Chennai
Working Conditions: Hybrid; 2 days office attendance
Job Summary: In this position, you will be responsible for designing, developing, and maintaining business intelligence solutions using MS Power BI for survey programs. The primary role is to transform raw data into interactive visual reports that help stakeholders make data-driven decisions.
Responsibilities:
· Data modeling and integration
· Dashboard development and maintenance
· Performance optimization
· Security and access control
· Quality assurance
· Collaboration and stakeholder engagement
· Automation and continuous improvement
Knowledge, Competencies and Skills:
Technical skills (Required):
· Microsoft Office tools: Excel, Power Query, Power Pivot
· Power BI Service: data modeling; the concept of shared datasets/data models and thin reports; composite models; embedded models; live connections; DirectQuery
· Advanced use of DAX and M, and the ability to decide when to favor one over the other
· SQL
· Power Apps, specifically Power Automate
Technical skills (Preferred):
· DAX Studio
· ALM Toolkit
· SSMS
Non-technical:
· Ability to discuss requirements with non-technical colleagues
· Ability to analyze complex requests with minimal supervision; strong problem-solving skills
· Excellent verbal and written communication skills, including the ability to manage and influence stakeholders
· Good time management skills and the ability to successfully manage multiple, concurrent deliverables
· Good analytical and problem-solving skills
· Effective interpersonal skills; ability to work with virtual teams
· Ability to interface with all levels of the company and with functional disciplines
· Ability to prioritize and execute independently
Posted 3 weeks ago
0.0 - 2.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Operations Analysis Analyst
Bangalore, India | Operations Group | 316385
Job Description
About The Role: Grade Level (for internal use): 07
Department overview
S&P Global EBS is a specialist provider of managed and installed data services, delivering world-class data, technology, and service solutions focused on complex and evolving Index and ETF data needs. Its services are used in the front, middle, and back office by the world's leading investment banks, asset managers, fund administrators, prime brokers, and hedge funds.
Position summary
The successful candidate will play a key part in maintaining the smooth running of the day-to-day operations of the EBS data offering and will work with cross-functional teams to identify solutions in problem areas and remove operational inefficiencies. On the data enhancement side, the candidate will use advanced Excel, VBA, and SQL skills to translate operational requirements into technical solutions and tools. The team operates 24/7, so interested candidates must be willing to work all shifts, including US hours.
Duties & accountabilities
The new hire needs to be well-versed in index concepts and their calculations.
Validate the accuracy of data received from various sources. Ensure that this information is stored in databases and is accurately reflected on products by creating or running data quality checks and standards.
Ensure the quality and time-efficient production of financial information for the respective products.
Respond to data queries from both internal and external clients and provide support to stakeholders.
Monitor and research market events in order to anticipate changes. Ensure a deep understanding of the markets and business events.
Work with cross-functional teams to provide root cause analysis and identify solutions in problem areas.
Consolidate information around the dataset, leading to the establishment of best practices.
Perform automated/semi-automated checks to ensure production of high-quality content.
Ensure MOWs are documented and maintained.
Coordinate and delegate work as per team requirements.
Identify data quality improvement projects and good design practices.
Intermediate Excel and SQL skills, including the ability to write basic SQL queries.
Proven ability to utilize available data and systems tools.
Good verbal, written, and presentation skills.
Education and experience
MBA (Finance) / Post Graduate or equivalent, ideally in Finance. The candidate should have a good understanding of equities and capital markets. Specific knowledge of Index/ETF and Corporate Actions is highly preferred. 0-2 years of business operations experience; must be flexible in addressing dynamic business needs.
Commercial awareness: Must have a strong interest in finance and be up to date with current global financial market news.
Management requirements: NA
Personal competencies
Personal impact: The candidate must be a self-starter, able to take on multiple tasks at a time, hardworking, and efficient.
Communication: Must demonstrate superior communication skills and is expected to interact professionally across business units within the company.
Teamwork: Being a team player is a vital aspect of the position; the candidate is expected to work well individually as well as in a global team environment.
About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep, and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.
What's In It For You?
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology - the right combination can unlock possibility and change the world.
Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all - from finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.
Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you - and your career - need to thrive at S&P Global. Our benefits include:
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries
Global Hiring and Opportunity at S&P Global:
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Recruitment Fraud Alert:
If you receive an email from a spglobalind.com domain or any other regionally based domain, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.
- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf - OPRTON203 - Entry Professional (EEO Job Group) Job ID: 316385 Posted On: 2025-07-09 Location: Bangalore, Karnataka, India
Posted 3 weeks ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Selected Intern's Day-to-Day Responsibilities
Work actively on computer vision and deep learning applications.
Work on challenging problem statements to fine-tune models with large datasets.
Work on improving model accuracy and precision for CCTV cameras.
Compare the accuracy and performance of different CV models for a particular task.
Understand and work with Vision-Language Models (VLMs).
Perform image annotation tasks to enhance dataset quality.
About Company: Faclon Labs is a Mumbai-headquartered deep-tech IoT company founded by IIT-Bombay alumni. Over the years, we have evolved into a one-stop, highly relevant IoT & AI company driving digital transformation and Industry 4.0 among enterprises. With a focus on deep technologies and continuous innovation, Faclon has become one of the leading companies in the IoT space, with an aggressive portfolio of customers in India and overseas.
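Comparing the accuracy and performance of different CV models, as the responsibilities above describe, ultimately means scoring each model's predictions against labeled frames. A minimal, illustrative sketch with made-up per-frame predictions (1 = object detected, 0 = not detected); real evaluation would use a library and a proper test set:

```python
def precision_recall(preds: list[int], labels: list[int]) -> tuple[float, float]:
    """Binary precision/recall, e.g. object-detected vs. not, per CCTV frame."""
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return prec, rec

labels = [1, 1, 0, 1, 0, 0]            # ground-truth annotations
model_a = [1, 0, 0, 1, 1, 0]           # hypothetical outputs of two models
model_b = [1, 1, 0, 1, 0, 1]
scores = {name: precision_recall(p, labels)
          for name, p in [("model_a", model_a), ("model_b", model_b)]}
# model_b trades a false positive for perfect recall; which is "better"
# depends on whether missed detections or false alarms cost more.
```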
Posted 3 weeks ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
AI Video Creator
Role Overview: Produce hyper-realistic, luxury-level videos using AI tools such as Midjourney, RunwayML, Sora, Pika, and Kling AI.
Key Responsibilities:
Generate high-end, brand-driven video content for the real estate category.
Collaborate with creative and marketing teams on campaigns, launches, and editorials.
Author and refine prompts to ensure consistent, on-brand visual outputs.
Stay current on AI and industry trends to enhance workflows.
Produce Gen-AI video spots end-to-end, from prompt through compositing to final render.
Requirements:
Proven portfolio of AI-assisted content within real estate.
Proficiency with AI tools (Midjourney, DALL·E, RunwayML) plus traditional editing software (Photoshop, After Effects, Blender, etc.).
Basic Python/LLM experience for creative ideation.
Highlights:
Own quality control: ensure content is loop-free, flicker-free, and "cringe-free".
Design prompt workflows and shot lists; apply compositing, scripting, and VFX.
Collaborate to blend AI output seamlessly with the creative vision.
Desired Skills:
Skilled in Premiere, After Effects, etc.
Deep understanding of generative AI "quirks" (temporal drift, sync issues).
Bonus: Python scripting for automation, or dataset curation for brand consistency.
Create videos using tools like Synthesia, HeyGen, Pictory, Runway, etc.
Develop scripts/prompts/storyboards for AI-generated visuals.
Preferred Skills:
Proven portfolio of AI tool-based video production.
Familiarity with Adobe Premiere or motion graphics tools.
Basic generative AI knowledge, plus optional voice-cloning/sync experience.
Summary by category:
Core Responsibilities: Use generative AI to produce videos; refine prompts/scripts; post-process outputs.
Collaborations: Tight integration with creative/marketing teams.
Essential Skills: Proficiency in AI video tools, standard editing software, prompt engineering.
Bonus Skills: Python, VFX, dataset management, voice-synthesis knowledge.
Formats & Types: Roles vary between freelance, remote, full-time, and regional (US/India).
Portfolios Required: Strong, relevant AI-driven video work is essential.
Posted 3 weeks ago
2.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About the Company
The Economic Sanctions and Screening team is responsible for developing, enhancing, and validating sanctions screening frameworks and models. This role plays a pivotal part in fortifying the firm's financial crime risk posture by ensuring our screening capabilities align with global regulatory expectations and industry best practices.
About the Role
This role is integral to the Economic Sanctions and Screening team, focusing on the development and validation of sanctions screening frameworks.
Responsibilities
Design and Development: Develop, test, and optimize sanctions screening frameworks, including name and payment screening methodologies.
Model Enhancement: Evaluate existing sanctions models (fuzzy matching algorithms, rules logic, thresholds) and propose enhancements based on regulatory guidance and operational feedback.
Framework Review: Conduct periodic and event-driven reviews of sanctions screening models to ensure continued relevance and compliance with OFAC, EU, UN, and other regulatory standards.
Scenario Calibration and Tuning: Support tuning and threshold analysis for match scoring, leveraging historical alert data and false positive metrics.
Data Analytics and Insights: Analyze screening outcomes to identify gaps, trends, and patterns that inform risk mitigation strategies and enhance effectiveness.
Documentation and Governance: Prepare and maintain comprehensive model documentation, validation reports, and technical specifications in line with model risk governance frameworks.
Stakeholder Engagement: Collaborate with compliance officers, technology teams, and business stakeholders to gather requirements, explain model logic, and support audits and regulatory reviews.
Qualifications
Education: Bachelor's Degree / Master's Degree
Required Skills
Domain Expertise: 2-10 years of hands-on experience in sanctions screening framework development, tuning, and validation.
Familiarity with leading screening platforms (e.g., FircoSoft, Bridger, Actimize, Oracle Watchlist Screening) and list management practices.
In-depth understanding of global sanctions regimes (OFAC, EU, UN, HMT) and related regulatory expectations.
Experience integrating sanctions screening models with broader AML/CFT frameworks.
Exposure to AI/ML techniques for entity resolution or fuzzy matching optimization.
Prior involvement in regulatory examinations or independent validations of screening tools.
Technical Proficiency: Strong programming and scripting skills (Python, R, SQL, SAS). Experience in data modeling, scoring logic calibration, and large-scale dataset analysis.
Analytical Thinking: Ability to conduct root cause analysis on alert quality issues. Strong quantitative and qualitative problem-solving capabilities.
Communication: Strong written and verbal communication skills, including the ability to explain technical models to non-technical stakeholders. Ability to craft data-backed narratives and present recommendations with clarity.
Preferred Skills
Experience in regulatory examinations or independent validations of screening tools.
Base location: Chennai. This profile involves working from the client location.
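The fuzzy matching and threshold tuning described above can be illustrated with a minimal name-screening sketch. This is illustrative only: the edit-distance scorer and the 0.85 threshold are assumptions for the example, not any vendor platform's algorithm or a calibrated production setting.

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def match_score(name: str, listed: str) -> float:
    # Normalize edit distance into a 0..1 similarity score.
    a, b = name.lower(), listed.lower()
    return 1 - levenshtein(a, b) / max(len(a), len(b), 1)

def screen(name: str, watchlist: list[str], threshold: float = 0.85):
    # Alert on every watchlist entry whose score clears the threshold.
    hits = [(w, round(match_score(name, w), 3)) for w in watchlist]
    return [(w, s) for w, s in hits if s >= threshold]

watchlist = ["Ivan Petrov", "Acme Trading LLC"]
alerts = screen("Iwan Petrov", watchlist)
```

Threshold tuning in practice means replaying historical alerts through candidate thresholds and weighing missed true matches against the false-positive review workload.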
Posted 3 weeks ago
5.0 years
0 Lacs
India
Remote
Client Type: US
Client Location: Remote
The hourly rate is negotiable.
About the Role
We're creating a new certification: Google AI Ecosystem Architect (Gemini & DeepMind) - Subject Matter Expert. This course is designed for technical learners who want to understand and apply the capabilities of Google's Gemini models and DeepMind technologies to build powerful, multimodal AI applications.
We're looking for a Subject Matter Expert (SME) who can help shape this course from the ground up. You'll work closely with a team of learning experience designers, writers, and other collaborators to ensure the course is technically accurate, industry-relevant, and instructionally sound.
Responsibilities
As the SME, you'll partner with learning experience designers and content developers to:
Translate real-world Gemini and DeepMind applications into accessible, hands-on learning for technical professionals.
Guide the creation of labs and projects that allow learners to build pipelines for image-text fusion, deploy Gemini APIs, and experiment with DeepMind's reinforcement learning libraries.
Contribute technical depth across activities, from high-level course structure down to example code, diagrams, voiceover scripts, and data pipelines.
Ensure all content reflects current, accurate usage of Google's multimodal tools and services.
Be available during U.S. business hours to support project milestones, reviews, and content feedback.
This role is an excellent fit for professionals with deep experience in AI/ML, Google Cloud, and a strong familiarity with multimodal systems and the DeepMind ecosystem.
Essential Tools & Platforms
A successful SME in this role will demonstrate fluency and hands-on experience with the following:
Google Cloud Platform (GCP)
Vertex AI (particularly Gemini integration, model tuning, and multimodal deployment)
Cloud Functions, Cloud Run (for inference endpoints)
BigQuery and Cloud Storage (for handling large image-text datasets)
AI Platform Notebooks or Colab Pro
Google DeepMind Technologies
JAX and Haiku (for neural network modeling and research-grade experimentation)
DeepMind Control Suite or DeepMind Lab (for reinforcement learning demonstrations)
RLax or TF-Agents (for building and modifying RL pipelines)
AI/ML & Multimodal Tooling
Gemini APIs and SDKs (image-text fusion, prompt engineering, output formatting)
TensorFlow 2.x and PyTorch (for model interoperability)
Label Studio, Cloud Vision API (for annotation and image-text preprocessing)
Data Science & MLOps
DVC or MLflow (for dataset and model versioning)
Apache Beam or Dataflow (for processing multimodal input streams)
TensorBoard or Weights & Biases (for visualization)
Content Authoring & Collaboration
GitHub or Cloud Source Repositories
Google Docs, Sheets, Slides
Screen recording tools like Loom or OBS Studio
Required skills and experience:
Demonstrated hands-on experience building, deploying, and maintaining sophisticated AI-powered applications using Gemini APIs/SDKs within the Google Cloud ecosystem, especially in Firebase Studio and VS Code.
Proficiency in designing and implementing agent-like application patterns, including multi-turn conversational flows, state management, and complex prompting strategies (e.g., Chain-of-Thought, few-shot, zero-shot).
Experience integrating Gemini with Google Cloud services (Firestore, Cloud Functions, App Hosting) and external APIs for robust, production-ready solutions.
Proven ability to engineer applications that process, integrate, and generate content across multiple modalities (text, images, audio, video, code) using Gemini's native multimodal capabilities.
Skilled in building and orchestrating pipelines for multimodal data handling, synchronization, and complex interaction patterns within application logic.
Experience designing and implementing production-grade RAG systems, including integration with vector databases (e.g., Pinecone, ChromaDB) and engineering data pipelines for indexing and retrieval.
Ability to manage agent state, memory, and persistence for multi-turn and long-running interactions.
Proficiency leveraging AI-assisted coding features in Firebase Studio (chat, inline code, command execution) and using App Prototyping agents or frameworks like Genkit for rapid prototyping and structuring agentic logic.
Strong command of modern development workflows, including Git/GitHub, code reviews, and collaborative development practices.
Experience designing scalable, fault-tolerant deployment architectures for multimodal and agentic AI applications using Firebase App Hosting, Cloud Run, or similar serverless/cloud platforms.
Advanced MLOps skills, including monitoring, logging, alerting, and versioning for generative AI systems and agents.
Deep understanding of security best practices: prompt injection mitigation (across modalities), secure API key management, authentication/authorization, and data privacy.
Demonstrated ability to engineer for responsible AI, including bias detection, fairness, transparency, and implementation of safety mechanisms in agentic and multimodal applications.
Experience addressing ethical challenges in the deployment and operation of advanced AI systems.
Proven success designing, reviewing, and delivering advanced, project-based curriculum and hands-on labs for experienced software developers and engineers.
Ability to translate complex engineering concepts (RAG, multimodal integration, agentic patterns, MLOps, security, responsible AI) into clear, actionable learning materials and real-world projects.
5+ years of professional experience in AI-powered application development, with a focus on generative and multimodal AI.
Strong programming skills in Python and JavaScript/TypeScript; experience with modern frameworks and cloud-native development.
Bachelor's or Master's degree in Computer Science, Data Engineering, AI, or a related technical field.
Ability to explain advanced technical concepts (e.g., fusion transformers, multimodal embeddings, RAG workflows) to learners in an accessible way.
Strong programming experience in Python and experience deploying machine learning pipelines.
Ability to work independently, take ownership of deliverables, and collaborate closely with designers and project managers.
Preferred:
Experience with Google DeepMind tools (JAX, Haiku, RLax, DeepMind Control Suite/Lab) and reinforcement learning pipelines.
Familiarity with open data formats (Delta, Parquet, Iceberg) and scalable data engineering practices.
Prior contributions to open-source AI projects or technical community engagement.
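The agent state, memory, and multi-turn persistence this posting emphasizes can be sketched as a simple session store that assembles a bounded prompt. This is an illustrative pattern only; the class, field names, and prompt layout are assumptions, not a Gemini or Genkit API:

```python
from dataclasses import dataclass, field

@dataclass
class AgentSession:
    """Minimal multi-turn state: rolling history plus a scratch memory dict."""
    history: list[tuple[str, str]] = field(default_factory=list)  # (role, text)
    memory: dict[str, str] = field(default_factory=dict)          # persisted facts
    max_turns: int = 10

    def add_turn(self, role: str, text: str) -> None:
        self.history.append((role, text))
        # Keep the prompt bounded: drop the oldest turns beyond the window.
        self.history = self.history[-self.max_turns:]

    def build_prompt(self, user_msg: str) -> str:
        self.add_turn("user", user_msg)
        facts = "\n".join(f"- {k}: {v}" for k, v in self.memory.items())
        turns = "\n".join(f"{r}: {t}" for r, t in self.history)
        return f"Known facts:\n{facts}\n\nConversation:\n{turns}\nassistant:"

session = AgentSession()
session.memory["user_name"] = "Priya"   # fact persisted across turns
prompt = session.build_prompt("Summarize my last order")
# `prompt` would be sent to the model; the reply is stored with
# session.add_turn("assistant", reply) so later turns see it.
```

Production agents typically persist this state in a datastore (the posting mentions Firestore) and add summarization once the rolling window overflows.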
Posted 4 weeks ago
2.0 - 3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Location: Bangalore
Job Description
This role supports international business units and the corporate FP&A function. Key responsibilities include preparing financial reports, KPIs, dashboards, forecasts, budgets, and ad hoc analyses. The role also contributes to monthly financial reviews by creating visualizations and ensuring commentary aligns with data and FP&A insights.
Responsibilities & Duties
This role will be responsible for:
· Creating, maintaining, and distributing recurring financial reports (monthly, quarterly, annual) for corporate and business unit reviews.
· Building dashboards and visualizations to track KPIs such as revenue, gross margin, OpEx, and EBITDA.
· Ensuring timely, accurate reporting by coordinating with accounting, operations, and other stakeholders.
· Developing and maintaining financial models to support budgeting, forecasting, and strategic planning.
· Analyzing performance variances and translating financial data into operational insights.
· Monitoring bookings, backlog, and sales metrics across currencies to provide global revenue analysis.
· Assisting in developing and managing budgets, forecasts, and long-term financial plans for International.
· Documenting standardized FP&A processes and reporting procedures.
· Driving process improvements to enhance data accuracy and reporting efficiency.
· Supporting ad hoc analyses, including customer profitability, pricing, and investment evaluation.
Ideal Qualifications
· Advanced Excel skills, including complex formulas (e.g., INDEX-MATCH, SUMIFS, XLOOKUP), array functions, and dynamic named ranges for scalable modeling and large-dataset analysis.
· Strong analytical and problem-solving abilities, with a focus on interpreting financial data and generating actionable insights.
· High attention to detail and a structured, logical approach to solving unfamiliar or ambiguous problems.
· Solid grasp of financial statements and accounting principles, applied in forecasting, budgeting, and variance analysis.
· Clear and effective communicator, able to present complex financial concepts to both financial and non-financial audiences.
· Proficient in building dashboards and visualizations in Power BI (or similar tools) to support data-driven decision-making.
Experience:
· Bachelor’s degree in Finance, Accounting, Business, or a related field.
· 2-3 years of Accounting/Finance experience.
· Strong organizational skills; capable of managing multiple priorities and deadlines in a dynamic environment.
· Self-starter with adaptability, quick learning, and the ability to work independently.
· Comfortable working with databases and multiple data sources across reporting systems.
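The budget-vs-actual variance analysis described above, typically built in Excel with SUMIFS or XLOOKUP, can be sketched in a few lines of Python. The business-unit names and figures are purely illustrative.

```python
# Hypothetical budget and actual figures by business unit (illustrative data).
budget = {"EMEA": 1200.0, "APAC": 900.0, "Americas": 1500.0}
actual = {"EMEA": 1140.0, "APAC": 990.0, "Americas": 1500.0}

def variance_report(budget, actual):
    # Variance = actual - budget; the percentage is relative to budget,
    # mirroring a typical FP&A budget-vs-actual view.
    report = {}
    for unit in budget:
        var = actual[unit] - budget[unit]
        report[unit] = {
            "variance": round(var, 2),
            "variance_pct": round(100 * var / budget[unit], 1),
        }
    return report

report = variance_report(budget, actual)
```

Here `report["EMEA"]` shows a 60.0 unfavourable variance against budget, the kind of figure that would feed the monthly review commentary.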
Posted 4 weeks ago
4.0 years
8 - 12 Lacs
India
On-site
Job Description
We are seeking a skilled and passionate Machine Learning Engineer or AI Model Developer with a minimum of 4 years of hands-on experience in building, training, and deploying custom machine learning models. The ideal candidate thrives on solving real-world problems with custom-built AI models rather than relying solely on pre-built solutions or third-party APIs.
Natural Abilities
· Smart, self-motivated, responsible, and an out-of-the-box thinker.
· Detail-oriented with strong analytical skills.
· Great writing and communication skills.
Requirements:
· 4+ years of experience designing, developing, and deploying custom machine learning models (not just integrating APIs).
· Strong proficiency in Python and ML libraries such as NumPy, pandas, scikit-learn, etc.
· Expertise in ML frameworks like TensorFlow, PyTorch, Keras, or equivalent.
· Solid understanding of ML algorithms, model evaluation techniques, and feature engineering.
· Experience in data preprocessing, model optimization, and hyperparameter tuning.
· Hands-on experience with real-world dataset training and fine-tuning.
· Experience using Amazon SageMaker for model development, training, deployment, and monitoring.
· Familiarity with other cloud-based ML platforms (AWS, GCP, or Azure) is a plus.
Responsibilities:
· Design, develop, and deploy custom machine learning models tailored to business use cases.
· Train, validate, and optimize models using real-world datasets and advanced techniques.
· Build scalable, production-ready ML pipelines from data ingestion to deployment.
· Leverage AWS SageMaker to streamline model training, testing, and deployment workflows.
· Work closely with product and engineering teams to integrate models into applications.
· Evaluate models using appropriate metrics and continuously improve performance.
· Maintain proper documentation of experiments, workflows, and outcomes.
· Stay up to date with the latest ML research, tools, and best practices.
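The preprocessing, hyperparameter tuning, and evaluation workflow listed above can be sketched with scikit-learn. The dataset (scikit-learn's built-in iris data), model choice, and parameter grid are illustrative stand-ins, not a prescribed setup.

```python
# Minimal train/tune/evaluate loop with scikit-learn (illustrative only).
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Preprocessing and model in one pipeline, so tuning covers both stages
# and the scaler is fit only on each cross-validation training fold.
pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", LogisticRegression(max_iter=1000))])

# Hyperparameter tuning via cross-validated grid search over C.
search = GridSearchCV(pipe, {"clf__C": [0.1, 1.0, 10.0]}, cv=3)
search.fit(X_train, y_train)

# Final evaluation on the held-out test split.
accuracy = search.score(X_test, y_test)
```

The same pipeline object can then be handed to a deployment tool (SageMaker, for example) so training and serving share identical preprocessing.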
Job Types: Full-time, Permanent
Pay: ₹800,000.00 - ₹1,200,000.00 per year
Schedule: Day shift, Monday to Friday
Experience: ML/DS: 5 years (Required)
Location: Adajan, Surat, Gujarat (Preferred)
Work Location: In person
Expected Start Date: 20/07/2025
Posted 4 weeks ago
3.0 years
0 Lacs
India
Remote
About Us: Turing is one of the world’s fastest-growing AI companies, pushing the boundaries of AI-assisted software development. Our mission is to empower the next generation of AI systems to reason about and work with real-world software repositories. You’ll be working at the intersection of software engineering, open-source ecosystems, and frontier AI.
Project Overview: We're building high-quality evaluation and training datasets to improve how Large Language Models (LLMs) interact with realistic software consultancy tasks. A key focus of this project is curating verifiable software engineering challenges from public GitHub repository histories using a human-in-the-loop process.
Why This Role Is Unique:
· Collaborate directly with AI researchers shaping the future of AI-powered software development.
· Work with high-impact open-source projects and evaluate how LLMs perform on real bugs, issues, and developer tasks.
· Influence dataset design that will train and benchmark next-gen LLMs.
What does day-to-day look like:
· Review and compare 3–4 model-generated code responses for each task using a structured ranking system.
· Evaluate code diffs for correctness, code quality, style, and efficiency.
· Provide clear, detailed rationales explaining the reasoning behind each ranking decision.
· Maintain high consistency and objectivity across evaluations.
· Collaborate with the team to identify edge cases and ambiguities in model behavior.
Required Skills:
· At least 3 years of experience at top-tier product or research companies (e.g., Stripe, Datadog, Snowflake, Dropbox, Canva, Shopify, Intuit, PayPal, or research roles at IBM, GE, Honeywell, Schneider, etc.), with a total of 7+ years of overall professional software engineering experience.
· Strong fundamentals in software design, coding best practices, and debugging.
· Excellent ability to assess code quality, correctness, and maintainability.
· Proficient with code review processes and reading diffs in real-world repositories.
· Exceptional written communication skills to articulate evaluation rationale clearly.
· Prior experience with LLM-generated code or evaluation work is a plus.
Bonus Points:
· Experience in LLM research, developer agents, or AI evaluation projects.
· Background in building or scaling developer tools or automation systems.
Engagement Details:
· Commitment: ~20 hours/week (partial PST overlap required)
· Type: Contractor (no medical/paid leave)
· Duration: 1 month (starting next week; potential extensions based on performance and fit)
· Rates: $40–$100/hour, based on experience and skill level.
Posted 4 weeks ago
2.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Location: Pune, Gurgaon
Job Description
We are looking for an experienced Calibration Engineer, capable of leading and technically developing a team of Application Engineers, who will be assigned the following responsibilities:
· Manage the assigned Application Projects/Workpackages, fulfilling the defined constraints on timing, quality, and costs.
· Define and update the Application project plan for the assigned projects.
· Manage project plan changes proactively during the development phase.
· Lead and coordinate a team of application engineers allocated to the Application Projects.
· Manage calibration dataset integration and evolution.
· Adopt calibration methodologies, procedures, and tools shared by Marelli HQ.
· Promote effective solutions together with the Marelli HQ Application and Functions Design Teams.
· Guarantee compliance of the calibration process workflow with the standards defined by Marelli HQ.
· Promote the use of statistical analysis and big data management, in cooperation with Marelli HQ, to validate strategy performance and diagnosis robustness.
· Increase the technical skills of the resources and promote their professional growth.
· Act as the customer's technical reference for all issues related to calibration.
· Support calibration activities at the customer site, when requested.
· Confirm the calibration process with the car maker via label review.
· Coordinate the activities on the test development vehicles assigned to each project.
· Take part in calibration design reviews and risk analyses with the Marelli HQ team or with customers.
· Analyze and resolve vehicle fleet and vehicle market concerns.
Qualifications:
· Bachelor's Degree in Mechanical or Electronic Engineering (or equivalent).
· Minimum 2-10 years of experience as an Application Engineer with a focus on base engine calibration at the engine test bench.
· Calibration experience with Gasoline / CNG / Bi-Fuel engines.
· Proven experience in emissions, drivability, and OBD diagnosis.
· Prior experience as an Application Project Leader and Calibration Dataset Manager is highly preferred.
· In-depth knowledge of engine control systems.
· Strong understanding of OBD2 system calibration and regulations.
· Proficiency in the MS Office Suite and ETAS INCA.
· Experience with Matlab and Python is a plus.
· Excellent problem-solving and analytical abilities.
· Ability to work independently and as part of a team.
· Strong organizational and time-management skills.
· A proactive and results-oriented approach.
· Willingness to travel when needed.
Posted 4 weeks ago
5.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Location: Pune, Chennai
Job Description
We are looking for an experienced Calibration Engineer, capable of leading and technically developing a team of Application Engineers, who will be assigned the following responsibilities:
· Manage the assigned Application Projects/Workpackages, fulfilling the defined constraints on timing, quality, and costs.
· Define and update the Application project plan for the assigned projects.
· Manage project plan changes proactively during the development phase.
· Lead and coordinate a team of application engineers allocated to the Application Projects.
· Manage calibration dataset integration and evolution.
· Adopt calibration methodologies, procedures, and tools shared by Marelli HQ.
· Promote effective solutions together with the Marelli HQ Application and Functions Design Teams.
· Guarantee compliance of the calibration process workflow with the standards defined by Marelli HQ.
· Promote the use of statistical analysis and big data management, in cooperation with Marelli HQ, to validate strategy performance and diagnosis robustness.
· Increase the technical skills of the resources and promote their professional growth.
· Act as the customer's technical reference for all issues related to calibration.
· Support calibration activities at the customer site, when requested.
· Confirm the calibration process with the car maker via label review.
· Coordinate the activities on the test development vehicles assigned to each project.
· Take part in calibration design reviews and risk analyses with the Marelli HQ team or with customers.
· Analyze and resolve vehicle fleet and vehicle market concerns.
Qualifications:
· Bachelor's Degree in Mechanical or Electronic Engineering (or equivalent).
· Minimum 5-10 years of experience as an Application Engineer with a focus on base engine calibration at the engine test bench.
· Proven experience in emissions, drivability, and OBD diagnosis.
· Prior experience as an Application Project Leader and Calibration Dataset Manager is highly preferred.
· In-depth knowledge of engine control systems.
· Strong understanding of OBD2 system calibration and regulations.
· Proficiency in the MS Office Suite and ETAS INCA.
· Experience with Matlab and Python is a plus.
· Excellent problem-solving and analytical abilities.
· Ability to work independently and as part of a team.
· Strong organizational and time-management skills.
· A proactive and results-oriented approach.
· Willingness to travel when needed.
Posted 4 weeks ago
2.0 - 5.0 years
7 - 7 Lacs
Pune, Chennai
Hybrid
KYC CANDIDATES PLEASE DON'T APPLY
Job Title: FCC Transaction Monitoring Investigator
Job description:
· Good understanding of AML Transaction Monitoring regulations.
· Monitor financial transactions for suspicious activity and identify financial-crime red flags in transactions flagged as potentially suspicious or as exceptions.
· Analyze large datasets, identify patterns, and detect anomalies that may indicate suspicious activity.
· Analyze and interpret information, exercising sound judgement and attention to detail.
· Conduct thorough research into customer records, public-domain information, transaction details, and other relevant information, which is vital for investigations.
· Communicate effectively, both in writing and verbally, to report findings, collaborate, and interact with stakeholders.
Requirements:
· 2-4 years of experience mandatory.
· Actimize skills are good to have.
Shift timing: 7 AM to 10 PM (shifts fall between these hours)
Work model: 3 days in office, 2 days work from home; 5 days working.
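As a toy illustration of the anomaly detection described above, the sketch below flags transactions whose amount deviates sharply from a customer's norm using a z-score rule. The data and threshold are hypothetical; production AML systems (Actimize, for example) apply far richer rules and models.

```python
import statistics

# Hypothetical transaction amounts for one customer (illustrative data).
amounts = [120.0, 95.0, 110.0, 130.0, 105.0, 5000.0, 115.0]

def flag_anomalies(values, threshold=3.0):
    # Flag values whose z-score against the sample mean exceeds the
    # threshold: a simple stand-in for the pattern/anomaly rules above.
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > threshold]

flagged = flag_anomalies(amounts, threshold=2.0)
```

Each flagged amount would then go to an investigator, who researches the customer and transaction context before deciding whether to escalate.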
Posted 4 weeks ago
2.0 - 5.0 years
6 - 7 Lacs
Chennai
Hybrid
KYC CANDIDATES PLEASE DON'T APPLY
Job Title: FCC Transaction Monitoring Investigator
Job description:
· Good understanding of AML Transaction Monitoring regulations.
· Monitor financial transactions for suspicious activity and identify financial-crime red flags in transactions flagged as potentially suspicious or as exceptions.
· Analyze large datasets, identify patterns, and detect anomalies that may indicate suspicious activity.
· Analyze and interpret information, exercising sound judgement and attention to detail.
· Conduct thorough research into customer records, public-domain information, transaction details, and other relevant information, which is vital for investigations.
· Communicate effectively, both in writing and verbally, to report findings, collaborate, and interact with stakeholders.
Requirements:
· 2-4 years of experience mandatory.
· Actimize skills are good to have.
Shift timing: 7 AM to 10 PM (shifts fall between these hours)
Work model: 3 days in office, 2 days work from home; 5 days working.
Interested candidates please email meghana.narasimhan@kiya.ai or call 9082501996.
Posted 4 weeks ago
5.0 years
0 Lacs
India
Remote
About the project: We are building LLM evaluation and training datasets to train LLMs on realistic software engineering problems. One of our approaches in this project is to build verifiable SWE tasks based on public repository histories in a synthetic, human-in-the-loop approach, while expanding dataset coverage to different types of tasks in terms of programming language, difficulty level, etc.
About the Role: We are looking for experienced software engineers (tech-lead level) who are familiar with high-quality public GitHub repositories and can contribute to this project. This role involves hands-on software engineering work, including development environment automation, issue triaging, and evaluating test coverage and quality.
Why Join Us? We are one of the world’s fastest-growing AI companies accelerating the advancement and deployment of powerful AI systems. You’ll be at the forefront of evaluating how LLMs interact with real code, influencing the future of AI-assisted software development. This is a unique opportunity to blend practical software engineering with AI research.
What does day-to-day look like:
· Analyze and triage GitHub issues across trending open-source libraries.
· Set up and configure code repositories, including Dockerization and environment setup.
· Evaluate unit test coverage and quality.
· Modify and run codebases locally to assess LLM performance in bug-fixing scenarios.
· Collaborate with researchers to design and identify repositories and issues that are challenging for LLMs.
· Opportunity to lead a team of junior engineers and collaborate on projects.
Required Skills:
· Minimum 5+ years of overall experience.
· Strong experience with at least one of the following languages: Rust.
· Proficiency with Git, Docker, and basic software pipeline setup.
· Ability to understand and navigate complex codebases.
· Comfortable running, modifying, and testing real-world projects locally.
· Experience contributing to or evaluating open-source projects is a plus.
Nice to Have:
· Previous participation in LLM research or evaluation projects.
· Experience building or testing developer tools or automation agents.
Mandatory Skills:
· 3-4+ years of relevant software development experience.
· Rust: minimum 3+ years of experience.
This is a short-term remote contract opportunity that requires working at least 40 hours per week in the US Pacific Time Zone. If this role suits you, please email your resume to admin@amrapali.org.in and mention your current CTC, expected CTC, and notice period.
Posted 4 weeks ago
2.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
As a leading financial services and healthcare technology company based on revenue, SS&C is headquartered in Windsor, Connecticut, and has 27,000+ employees in 35 countries. Some 20,000 financial services and healthcare organizations, from the world's largest companies to small and mid-market firms, rely on SS&C for expertise, scale, and technology.
Responsibilities
Job Description
· Take ownership of client accounts, including responsibility for implementation work that requires designing and delivering bespoke data models tailored to each client’s needs.
· Develop and implement new data logic and workflows in response to evolving client requirements, ensuring timely and high-quality delivery.
· Manage multiple projects in parallel, balancing hands-on contribution with oversight and prioritisation across initiatives.
· Build strong working relationships across teams and locations, collaborating effectively to achieve client and business goals.
· Solve complex analytical and data-related challenges independently, delivering practical, client-focused solutions.
· Communicate confidently and clearly with clients, internal teams, prospects, and senior leaders, providing product updates, analytical insights, and project progress.
· Actively contribute to the team’s innovation culture by proposing new initiatives or improvements to existing processes, products, and methodologies.
Experience, Skills & Qualifications
· 2 to 4 years of experience in the financial services industry in business analyst, product management, project management, or similar technical roles.
· Degree in Finance, Business, or Analytics preferred.
· Experience with data management or business intelligence tools (SQL, Python, Tableau, Power BI, Alteryx, R) is advantageous.
· A good understanding of capital markets concepts and familiarity with the datasets used by the hedge fund, private equity, and real assets industries is advantageous.
· Highly proficient MS Excel skills.
· Excellent communication and presentation skills.
· Composed, experienced, and professional in client interactions.
· A logical mind, capable of implementing and improving structured processes.
· Ability to find creative solutions to complex analytical problems.
· Demonstrable experience of tailoring solutions to meet specific requirements, and subsequently seeking to automate them.
Unless explicitly requested or approached by SS&C Technologies, Inc. or any of its affiliated companies, the company will not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. SS&C Technologies is an Equal Employment Opportunity employer and does not discriminate against any applicant for employment or employee on the basis of race, color, religious creed, gender, age, marital status, sexual orientation, national origin, disability, veteran status or any other classification protected by applicable discrimination laws.
Posted 4 weeks ago
6.0 years
7 - 10 Lacs
Bengaluru
On-site
Requisition ID: 7579
Bangalore, India
Enphase Energy is a global energy technology company and leading provider of solar, battery, and electric vehicle charging products. Founded in 2006, Enphase transformed the solar industry with our revolutionary microinverter technology, which turns sunlight into a safe, reliable, resilient, and scalable source of energy to power our lives. Today, the Enphase Energy System helps people make, use, save, and sell their own power. Enphase is also one of the fastest growing and innovative clean energy companies in the world, with approximately 68 million products installed across more than 145 countries. We are building teams that are designing, developing, and manufacturing next-generation energy technologies, and our work environment is fast-paced, fun, and full of exciting new projects. If you are passionate about advancing a more sustainable future, this is the perfect time to join Enphase!
The Sr. Data Scientist will be responsible for analyzing product performance in the fleet, providing support for the data management activities of the Quality/Customer Service organization, and collaborating with the Engineering, Quality, and CS teams and Information Technology.
What You Will Do
· Apply a strong understanding of industrial processes, sensor data, and IoT platforms, essential for building effective predictive maintenance models.
· Translate theoretical concepts into engineered features, creating features that capture important events or transitions within the data.
· Craft custom features that highlight patterns specific to the dataset or problem, enhancing model predictive power.
· Combine and synthesize information from multiple data sources to develop more informative features.
· Apply advanced knowledge of Apache Spark (PySpark, SparkSQL, SparkR) and distributed computing for efficient processing and analysis of large-scale datasets.
· Use Python, R, and SQL, writing optimized and efficient Spark code for data processing and model training.
· Work hands-on with cloud-based machine learning platforms such as AWS SageMaker and Databricks for scalable model development and deployment.
· Develop and implement custom statistical algorithms tailored to specific anomaly detection tasks.
· Apply statistical methods for identifying patterns and trends in large datasets, essential for predictive maintenance.
· Engineer features that highlight deviations or faults for early detection.
· Lead predictive maintenance projects from conception to deployment, collaborating with cross-functional teams.
· Extract temporal features, such as trends, seasonality, and lagged values, to improve model accuracy.
· Filter, smooth, and transform data for noise reduction and effective feature extraction.
· Optimize code for performance in high-throughput, low-latency environments.
· Deploy models into production, monitoring their performance and integrating them with CI/CD pipelines using AWS, Docker, or Kubernetes.
· Work with end-to-end analytical architectures, including data lakes, data warehouses, and real-time processing systems.
· Create insightful dashboards and reports using tools such as Power BI, Tableau, or custom visualization frameworks to communicate model results to stakeholders.
Who You Are and What You Bring
· Bachelor’s or Master’s degree/diploma in Engineering, Statistics, Mathematics, or Computer Science.
· 6+ years of experience as a Data Scientist, with a significant focus on predictive maintenance and anomaly detection.
· Strong problem-solving skills.
· Proven ability to work independently and accurately.
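The lagged-value and first-difference features mentioned above can be sketched in plain Python; a production version would use Spark or pandas over large sensor datasets, and the readings here are hypothetical.

```python
# Illustrative sensor readings for one device (hypothetical data).
readings = [10.0, 10.2, 10.1, 10.8, 11.5, 12.9, 15.0]

def lag_features(series, lags=(1, 2)):
    # For each time step, emit lagged values plus a first difference;
    # early steps without enough history are skipped.
    rows = []
    start = max(lags)
    for t in range(start, len(series)):
        row = {"value": series[t], "diff_1": series[t] - series[t - 1]}
        for k in lags:
            row[f"lag_{k}"] = series[t - k]
        rows.append(row)
    return rows

feats = lag_features(readings)
```

Features like these feed a predictive-maintenance model: a rising first difference, for instance, can signal the onset of a fault before a hard threshold is crossed.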
Posted 4 weeks ago
8.0 years
8 - 10 Lacs
Bengaluru
On-site
Requisition ID: 8090 Bangalore, India Enphase Energy is a global energy technology company and leading provider of solar, battery, and electric vehicle charging products. Founded in 2006, Enphase transformed the solar industry with our revolutionary microinverter technology, which turns sunlight into a safe, reliable, resilient, and scalable source of energy to power our lives. Today, the Enphase Energy System helps people make, use, save, and sell their own power. Enphase is also one of the fastest growing and innovative clean energy companies in the world, with approximately 68 million products installed across more than 145 countries. About the role At Enphase - Warranty Engineer, your work won't just sit in a stat tools/dashboards — it will drive decisions at the highest levels. You'll have the opportunity to present insights directly to the CEO, VPs, and senior leadership, influencing strategy and shaping the future of our products and operations. This is a high-impact role for those who don’t just crunch numbers — they connect the dots, accountable, tell compelling stories with data, and turn insights into action. If you're looking to work on meaningful problems, move fast, and be seen — this is where you belong. The Warranty Engineer will identify customer warranty claims and costs and coordinate the division-wide effort to reduce Customer warranty claims with components and warranty liability. To identify, analyze, and reduce customer warranty claims, costs and incidents to eliminate warranty liability. Works directly with customers to gain quick resolution to all product complaints and will be responsible for analyzing and interpreting product quality data through the entire product lifecycle. Provides essential support for the data management activities of the Quality organization. Collaborates with Engineering /CS teams and Information Technology. 
What you will do:
• Monitor customer warranty activity and customer complaints
• Corroborate failure types and root causes through analysis of warranty returns
• Assemble teams from division-wide resources to investigate the true cause of warranty claims and actual warranty failures
• Manage warranty improvement projects to address and resolve the most significant sources of warranty claims and warranty liability
• Provide all relevant feedback and lessons learned as inputs to engineering
• Process warranty data into statistical reliability models
• Apply AIAG core tools (FMEA, SPC, MSA, PPAP, APQP, control plans); knowledge of these is essential
• Perform warranty reliability Weibull analysis to obtain failure rates, time-to-fail (TTF) simulation, and cumulative DPPM simulation
• Report all warranty results and progress regularly to our primary customers and within the division
• Act as 8D Manager, with ownership of the 8D system/process, and be the primary driver of regular open 8D reviews
• Provide support for production issues as prevention to warranty
• Function as a resource to departments and process improvement teams in the areas of quality data management
• Work with stakeholders throughout the Quality organization to identify opportunities for leveraging company data to drive product/process improvements
• Work with cross-functional teams to understand their needs and run queries to support their analyses
• Develop deep knowledge of the dataset and be able to drill both down and wide
• Run SQL/R/Python queries on various databases to identify patterns and diagnose problems
• Identify opportunities for automation, monitoring, and visualization using automated tools

Who you are and what you bring:
• BE/B.Tech degree or higher in Engineering or Computer Science
• 8+ years of work experience, preferably in automotive, automotive supplier, solar, or other related industries in the field of warranty engineering, plus working experience in Process Quality and Customer Quality
• Engineering-level understanding of statistics and reliability
• Outstanding team-building and communication skills, as the position works with Plant Engineering, Product Engineering, Quality, Finance, and Supply Chain
• Disciplined 8D problem solving / failure analysis of materials and processes, including manufacturing issue troubleshooting
• Working experience with Minitab, JMP, or another statistical analysis tool is a must
• Six Sigma Black Belt / Shainin Red X Apprentice / Shainin Red X Journeyman preferred
• Ability to design, develop, and implement data-driven strategies
• Knowledge of programming/query languages such as Python/R/SQL
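The Weibull warranty analysis called out above can be sketched in a few lines. This is only an illustrative example, not Enphase tooling: the sample data are simulated, and the fit uses median-rank regression as one common, simple estimation approach. It fits a two-parameter Weibull to time-to-failure data and derives a cumulative DPPM figure.

```python
import math
import random

def weibull_mrr_fit(ttf):
    """Estimate Weibull shape (beta) and scale (eta) by median-rank regression."""
    n = len(ttf)
    xs, ys = [], []
    for i, t in enumerate(sorted(ttf), start=1):
        f = (i - 0.3) / (n + 0.4)  # Bernard's median-rank approximation
        xs.append(math.log(t))
        ys.append(math.log(-math.log(1.0 - f)))
    # ordinary least-squares line: ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    eta = math.exp(mx - my / beta)
    return beta, eta

def cumulative_dppm(beta, eta, t):
    """Cumulative defective parts per million at time t: F(t) * 1e6."""
    return (1.0 - math.exp(-((t / eta) ** beta))) * 1e6

random.seed(0)
# simulate 500 field times-to-failure from a known Weibull (shape 2.0, scale 1000 h)
sample = [random.weibullvariate(1000.0, 2.0) for _ in range(500)]
beta, eta = weibull_mrr_fit(sample)
print(f"beta={beta:.2f} eta={eta:.0f} DPPM@500h={cumulative_dppm(beta, eta, 500):.0f}")
```

In practice the same fit would run against actual warranty return dates, and dedicated tools such as Minitab or JMP offer maximum-likelihood fits with confidence bounds.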
Posted 4 weeks ago
8.0 years
5 - 9 Lacs
Bengaluru
On-site
Requisition ID: 8103 Bangalore, India

Enphase Energy is a global energy technology company and leading provider of solar, battery, and electric vehicle charging products. Founded in 2006, Enphase transformed the solar industry with our revolutionary microinverter technology, which turns sunlight into a safe, reliable, resilient, and scalable source of energy to power our lives. Today, the Enphase Energy System helps people make, use, save, and sell their own power. Enphase is also one of the fastest growing and innovative clean energy companies in the world, with approximately 68 million products installed across more than 145 countries. If you are passionate about advancing a more sustainable future, this is the perfect time to join Enphase!

About the role

As a Staff Data Analyst at Enphase, your work won't just sit in a dashboard — it will drive decisions at the highest levels. You'll have the opportunity to present insights directly to the CEO, VPs, and senior leadership, influencing strategy and shaping the future of our products and operations. This is a high-impact role for those who don't just crunch numbers — they connect the dots, tell compelling stories with data, and turn insights into action. If you're looking to work on meaningful problems, move fast, and be seen — this is where you belong.

The Staff Data Analyst will be responsible for analyzing and interpreting product quality data through the entire product lifecycle. The role provides essential support for the data management activities of the Quality organization and collaborates with Engineering/CS teams and Information Technology.
What you will do
• Function as a resource to departments and process improvement teams in the areas of quality data management
• Work with stakeholders throughout the Quality organization to identify opportunities for leveraging company data to drive product/process improvements
• Work with cross-functional teams to understand their needs and run queries to support their analyses
• Develop deep knowledge of the dataset and be able to drill both down and wide
• Run SQL/R/Python queries on various databases to identify patterns and diagnose problems
• Identify opportunities for automation, monitoring, and visualization using automated tools

Who you are and what you bring
• BE/B.Tech degree or higher in Engineering or Computer Science
• 8+ years' experience in data science required
• Expertise in programming/query languages such as Python (Matplotlib, Plotly libraries), R, and SQL
• Ability to design, develop, and implement data-driven strategies and machine learning models
• Ability to understand various data structures and common methods of data transformation
• Expertise in delivering analysis results using custom-made visualizations
• Ability to interpret and summarize presentations for executives
• Strong verbal and written communication skills required
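As a hypothetical illustration of the "run SQL queries to identify patterns" duty, the sketch below builds an in-memory returns table and queries it for the failure modes that dominate the data. The table schema, product names, and failure-mode labels are invented for the example, not an actual Enphase schema.

```python
import sqlite3

# Illustrative quality dataset: one row per returned unit.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE returns (product TEXT, failure_mode TEXT)")
conn.executemany(
    "INSERT INTO returns VALUES (?, ?)",
    [("IQ8", "comm_loss"), ("IQ8", "comm_loss"), ("IQ7", "dc_fault"),
     ("IQ8", "overheat"), ("IQ7", "comm_loss"), ("IQ8", "comm_loss")],
)

# Aggregate by failure mode to surface the dominant pattern.
rows = conn.execute(
    """SELECT failure_mode, COUNT(*) AS n
       FROM returns
       GROUP BY failure_mode
       ORDER BY n DESC"""
).fetchall()
for mode, n in rows:
    print(mode, n)
```

The same GROUP BY pattern scales to production databases; the analysis layer (Matplotlib/Plotly charts, Pareto views) would then sit on top of result sets like this one.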
Posted 4 weeks ago
0 years
4 - 6 Lacs
Bengaluru
On-site
The Content Operations and Managed Services team is responsible for providing financial and non-financial information to clients on companies across the globe. The Environment, Social and Corporate Governance (ESG) team provides integrated financial and extra-financial information, which is essential for understanding the long-term performance and risk profile of major corporations. A Content Analyst must research documents relating to a company and collect environmental and social data from the sourced documents. They should also handle queries pertaining to the team and provide expert advice on specific content knowledge relating to quality concepts and techniques in order to drive improvement activities within the content collection team and the quality & service groups. They are also responsible for delivering on all tasks as per the strategic plans set for the ESG team, and must have hands-on experience with the database and be able to handle database queries independently.

ESSENTIAL RESPONSIBILITIES
• Be responsible for database administration and sourcing of exchange and contributed data.
• Stay up to date with the latest collection policies and procedures and other significant developments pertaining to different markets which would impact data collection.
• Assist in projects with an emphasis on data content.
• Review the work of the analyst/data provider, taking responsibility for data quality, accuracy, timeliness, and completeness within the team.
• Resolve client queries in a timely manner, with very high standards of quality of response and customer satisfaction; be able to take the right decisions quickly and independently.
• Coach analysts on data collection policies and procedures.
• Continuously find ways to improve quality, reduce the number of service requests from clients, and help meet service targets.
• Continuously suggest improvement ideas on collection tools and internal processes, including permanent fixes as well as ongoing actions for preventing client queries.
• Carry out special projects or specialized tasks related to the ESG dataset as per the strategic plans.
• Collaborate effectively with all global teams and the data provider to deliver high-quality data to our clients.
• Cultivate a learning and positive work environment.
• Communicate and liaise with internal and external departments for resolution of data queries, including direct contact with customers.
• Improve efficiencies in work practices by implementing automation where possible.
• Maximize usage of available tools and support to maintain and improve content quality during daily operations.
• Complete client requests and ongoing projects for respective areas.
• Ensure data quality, accuracy, timeliness, and completeness per company standards for the covered product/market.
• Adhere to change control procedures, Content Operations standards, and current data policies and procedures.
• Take responsibility for quality improvement.

SCOPE & IMPACT
• Research company websites and filings to understand the reports and accurately identify the data to be captured.
• Accountable for the quality of own work, delivered within agreed time frames and quality standards; quality of work is reviewed by the manager to ensure it is delivered within set parameters of timeliness, accuracy, and completeness.
• Works independently within a defined area, taking ownership of problems within own area of knowledge.
• Makes recommendations for process improvements to the immediate manager; decisions are made within guidelines set by others.

TECHNICAL / PROFESSIONAL SKILLS & COMPETENCIES
• Communicate effectively, both internally and externally.
• Able to work under pressure; there may be a need to over-stretch during the busy season to manage volumes.
• Strong understanding of financial markets, news, and current events; detail-oriented. Awareness of news and general knowledge is beneficial.
• Excellent attention to detail, eagerness to learn new things, and ability to multitask.
• Strong knowledge of the dataset, product, and data flow.
• Bachelor's degree required.
• Flexibility with 24/5 shifts.

LSEG is a leading global financial markets infrastructure and data provider. Our purpose is driving financial stability, empowering economies and enabling customers to create sustainable growth. Our purpose is the foundation on which our culture is built. Our values of Integrity, Partnership, Excellence and Change underpin our purpose and set the standard for everything we do, every day. They go to the heart of who we are and guide our decision making and everyday actions. Working with us means that you will be part of a dynamic organisation of 25,000 people across 65 countries. However, we will value your individuality and enable you to bring your true self to work so you can help enrich our diverse workforce. You will be part of a collaborative and creative culture where we encourage new ideas and are committed to sustainability across our global business. You will experience the critical role we have in helping to re-engineer the financial ecosystem to support and drive sustainable economic growth. Together, we are aiming to achieve this growth by accelerating the just transition to net zero, enabling growth of the green economy and creating inclusive economic opportunity. LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days and wellbeing initiatives. We are proud to be an equal opportunities employer.
This means that we do not discriminate on the basis of anyone’s race, religion, colour, national origin, gender, sexual orientation, gender identity, gender expression, age, marital status, veteran status, pregnancy or disability, or any other basis protected under applicable law. Conforming with applicable law, we can reasonably accommodate applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Please take a moment to read this privacy notice carefully, as it describes what personal information London Stock Exchange Group (LSEG) (we) may hold about you, what it’s used for, how it’s obtained, your rights, and how to contact us as a data subject. If you are submitting as a Recruitment Agency Partner, it is essential and your responsibility to ensure that candidates applying to LSEG are aware of this privacy notice.
Posted 4 weeks ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Candidates ready to join immediately can share their details via email for quick processing. 📌 CCTC | ECTC | Notice Period | Location Preference nitin.patil@ust.com Act fast for immediate attention! ⏳📩

Key Responsibilities:
• Data Extraction: Extract data from diverse sources while ensuring accuracy and completeness.
• Data Transformation: Perform data cleaning and validation, and apply business rules to transform raw data into a structured format for analysis.
• Data Loading: Load transformed data into target systems and design efficient data models and workflows.
• ETL Process Management: Design, develop, implement, and maintain ETL processes to integrate data efficiently into data warehouses or analytics platforms.
• Performance Optimization: Optimize and tune ETL processes for performance improvements, monitor jobs, and troubleshoot production issues.
• Data Quality and Governance: Ensure the quality, integrity, and compliance of data according to organizational and regulatory standards.
• Collaboration & Documentation: Work with business stakeholders to understand data requirements, document ETL workflows, and ensure proper communication.

Tool-Specific Responsibilities:
• Leverage DataStage for designing and building complex ETL jobs.
• Use Azure Data Factory for scalable cloud-based integration and orchestration.
• Develop and maintain solutions for Snowflake data warehousing.
• Utilize SQL Server to manage data extraction and transformation processes.
• Implement DataStage Sequencers, Parallel Jobs, Aggregators, Joins, Merges, Lookups, etc.
• Provide support in resolving integration-related production issues following the change management process.

Key Focus: Ensuring efficient, accurate, and secure data flow for the organization's data warehousing and analytics needs.

Must-Have Skills:
• Education: Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
• ETL Tools: 7+ years of hands-on experience in DataStage (V8.5 or higher). Expertise in DataStage V11.3 and 8.7. Strong experience in DataStage design and parallel jobs (e.g., Aggregator, Merge, Lookup, Source dataset, Change Capture). Advanced knowledge of UNIX and shell scripting.
• Azure Data Factory (ADF): 3+ years of experience in designing, developing, and managing Azure Data Factory pipelines. Proficient in using ADF connectors for integration with different data sources and destinations. Experience in ADF Data Flows and pipeline orchestration.
• Database & SQL: 7+ years of experience in Microsoft SQL Server, including experience in writing and optimizing SQL queries. 3+ years of experience in DB2 UDB administration and support. Experience in creating and managing SQL Server Agent jobs and SSIS packages. Hands-on experience in data warehousing solutions and data modeling with SQL Server.
• Data Quality & Governance: Ability to ensure high data integrity and governance throughout ETL processes.

Good-to-Have Skills:
• Experience with Snowflake data warehouse solutions.
• Familiarity with cloud-based ETL tools and technologies.
• Knowledge of Kafka (basic understanding) for stream processing and integration.
• Experience with report solution/design and building automated reports using SQL Server and other reporting tools.
• Experience with implementing data security and compliance processes in ETL.

Role Requirements:
• Problem-Solving Skills: Ability to troubleshoot issues related to ETL processes and data integration.
• Collaboration: Ability to work effectively in a cross-functional team with business analysts, data engineers, and other stakeholders.
• Attention to Detail: Strong focus on ensuring the accuracy and consistency of data throughout the ETL pipeline.
• Communication: Excellent communication skills for documentation and reporting purposes.
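DataStage and Azure Data Factory are configured through their own graphical designers, so the extract/transform/load flow in this posting can only be sketched generically. The plain-Python example below, with an invented CSV source and an invented validation rule, illustrates the pattern of extracting raw rows, cleaning and validating them, and loading the result into a target table.

```python
import csv
import io
import sqlite3

# Illustrative raw source: one row has a missing amount, and country codes are inconsistent.
RAW_CSV = "order_id,amount,country\n1, 100 ,in\n2,,us\n3, 250 ,IN\n"

def extract(text):
    """Extract: read raw rows from the (simulated) source."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: apply cleaning and a validation rule, normalise country codes."""
    out = []
    for r in rows:
        amt = r["amount"].strip()
        if not amt:
            continue  # validation rule: amount is mandatory, drop incomplete rows
        out.append((int(r["order_id"]), float(amt), r["country"].strip().upper()))
    return out

def load(rows, conn):
    """Load: write cleaned rows into the target table."""
    conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER, amount REAL, country TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
print(conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone())  # (2, 350.0)
```

In DataStage the same stages would be parallel-job operators (e.g., a Transformer with a constraint dropping null amounts); in ADF they would be a Copy activity plus a Data Flow with a filter transformation.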
Posted 4 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Annotation Quality Lead
Experience: 5+ Years
Location: Hyderabad
Notice Period: 15 days or less / Immediate

A Day in the Life
• Oversee quality checks, which involve interdisciplinary and intradisciplinary datasets, as well as randomized assessments to ensure the accuracy and consistency of annotations.
• Perform intra- and inter-annotator quality control measures to detect and correct discrepancies within the annotation workflow.
• Conduct regular audits to ensure compliance with standardized annotation practices.
• Work closely with the Annotation Team to support ongoing quality assurance of annotation infrastructure.
• Assist in creating and maintaining a Gold Standard database with reliable labels for quality benchmarking.
• Conduct randomized checks to ensure annotation quality across the dataset and assess annotation performance.
• Coordinate with the Annotation Program Liaison to review project objectives and quality standards.
• Define appropriate sample sizes and quality control datasets, working directly with scientists and engineers on each study.
• Lead training sessions or competency evaluations to ensure adherence to annotation standards prior to project initiation.

Must Have
• Bachelor’s or Master’s degree in Radiology, Medical Imaging, Pharmacy, Biomedical Engineering, or a related field; a statistical background is a plus.
• Expertise in medical imaging modalities (CT, MRI, X-ray, ultrasound) and proficiency with medical annotation tools (e.g., ITK-SNAP, 3D Slicer, Labelbox, V7 Labs).
• Strong understanding of clinical anatomy, pathology, and medical terminology.
• Proven experience in medical image annotation or quality assurance, with deep familiarity with DICOM, PACS, and other medical imaging systems, or in a similar role within data-driven environments.
• Strong analytical skills and meticulous attention to detail.
• Effective communication and collaboration skills for cross-functional teamwork.
• Familiarity with annotation tools and quality control software. • Knowledge of regulatory requirements, including HIPAA and GDPR compliance, for handling medical data. Principal Working Relationship • Reports to AI / Data Science Manager. • Collaborates closely with annotation specialists, scientists, and engineers across interdisciplinary teams. Nice to Haves • Experience with AI-driven medical image analysis models and deep learning techniques. • Knowledge of automated annotation tools or semi-automated pipelines. • Familiarity with clinical domains like oncology, cardiology, or pathology is a plus.
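One concrete form of the inter-annotator quality control described above is Cohen's kappa, which measures agreement between two annotators beyond chance. The sketch below computes it from scratch; the labels and any pass/fail threshold are illustrative assumptions, not this team's actual protocol.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' labels over the same items."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n        # observed agreement
    ca, cb = Counter(a), Counter(b)
    # chance agreement: probability both pick the same label independently
    pe = sum(ca[l] * cb[l] for l in set(ca) | set(cb)) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical labels from two annotators on the same six images.
ann1 = ["tumor", "normal", "tumor", "tumor", "normal", "normal"]
ann2 = ["tumor", "normal", "normal", "tumor", "normal", "normal"]
kappa = cohens_kappa(ann1, ann2)
print(f"kappa={kappa:.3f}")  # pairs below an agreed threshold would be flagged for review
```

Values near 1 indicate strong agreement and values near 0 indicate chance-level agreement, so a quality lead might require, say, kappa above an agreed cutoff before a dataset enters the Gold Standard database.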
Posted 4 weeks ago