AI Research Internship

Experience: 0 years

Salary: 0 Lacs


Work Mode: On-site

Job Type: Internship

Job Description

AI Research Intern – Lexsi Labs

Commitment:

Start Date:


About Lexsi Labs


Lexsi Labs is one of the leading frontier labs focused on building aligned, interpretable, and safe superintelligence. Most of our work involves creating new methodologies for efficient alignment, interpretability-led strategies, and tabular foundational model research. Our mission is to create AI tools that empower researchers, engineers, and organizations to unlock AI's full potential while maintaining transparency and safety.


Our team thrives on a shared passion for cutting-edge innovation, collaboration, and a relentless drive for excellence. At Lexsi.ai, everyone contributes hands-on to our mission in a flat organizational structure that values curiosity, initiative, and exceptional performance.

As a research intern at Lexsi.ai, you will be uniquely positioned within our team to work on very large-scale industry problems and push forward the frontiers of AI technology. You will join a unique atmosphere where startup culture meets research innovation, with speed and reliability as the key outcomes.


What You’ll Do

We work on multiple frontier research ideas and challenges. If selected, you will collaborate closely with our research and engineering teams on one of the following areas:

  • Library Development:

    Architect and enhance open-source Python tooling for alignment, explainability, uncertainty quantification, robustness, and machine unlearning.
  • Explainability & Trust:

    Improve and find new observations using our and other SOTA XAI techniques (DLB, LRP, SHAP, Grad-CAM, Backtrace) across text, image, and tabular modalities to understand and present new model interpretability.
  • Mechanistic Interpretability:

    Probe internal model representations and circuits (using activation patching, feature visualization, and related methods) to diagnose failure modes and emergent behaviors; a patching sketch follows this list.
  • Uncertainty & Risk:

    Develop, implement, and benchmark uncertainty estimation methods (Bayesian approaches, ensembles, test-time augmentation) alongside robustness metrics for foundation models; see the ensemble sketch after this list.
  • Tabular Foundational Models (Orion):

    Work with our Tabular Foundational Model team to improve and launch new tabular foundational model architectures, and contribute to our open-source library TabTune.
  • Reinforcement Learning:

    Explore new ideas and algorithms around RL and our new RL fine-tuning library.
  • Research Contributions:

    Author and maintain experiment code, run systematic studies, and co-author whitepapers or conference submissions.
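
To give a concrete flavor of the Explainability & Trust area, here is a minimal sketch of attributing a tabular classifier's predictions with SHAP. The model and dataset are generic stand-ins, not Lexsi's tooling:

    # Minimal SHAP attribution sketch (illustrative model and data).
    import numpy as np
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_breast_cancer(return_X_y=True)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # TreeExplainer computes exact SHAP values for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:50])

    # Older shap versions return a list of per-class arrays; newer ones a 3-D array.
    vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

    # Mean |SHAP| per feature gives a global importance ranking.
    importance = np.abs(vals).mean(axis=0)
    print(importance.argsort()[::-1][:5])  # five most influential feature indices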

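For the Mechanistic Interpretability area, a bare-bones illustration of activation patching on a toy MLP; real work targets specific heads, neurons, or token positions in large models, but the mechanics are the same:

    # Activation patching sketch: splice a "clean" activation into a
    # "corrupted" forward pass and see how much output is restored.
    # Toy model and inputs are illustrative.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
    clean, corrupted = torch.randn(1, 8), torch.randn(1, 8)
    layer = model[1]  # patch at the ReLU output
    cache = {}

    def save_hook(module, inp, out):
        cache["h"] = out.detach()

    def patch_hook(module, inp, out):
        return cache["h"]  # returning a tensor replaces this layer's output

    handle = layer.register_forward_hook(save_hook)
    clean_out = model(clean)
    handle.remove()

    handle = layer.register_forward_hook(patch_hook)
    patched_out = model(corrupted)  # downstream layers now see the clean activation
    handle.remove()

    # Patching the whole layer fully restores the clean output here; in
    # practice one patches finer-grained components to localize behavior.
    print(clean_out, patched_out, model(corrupted), sep="\n")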

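And for the Uncertainty & Risk area, a small deep-ensemble sketch: disagreement among independently trained members is a standard proxy for epistemic uncertainty. The toy regression task is illustrative:

    # Deep-ensemble uncertainty sketch on toy 1-D regression.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    X = torch.linspace(-3, 3, 64).unsqueeze(1)
    y = torch.sin(X) + 0.1 * torch.randn(X.shape)

    def train_member(seed):
        torch.manual_seed(seed)  # independent initialization per member
        net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
        opt = torch.optim.Adam(net.parameters(), lr=1e-2)
        for _ in range(500):
            opt.zero_grad()
            nn.functional.mse_loss(net(X), y).backward()
            opt.step()
        return net

    ensemble = [train_member(s) for s in range(5)]
    x_test = torch.linspace(-5, 5, 11).unsqueeze(1)  # extends past training range
    with torch.no_grad():
        preds = torch.stack([m(x_test) for m in ensemble])
    mean, var = preds.mean(dim=0), preds.var(dim=0)
    print(var.squeeze())  # variance grows off-distribution: epistemic uncertainty

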
General Required Qualifications

  • Strong Python expertise:

    writing clean, modular, and testable code.
  • Theoretical foundations:

    deep understanding of machine learning and deep learning principles, plus hands-on experience with PyTorch.
  • Transformer architectures & fundamentals:

    comprehensive knowledge of attention mechanisms, positional encodings, tokenization, and training objectives in BERT, GPT, LLaMA, T5, MoE, Mamba, etc. (an attention sketch follows this list).
  • Version control & CI/CD:

    Git workflows, packaging, documentation, and collaborative development practices.
  • Collaborative mindset:

    excellent communication, peer code reviews, and agile teamwork.

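As a concrete reference point for the transformer fundamentals above, a from-scratch sketch of single-head scaled dot-product attention (shapes and the causal flag are illustrative):

    # Scaled dot-product attention, the core op behind BERT/GPT-style models.
    import math
    import torch

    def attention(q, k, v, causal=False):
        # q, k, v: (batch, seq_len, d_k)
        scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
        if causal:  # mask future positions for GPT-style decoding
            mask = torch.triu(torch.ones(scores.shape[-2:], dtype=torch.bool), diagonal=1)
            scores = scores.masked_fill(mask, float("-inf"))
        weights = torch.softmax(scores, dim=-1)  # attention distribution per query
        return weights @ v

    q = k = v = torch.randn(2, 4, 8)
    print(attention(q, k, v, causal=True).shape)  # torch.Size([2, 4, 8])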

Preferred Domain Expertise (any one of these is sufficient):

  • Explainability:

    applied experience with XAI methods such as DLB, SHAP, LIME, IG, LRP, DL-Backtrace, or Grad-CAM.
  • Mechanistic interpretability:

    familiarity with circuit analysis, activation patching, and feature visualization for neural network introspection.
  • Uncertainty estimation:

    hands-on with Bayesian techniques, ensembles, or test-time augmentation.
  • Quantization & pruning:

    applying model compression to optimize size, latency, and memory footprint (see the quantization sketch after this list).
  • LLM Alignment techniques:

    crafting and evaluating few-shot, zero-shot, and chain-of-thought prompts; experience with RLHF workflows, reward modeling, and human-in-the-loop fine-tuning.
  • Tabular Foundational Models:

    hands-on experience using or improving TFMs such as Orion, TabPFN, or TabICL.
  • Post-training adaptation & fine-tuning:

    practical work with full-model fine-tuning and parameter-efficient methods (LoRA, adapters), instruction tuning, knowledge distillation, and domain specialization; a LoRA sketch follows this list.
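
To illustrate the quantization item above, a minimal post-training dynamic-quantization sketch in PyTorch; the toy model is a stand-in:

    # Dynamic quantization: nn.Linear weights stored as int8 and
    # dequantized on the fly at inference. Illustrative toy model.
    import io
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
    quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

    def size_bytes(m):
        buf = io.BytesIO()
        torch.save(m.state_dict(), buf)  # serialized size as a rough footprint proxy
        return buf.getbuffer().nbytes

    print(size_bytes(model), "->", size_bytes(quantized))  # roughly 4x smaller
    print(quantized(torch.randn(1, 512)).shape)  # torch.Size([1, 10])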

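And for the parameter-efficient fine-tuning item, a minimal LoRA sketch: the pretrained layer is frozen and only a low-rank update, scaled by alpha/r, is trained. Names and sizes are illustrative:

    # LoRA sketch: frozen base linear layer plus a trainable low-rank update.
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        def __init__(self, base: nn.Linear, r=8, alpha=16):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # pretrained weights stay frozen
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero-init: update starts at 0
            self.scale = alpha / r

        def forward(self, x):
            return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

    layer = LoRALinear(nn.Linear(64, 64))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(trainable)  # only the low-rank factors train: 2 * 8 * 64 = 1024
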

Additional Experience (Nice-to-Have)

  • Publications:

    contributions to CVPR, ICLR, ICML, KDD, WWW, WACV, NeurIPS, ACL, NAACL, EMNLP, IJCAI or equivalent research experience.
  • Open-source contributions:

    prior work on AI/ML libraries or tooling.
  • Domain exposure:

    risk-sensitive applications in finance, healthcare, or similar fields.
  • Performance optimization:

    familiarity with large-scale training infrastructures.


What We Offer

  • Real-world impact:

    address high-stakes AI challenges in regulated industries.
  • Compute resources:

    access to GPUs, cloud credits, and proprietary models.
  • Competitive stipend:

    with potential for full-time conversion.
  • Authorship opportunities:

    co-authorship on papers, technical reports, and conference submissions.

