Senior Security Research Engineer - AI Security

3 - 7 years

5 - 9 Lacs


Work Mode: Work from Office

Job Type: Full Time

Job Description

Position Summary

Harness is expanding into DevSecOps with the integration of Traceable, and we're hiring a Senior Security Research Engineer to help lead the charge. This is a rare opportunity to work with visionary leaders like Jyoti Bansal and help shape security across the modern software delivery lifecycle, from code to cloud.
You'll drive research into cutting-edge threats targeting APIs, CI/CD pipelines, and emerging technologies like LLMs. Your work will directly influence product direction, detection capabilities, and customer protection strategies. This is a hands-on, high-impact role where you'll collaborate across teams, interface with top-tier customers, and represent Harness at leading security conferences.
If you're passionate about solving hard security problems at scale, this role puts you at the center of innovation in a fast-growing DevSecOps platform.
Key Responsibilities
  • Conduct in-depth research into AI- and LLM-specific threats, including prompt injection, MCP/A2A vulnerabilities, and agentic behavior exploits.
  • Analyze, map, and expand upon the OWASP LLM Top 10 and GenAI Security categories, developing real-world reproductions, detections, and mitigations.
  • Design, prototype, and evaluate AI detection and protection products, including prompt firewalls, LLM security filters, behavior-based anomaly detectors, and AI threat classifiers.
  • Study, research and document emerging AI attack trends.
  • Develop and maintain AI attack simulation frameworks, red team automation tools, and automated testing pipelines for evaluating LLM and agentic system security.
  • Collaborate closely with engineering teams to operationalize AI security research into Traceable's product and pipelines.
  • Perform red-teaming of LLMs and adversarial evaluations of AI-based agents, chatbots, and integrations.
  • Identify and validate detection signals for AI misuse, adversarial activity, and data exfiltration across API, model, and agent layers.
  • Contribute to Traceable's AI security product roadmap, advising on detection capabilities and new defenses.
  • Participate in industry collaborations (OWASP, CSA, GenAI Security working groups) to strengthen the AI security ecosystem.
Required Skills & Experience
  • 3-7 years in security research, application security, or security engineering.
  • Deep understanding of LLM architectures, AI/ML pipelines, and AI agent frameworks.
  • Strong expertise in AI threats, especially those outlined in the OWASP LLM Top 10 and beyond.
  • Proven experience performing LLM red teaming, prompt injection testing, or adversarial evaluation of AI systems.
  • Experience in mitigating threats in AI and LLM-based systems.
  • Proficient in Python, with hands-on experience developing proof-of-concept exploits, automation tools, or detectors.
  • Familiarity with OWASP standards, including OWASP GenAI Security, OWASP LLM Top 10, OWASP API Top 10, OWASP Top 10.
  • Demonstrated ability to integrate AI or security research into production-grade detection or protection systems.
  • Prior experience building or contributing to automated security testing tools for AI or LLM applications.
  • Working knowledge of Java applications and secure coding practices.
  • Familiarity with building agentic workflows including attack simulation and detection.
  • Experience with vector databases, retrieval-augmented generation (RAG) systems, and AI data-sharing mechanisms such as model context communication and storage.
  • Strong analytical, communication, and technical documentation skills.
Nice to Have
  • Contributions to OWASP AI/LLM Top 10, GenAI Security, or open-source AI security projects.
  • Research or engineering experience with LLM firewalls or AI security middleware.
  • Familiarity with LLMOps, AI supply chain security, and model governance frameworks.
  • Background in API security, runtime protection, or detection engineering.
  • Publications, blogs, or conference talks on AI/LLM security or adversarial machine learning.
Work Location
This role will be based out of our Bengaluru, India office in a hybrid capacity.
