Senior Security Research Engineer - AI Security

3 - 7 years

2 - 9 Lacs

Posted: 1 day ago | Platform: Glassdoor


Work Mode

On-site

Job Type

Part Time

Job Description

Harness is a high-growth company that is disrupting the software delivery market. Our mission is to enable the 30 million software developers in the world to deliver code to their users reliably, efficiently, securely and quickly, increasing customers’ pace of innovation while improving the developer experience. We offer solutions for every step of the software delivery lifecycle to build, test, secure, deploy and manage reliability, feature flags and cloud costs. The Harness Software Delivery Platform includes modules for CI, CD, Cloud Cost Management, Feature Flags, Service Reliability Management, Security Testing Orchestration, Chaos Engineering, Software Engineering Insights and continues to expand at an incredibly fast pace.
Harness is led by technologist and entrepreneur Jyoti Bansal, who founded AppDynamics and sold it to Cisco for $3.7B. We’re backed with $425M in venture financing from top-tier VC and strategic firms, including J.P. Morgan, Capital One Ventures, Citi Ventures, ServiceNow, Splunk Ventures, Norwest Venture Partners, Adage Capital Partners, Balyasny Asset Management, Gaingels, Harmonic Growth Partners, Menlo Ventures, IVP, Unusual Ventures, GV (formerly Google Ventures), Alkeon Capital, Battery Ventures, Sorenson Capital, Thomvest Ventures and Silicon Valley Bank.

Position Summary

Harness is expanding into DevSecOps with the integration of Traceable, and we're hiring a Senior Security Research Engineer to help lead the charge. This is a rare opportunity to work with visionary leaders like Jyoti Bansal and help shape security across the modern software delivery lifecycle, from code to cloud.

You'll drive research into cutting-edge threats targeting APIs, CI/CD pipelines, and emerging technologies like LLMs. Your work will directly influence product direction, detection capabilities, and customer protection strategies. This is a hands-on, high-impact role where you’ll collaborate across teams, interface with top-tier customers, and represent Harness at leading security conferences.

If you're passionate about solving hard security problems at scale, this role puts you at the center of innovation in a fast-growing DevSecOps platform.

Key Responsibilities

  • Conduct in-depth research into AI- and LLM-specific threats, including prompt injection, MCP/A2A vulnerabilities, and agentic behavior exploits.
  • Analyze, map, and expand upon the OWASP LLM Top 10 and GenAI Security categories, developing real-world reproductions, detections, and mitigations.
  • Design, prototype, and evaluate AI detection and protection products, including prompt firewalls, LLM security filters, behavior-based anomaly detectors, and AI threat classifiers.
  • Research and document emerging AI attack trends.
  • Develop and maintain AI attack simulation frameworks, red team automation tools, and automated testing pipelines for evaluating LLM and agentic system security.
  • Collaborate closely with engineering teams to operationalize AI security research into Traceable’s product and pipelines.
  • Perform red-teaming of LLMs and adversarial evaluations of AI-based agents, chatbots, and integrations.
  • Identify and validate detection signals for AI misuse, adversarial activity, and data exfiltration across API, model, and agent layers.
  • Contribute to Traceable’s AI security product roadmap, advising on detection capabilities and new defenses.
  • Participate in industry collaborations (OWASP, CSA, GenAI Security working groups) to strengthen the AI security ecosystem.
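As a minimal illustration of the kind of detection work described above (a hypothetical sketch for candidates, not part of any Harness or Traceable product), a naive heuristic prompt-injection check against OWASP LLM Top 10 category LLM01 might look like:

```python
import re

# Hypothetical heuristic patterns for common prompt-injection phrasings.
# Production detectors combine many more signals (semantics, context, models).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) (instructions|prompts)", re.I),
    re.compile(r"disregard (the|your) (system|previous) (prompt|instructions)", re.I),
    re.compile(r"reveal (the|your) (system prompt|hidden instructions)", re.I),
]


def score_prompt(text: str) -> float:
    """Return a naive risk score in [0, 1] based on pattern hits."""
    hits = sum(1 for pattern in INJECTION_PATTERNS if pattern.search(text))
    return min(1.0, hits / len(INJECTION_PATTERNS))


def is_suspicious(text: str, threshold: float = 0.3) -> bool:
    """Flag inputs whose heuristic score meets the threshold."""
    return score_prompt(text) >= threshold
```

Pattern-matching alone is easy to evade; the role's focus on behavior-based anomaly detectors and AI threat classifiers reflects that real defenses go well beyond this kind of sketch.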

Required Skills & Experience

  • 3–7 years in security research, application security, or security engineering.
  • Deep understanding of LLM architectures, AI/ML pipelines, and AI agent frameworks.
  • Strong expertise in AI threats, especially those outlined in the OWASP LLM Top 10 and beyond.
  • Proven experience performing LLM red teaming, prompt injection testing, or adversarial evaluation of AI systems.
  • Experience in mitigating threats in AI and LLM-based systems.
  • Proficient in Python, with hands-on experience developing proof-of-concept exploits, automation tools, or detectors.
  • Familiarity with OWASP standards, including OWASP GenAI Security, the OWASP LLM Top 10, the OWASP API Security Top 10, and the OWASP Top 10.
  • Demonstrated ability to integrate AI or security research into production-grade detection or protection systems.
  • Prior experience building or contributing to automated security testing tools for AI or LLM applications.
  • Working knowledge of Java applications and secure coding practices.
  • Familiarity with building agentic workflows including attack simulation and detection.
  • Experience with vector databases, retrieval-augmented generation (RAG) systems, model context communication and storage, and AI data-sharing mechanisms.
  • Strong analytical, communication, and technical documentation skills.

Nice To Have

  • Contributions to OWASP AI/LLM Top 10, GenAI Security, or open-source AI security projects.
  • Research or engineering experience with LLM firewalls or AI security middleware.
  • Familiarity with LLMOps, AI supply chain security, and model governance frameworks.
  • Background in API security, runtime protection, or detection engineering.
  • Publications, blogs, or conference talks on AI/LLM security or adversarial machine learning.

Work Location

This role will be based out of our Bengaluru, India office in a hybrid capacity.


All qualified applicants will receive consideration for employment without regard to race, color, religion, sex or national origin.

Note on Fraudulent Recruiting/Offers

We have become aware that there may be fraudulent recruiting attempts being made by people posing as representatives of Harness. These scams may involve fake job postings, unsolicited emails, or messages claiming to be from our recruiters or hiring managers.

Please note, we do not ask for sensitive or financial information via chat, text, or social media, and any email communications will come from the domain @harness.io. Additionally, Harness will never ask for any payment, fee to be paid, or purchases to be made by a job applicant. All applicants are encouraged to apply directly to our open jobs via our website. Interviews are generally conducted via Zoom video conference unless the candidate requests other accommodations.

If you believe that you have been the target of an interview/offer scam by someone posing as a representative of Harness, please do not provide any personal or financial information and contact us immediately at . You can also find additional information about this type of scam and report any fraudulent employment offers via the Federal Trade Commission's website, or you can contact your local law enforcement agency.

Harness.io

Software Development, DevOps

Los Angeles
