
Mindgard

Mindgard is the leading provider of AI Red Teaming solutions. Spun out from over a decade of AI security research at Lancaster University and headquartered in Boston and London, Mindgard helps enterprises secure their AI models, agents, and applications across the AI lifecycle.

AI introduces risks that traditional security tools cannot detect, leaving organizations unable to find, measure, or secure their AI. Security teams struggle with a lack of visibility into AI activity and its attack surfaces. Difficulty reproducing agentic AI behavior creates uncertainty and compliance challenges. Ultimately, an inability to enforce AI controls heightens the risk of compromise.

Mindgard delivers AI detection and response through attack-driven defense, giving enterprises the ability to map their AI attack surface, measure and validate AI risk, and actively defend their AI.

Popular repositories

  1. prompt_jailbreak (Public)

    This repository demonstrates the use of a prompt jailbreak to expose information within a system prompt. Specifically, we target any LLM hosted on HuggingFace Inference Endpoints.

    Python · 12 stars · 1 fork

  2. document-rce-llm-agent (Public)

    This repository demonstrates the use of a Langchain Agent to carry out Remote Code Execution (RCE). Specifically, it involves opening a reverse shell on a target device hosting the Agent.

    Python · 9 stars · 3 forks

  3. pyLumo (Public)

    Secure Python API, CLI, and TUI for Proton Lumo with E2E encryption

    Python · 8 stars · 1 fork

  4. hidden-audio-jailbreaks (Public)

    This repository includes samples of audio provided to different chatbots. Some of these samples have been modified to contain concealed messages. When these altered audios are converted by an audio-to-text model feeding into a large language model (LLM), they trigger a jailbreak.

    7 stars · 3 forks

  5. pickle-injection-tooling (Public)

    Python · 6 stars · 2 forks

  6. mindgard-burp-extension (Public)

    Burp Intruder generator for running Mindgard tests against a chatbot

    Java · 4 stars · 2 forks
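The pickle-injection-tooling repository above targets Python's pickle deserialization. The underlying mechanism can be sketched in a few lines (this is a generic illustration, not code from the repo): an object's `__reduce__` method tells pickle which callable to invoke on load, so deserializing attacker-controlled bytes executes attacker-chosen code. A harmless `abs()` call stands in for the `os.system` a real payload would use.

```python
import pickle

class Payload:
    """Hypothetical malicious object. __reduce__ tells pickle to call an
    arbitrary callable during deserialization; real payloads invoke
    os.system or subprocess, but abs() keeps this demo harmless."""
    def __reduce__(self):
        return (abs, (-42,))

blob = pickle.dumps(Payload())   # the bytes an attacker would ship
result = pickle.loads(blob)      # deserializing runs abs(-42)
print(result)                    # 42 — the injected call executed
```

This is why `pickle.loads` must never be applied to untrusted input, including serialized ML model files.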

Repositories

Showing 10 of 13 repositories
  • mindgard-burp-extension Public

    Burp Intruder generator for running Mindgard tests against a chatbot

    Java · 4 stars · MIT license · 2 forks · 0 issues · 0 PRs · Updated Feb 16, 2026
  • mindgard-github-action-example Public

    Example github action adding a mindgard check to an MLOps pipeline

    2 stars · MIT license · 0 forks · 0 issues · 0 PRs · Updated Feb 4, 2026
  • .github Public
    0 stars · 0 forks · 0 issues · 0 PRs · Updated Jan 22, 2026
  • pyLumo Public

    Secure Python API, CLI, and TUI for Proton Lumo with E2E encryption

    Python · 8 stars · 1 fork · 0 issues · 1 PR · Updated Jan 9, 2026
  • public-resources Public

    0 stars · 0 forks · 0 issues · 0 PRs · Updated May 21, 2025
  • openai-llm-guard-proxy Public

    A mindgard CLI compatible OpenAI proxy with LLM-Guard input and output checking

    Python · 0 stars · 1 fork · 0 issues · 0 PRs · Updated Apr 7, 2025
  • PyRIT Public Forked from Azure/PyRIT

    The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and engineers to proactively identify risks in generative AI systems.

    Python · 0 stars · MIT license · 696 forks · 0 issues · 0 PRs · Updated Mar 7, 2025
  • chatbot-api-wrapper Public

    Exposes a JSON API to enable testing a web chatbot with the Mindgard CLI.

    Java · 0 stars · MIT license · 0 forks · 0 issues · 0 PRs · Updated Dec 10, 2024
  • mindgard-interview Public

    Exercise for interview candidates

    TypeScript · 0 stars · 0 forks · 0 issues · 0 PRs · Updated Oct 16, 2024
  • hidden-audio-jailbreaks Public

    This repository includes samples of audio provided to different chatbots. Some of these samples have been modified to contain concealed messages. When these altered audios are converted by an audio-to-text model feeding into a large language model (LLM), they trigger a jailbreak.

    7 stars · 3 forks · 0 issues · 0 PRs · Updated May 8, 2024
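The document-rce-llm-agent repository above demonstrates Remote Code Execution through an agent's tooling. The core risk can be sketched without Langchain (the function name `run_shell_tool` below is illustrative, not from the repo): any tool that executes model-emitted strings hands code execution to whoever can influence the model's input, such as the author of a document the agent reads.

```python
import subprocess

def run_shell_tool(command: str) -> str:
    """The kind of 'shell tool' an agent framework can expose to a model:
    whatever string the LLM emits is executed on the host."""
    return subprocess.run(command, shell=True,
                          capture_output=True, text=True).stdout

# A poisoned document instructs the model, and the model dutifully emits an
# attacker-chosen command. Here it is a harmless echo; in the repo's scenario
# it opens a reverse shell back to the attacker.
model_output = "echo compromised"
print(run_shell_tool(model_output))
```

The defense is the same as for any injection flaw: treat model output as untrusted input and sandbox or allow-list anything a tool can execute.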

People

This organization has no public members. You must be a member to see who’s a part of this organization.
