How to Block Competitor Mentions in ChatGPT API Responses
Prevent your AI chatbot from recommending rivals. Learn how SafePipe's Output Guard feature filters competitor names from GPT-4o, Claude, and DeepSeek responses.
The Problem
LLMs are trained on public data that includes your competitors. Without output filtering, your AI chatbot might recommend rivals when users ask comparison questions, directly costing you business.
The Secure Way (SafePipe Proxy)
Instead of maintaining regex patterns and handling edge cases yourself, use SafePipe's Zero-Knowledge proxy. We filter content entirely in memory with under 30 ms of added latency, hosted in Frankfurt (EU).
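To see why the DIY route gets painful, here is a minimal sketch of the regex-blocklist approach the proxy replaces. The `redactCompetitors` helper and the `COMPETITORS` list are hypothetical illustrations, not part of SafePipe:

```javascript
// Naive DIY approach: a regex blocklist applied to model output.
// Hypothetical helper -- shown only to illustrate the edge cases
// you would otherwise have to maintain yourself.
const COMPETITORS = ["Uber", "Lyft"];

function redactCompetitors(text) {
  let out = text;
  for (const name of COMPETITORS) {
    // Whole-word match, case-insensitive.
    out = out.replace(new RegExp(`\\b${name}\\b`, "gi"), "[REDACTED]");
  }
  return out;
}

// Works for the obvious case:
redactCompetitors("Uber is cheaper."); // "[REDACTED] is cheaper."
// ...but misses obfuscations like "U-b-e-r" or tokens split across
// markdown, and every new rival means another pattern to ship.
```

Each new competitor, spelling variant, or obfuscation trick means another deploy on your side, which is the maintenance burden the proxy absorbs.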
import OpenAI from "openai";

// Scenario: a user asks a comparison question
const userQuestion = "Is Uber cheaper than your service?";

// ❌ WITHOUT OUTPUT GUARD: risky
const unsafeClient = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const riskyResponse = await unsafeClient.chat.completions.create({
  model: "gpt-4o",
  messages: [
    {
      role: "system",
      content: "You are a customer support agent. Do not mention competitors.",
    },
    { role: "user", content: userQuestion },
  ],
});

// 🚨 GPT-4o might ignore the instruction and say:
// "Uber typically charges 15-20% less for short trips..."
// ✅ WITH SAFEPIPE OUTPUT GUARD: enforced
const safeClient = new OpenAI({
  apiKey: process.env.SAFEPIPE_API_KEY,
  baseURL: "https://safepipe.eu/api/v1",
  defaultHeaders: {
    "x-provider-key": process.env.OPENAI_API_KEY,
  },
});

// Configure the competitor blocklist in the SafePipe Dashboard:
// Settings → Output Guard → Competitor List: ["Uber", "Lyft", "Competitor Inc"]
const safeResponse = await safeClient.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: userQuestion }],
});

// If GPT-4o mentions "Uber", SafePipe either:
// 1. REDACT mode: replaces "Uber" with "[REDACTED]", or
// 2. BLOCK mode: returns HTTP 400 with the error "Competitor mention detected"
// Violation details are also exposed in response headers:
// X-SafePipe-Violation: competitor_block
// X-SafePipe-Count: 1

Why This Matters for Compliance
System prompts are soft suggestions: LLMs can ignore them during long conversations or under jailbreak attempts. SafePipe's Output Guard is a hard filter that scans every response before it reaches your user. If a competitor name is detected, we either redact it or block the entire response, so brand protection is enforced at the proxy layer rather than left to the model's prompt compliance.
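Your application still has to decide what to do when a response is redacted or blocked. The sketch below shows one way to handle both modes using a raw `fetch` call against the OpenAI-compatible endpoint shown above; the `askSupport` wrapper and its fallback message are hypothetical, while the status code and `X-SafePipe-*` header names come from the behavior described earlier:

```javascript
// Pure helper: detect whether SafePipe redacted anything in the reply text.
// "[REDACTED]" is the placeholder REDACT mode inserts (see above).
function hasRedactions(text) {
  return text.includes("[REDACTED]");
}

// Hypothetical wrapper so we can inspect both the HTTP status
// (BLOCK mode) and the violation headers on the raw response.
async function askSupport(userQuestion) {
  const res = await fetch("https://safepipe.eu/api/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.SAFEPIPE_API_KEY}`,
      "x-provider-key": process.env.OPENAI_API_KEY,
    },
    body: JSON.stringify({
      model: "gpt-4o",
      messages: [{ role: "user", content: userQuestion }],
    }),
  });

  if (res.status === 400) {
    // BLOCK mode: the whole response was rejected by the Output Guard.
    console.warn("Blocked:", res.headers.get("x-safepipe-violation"));
    return "Sorry, I can't answer that one. Please contact support.";
  }

  const data = await res.json();
  const reply = data.choices[0].message.content;

  if (hasRedactions(reply)) {
    // REDACT mode: competitor names were replaced with [REDACTED].
    console.warn("Redactions:", res.headers.get("x-safepipe-count"));
  }
  return reply;
}
```

In REDACT mode you may prefer to rephrase or suppress the sentence around the placeholder rather than show `[REDACTED]` verbatim; in BLOCK mode, always return a graceful fallback so the user never sees a raw HTTP error.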
Ready to implement content filtering?
Get your SafePipe API key in 2 minutes. No credit card required for the Free tier.
Related Guides
How to Prevent LLM Jailbreak Attacks on Your AI Application
Protect your ChatGPT/Claude API from prompt injection and jailbreak attempts. Learn SafePipe's anti-jailbreak system prompt + input validation techniques.
How to Redact Emails in Node.js Before Sending to OpenAI API
GDPR-compliant email redaction for Node.js developers using OpenAI. Learn the exact regex pattern and zero-latency proxy solution for PII protection.