Prompts in WebMCP are reusable message templates that AI agents can retrieve and use to guide interactions. They provide a standardized way to create consistent AI conversations.
Prompts are part of the Model Context Protocol (MCP) specification. WebMCP implements prompts for browser environments, enabling web applications to expose conversational templates to AI agents.
Overview
| Concept | Description |
| --- | --- |
| Purpose | Generate pre-formatted messages for AI interactions |
| Registration | Via `registerPrompt()` or `provideContext()` |
| Arguments | Optional schema-validated parameters |
| Output | Array of role-based messages (user/assistant) |
When to Use Prompts
- **Standardized Interactions**: Create consistent conversation starters across your application
- **Template Messages**: Generate parameterized messages with validated arguments
- **Workflow Guidance**: Guide AI agents through specific interaction patterns
- **Context Injection**: Inject application-specific context into AI conversations
Basic Registration
Register a simple prompt without arguments:
```javascript
// Register a greeting prompt
navigator.modelContext.registerPrompt({
  name: 'greeting',
  description: 'A friendly greeting to start a conversation',
  async get() {
    return {
      messages: [
        {
          role: 'user',
          content: {
            type: 'text',
            text: 'Hello! How can you help me today?',
          },
        },
      ],
    };
  },
});
```
Prompts with Arguments
Use argsSchema to accept validated parameters:
```javascript
navigator.modelContext.registerPrompt({
  name: 'code-review',
  description: 'Request a code review with syntax highlighting',
  argsSchema: {
    type: 'object',
    properties: {
      code: {
        type: 'string',
        description: 'The code to review',
      },
      language: {
        type: 'string',
        description: 'Programming language',
        enum: ['javascript', 'typescript', 'python', 'rust'],
        default: 'javascript',
      },
      focus: {
        type: 'string',
        description: 'Specific areas to focus on',
        enum: ['performance', 'security', 'readability', 'all'],
        default: 'all',
      },
    },
    required: ['code'],
  },
  async get(args) {
    // Fallback values match the schema defaults
    const { code, language = 'javascript', focus = 'all' } = args;
    return {
      messages: [
        {
          role: 'user',
          content: {
            type: 'text',
            text: `Please review this ${language} code with focus on ${focus}:\n\n\`\`\`${language}\n${code}\n\`\`\``,
          },
        },
      ],
    };
  },
});
```
Multi-Message Prompts
Return multiple messages to establish conversation context:
```javascript
navigator.modelContext.registerPrompt({
  name: 'debug-session',
  description: 'Start an interactive debugging session',
  argsSchema: {
    type: 'object',
    properties: {
      error: { type: 'string', description: 'The error message' },
      context: { type: 'string', description: 'Additional context' },
    },
    required: ['error'],
  },
  async get(args) {
    return {
      messages: [
        {
          role: 'user',
          content: {
            type: 'text',
            text: `I'm encountering this error: ${args.error}`,
          },
        },
        {
          role: 'assistant',
          content: {
            type: 'text',
            text: "I understand you're facing an error. Let me help you debug it step by step. First, can you tell me what you were trying to do when this occurred?",
          },
        },
        {
          role: 'user',
          content: {
            type: 'text',
            text: args.context || "Here's the additional context...",
          },
        },
      ],
    };
  },
});
```
How AI Agents Use Prompts
When an AI agent wants to use a prompt, it calls getPrompt() through the MCP protocol. This invokes your registered get() handler:
```javascript
// AI agent calls getPrompt('code-review', { code: '...', language: 'javascript' })
// Your handler receives the arguments and returns messages
navigator.modelContext.registerPrompt({
  name: 'code-review',
  argsSchema: { /* ... */ },
  async get(args) {
    // This handler is called when the AI requests the prompt
    console.log('AI requested code review for:', args.language);
    return {
      messages: [ /* ... */ ],
    };
  },
});
```
The getPrompt() method is called by AI agents through the MCP protocol, not directly from your application code. Your get() handler is invoked automatically when an agent requests the prompt.
Listing Available Prompts
```javascript
const prompts = navigator.modelContext.listPrompts();

prompts.forEach(prompt => {
  console.log(`${prompt.name}: ${prompt.description}`);
  if (prompt.arguments) {
    console.log('  Arguments:', prompt.arguments);
  }
});
```
Dynamic vs Static Registration
Dynamic (registerPrompt)

Use registerPrompt() for prompts that may be added or removed at runtime:

```javascript
// Register dynamically
const registration = navigator.modelContext.registerPrompt({
  name: 'dynamic-prompt',
  // ...
});

// Unregister when no longer needed
registration.unregister();
```

Use for:

- Context-dependent prompts
- Feature-specific interactions
- Prompts tied to component lifecycle

Static (provideContext)

Use provideContext() for base prompts that should always be available:

```javascript
navigator.modelContext.provideContext({
  prompts: [
    { name: 'help', /* ... */ },
    { name: 'feedback', /* ... */ },
  ],
});
```

Use for:

- Core application prompts
- Always-available interactions
- Base prompt set
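Tying a dynamic prompt to a component lifecycle can be sketched as follows. The `FeedbackWidget` class and prompt name are illustrative, and the guard lets the sketch run even where `navigator.modelContext` is unavailable:

```javascript
// Sketch: register a prompt when a component mounts, unregister on unmount.
class FeedbackWidget {
  constructor() {
    this.registration = null;
  }

  mount() {
    // Guard: only register where WebMCP is available
    if (typeof navigator !== 'undefined' && navigator.modelContext) {
      this.registration = navigator.modelContext.registerPrompt({
        name: 'widget-feedback',
        description: 'Give feedback about this widget',
        async get() {
          return {
            messages: [{
              role: 'user',
              content: { type: 'text', text: 'I have feedback about this widget.' },
            }],
          };
        },
      });
    }
  }

  unmount() {
    // Unregister so agents no longer see a prompt for a removed component
    this.registration?.unregister();
    this.registration = null;
  }
}

const widget = new FeedbackWidget();
widget.mount();
// ...later, when the component is torn down:
widget.unmount();
```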
Schema Validation
Prompts support both JSON Schema and Zod for argument validation:
JSON Schema:

```javascript
argsSchema: {
  type: 'object',
  properties: {
    topic: {
      type: 'string',
      description: 'The topic to discuss',
      minLength: 1,
      maxLength: 100,
    },
    depth: {
      type: 'string',
      enum: ['basic', 'intermediate', 'advanced'],
      default: 'intermediate',
    },
  },
  required: ['topic'],
}
```

Zod:

```javascript
import { z } from 'zod';

argsSchema: {
  topic: z.string()
    .min(1)
    .max(100)
    .describe('The topic to discuss'),
  depth: z.enum(['basic', 'intermediate', 'advanced'])
    .default('intermediate')
    .describe('Discussion depth level'),
}
```
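Whether schema `default` values are applied before your handler runs may depend on the host, so a defensive handler can apply them itself. A minimal sketch, where `applyDefaults` is an illustrative helper and not part of the WebMCP API:

```javascript
// Sketch: apply `default` values declared in a JSON Schema argsSchema
// before building messages. applyDefaults is illustrative only.
function applyDefaults(schema, args = {}) {
  const out = { ...args };
  for (const [key, prop] of Object.entries(schema.properties ?? {})) {
    if (out[key] === undefined && 'default' in prop) {
      out[key] = prop.default;
    }
  }
  return out;
}

const schema = {
  type: 'object',
  properties: {
    topic: { type: 'string', description: 'The topic to discuss' },
    depth: {
      type: 'string',
      enum: ['basic', 'intermediate', 'advanced'],
      default: 'intermediate',
    },
  },
  required: ['topic'],
};

console.log(applyDefaults(schema, { topic: 'caching' }));
// → { topic: 'caching', depth: 'intermediate' }
```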
Best Practices
Choose prompt names that clearly indicate purpose. Use kebab-case for multi-word names:

- `code-review` instead of `codeReview`
- `bug-report` instead of `report`
Descriptions help AI agents decide when to use a prompt:

```javascript
// Good
description: 'Request a security-focused code review with vulnerability analysis'

// Too vague
description: 'Review code'
```
Always use argsSchema for prompts with parameters. Include descriptions for each property to help AI agents provide correct values.
Each prompt should serve a single purpose. Create multiple prompts instead of one complex multi-purpose prompt.
Verify prompts work correctly with actual AI integrations. Check that:
- Arguments are passed correctly
- Messages are formatted as expected
- AI responses are appropriate
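The first two points can be checked without any agent involved by keeping the prompt definition as a plain object and exercising its `get()` handler directly. A minimal sketch, using a simplified synchronous variant of the earlier code-review prompt (real handlers may be `async` and should then be awaited):

```javascript
// Sketch: unit-test a prompt handler in isolation, before registration.
const codeReviewPrompt = {
  name: 'code-review',
  description: 'Request a code review',
  get(args) {
    // Fallbacks mirror the schema defaults
    const { code, language = 'javascript', focus = 'all' } = args;
    return {
      messages: [{
        role: 'user',
        content: {
          type: 'text',
          text: `Please review this ${language} code with focus on ${focus}:\n\n\`\`\`${language}\n${code}\n\`\`\``,
        },
      }],
    };
  },
};

// Exercise the handler directly, without any MCP transport:
const result = codeReviewPrompt.get({ code: 'const x = 1;' });
const text = result.messages[0].content.text;
console.log(text.includes('const x = 1;')); // true
console.log(text.includes('javascript'));   // true
```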
Common Patterns
Contextual Prompts
Inject application state into prompts:
```javascript
navigator.modelContext.registerPrompt({
  name: 'help-with-current-page',
  description: 'Get help with the current page content',
  async get() {
    const pageTitle = document.title;
    const pageUrl = window.location.href;
    return {
      messages: [{
        role: 'user',
        content: {
          type: 'text',
          text: `I need help understanding this page: "${pageTitle}" (${pageUrl})`,
        },
      }],
    };
  },
});
```
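The same pattern extends to other page state, such as the user's current text selection. A sketch with an illustrative helper (`buildSelectionMessage` is not part of the WebMCP API; a browser handler would read `window.getSelection().toString()` live):

```javascript
// Sketch: fold the user's current selection into a contextual prompt message.
function buildSelectionMessage(pageTitle, selection) {
  const base = `I need help with the page "${pageTitle}".`;
  return selection
    ? `${base} Specifically, this selected text:\n\n"${selection}"`
    : base;
}

// In a browser handler you would read the live selection instead:
// const selection = window.getSelection().toString();
const text = buildSelectionMessage('Pricing', 'Pro plan: $20/mo');
console.log(text);
```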
Role-Based Templates
Different prompts for different user interactions:
```javascript
// For developers
navigator.modelContext.registerPrompt({
  name: 'explain-technical',
  description: 'Get a technical explanation',
  // ... detailed technical response
});

// For end users
navigator.modelContext.registerPrompt({
  name: 'explain-simple',
  description: 'Get a simple explanation',
  // ... simplified response
});
```
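Filling in one of the elided handlers, a simplified sketch might look like this (the wording and `argsSchema` are illustrative, and the registration is guarded for environments without WebMCP):

```javascript
// Sketch: a complete "simple explanation" prompt definition.
const explainSimple = {
  name: 'explain-simple',
  description: 'Get a simple explanation',
  argsSchema: {
    type: 'object',
    properties: {
      topic: { type: 'string', description: 'What to explain' },
    },
    required: ['topic'],
  },
  get(args) {
    return {
      messages: [{
        role: 'user',
        content: {
          type: 'text',
          text: `Explain ${args.topic} in plain language for a non-technical audience, avoiding jargon.`,
        },
      }],
    };
  },
};

// Register only where WebMCP is available
if (typeof navigator !== 'undefined' && navigator.modelContext) {
  navigator.modelContext.registerPrompt(explainSimple);
}
```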