
Prompt Engineering for Browser Extension AI Features

Extendable Team · 13 min read

Effective prompt engineering is the difference between an AI feature that delights users and one that frustrates them. Browser extensions are especially well positioned here: they have rich access to page and user context that most AI applications lack. This guide covers techniques for crafting prompts that produce reliable, useful outputs.

The Anatomy of an Extension Prompt

Extension prompts typically have four components:

┌────────────────────────────────────────┐
│           System Instructions          │  ← Define behavior, format, constraints
├────────────────────────────────────────┤
│           Page Context                 │  ← Current page content, metadata
├────────────────────────────────────────┤
│           User Context                 │  ← Preferences, history, selections
├────────────────────────────────────────┤
│           User Query                   │  ← What the user wants
└────────────────────────────────────────┘
Context Budget: With typical token limits of 4K-128K, budget your context carefully. System instructions: 200-500 tokens. Page context: 1000-3000 tokens. User context: 200-500 tokens. Reserve the rest for the query and response.
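The guideline above can be enforced in code. This is a minimal sketch of a budget allocator; the percentages and caps are assumptions that mirror the numbers above, and should be tuned for your model's actual context window:

```javascript
// Rough per-component token budget for a prompt. The caps (500/500/3000)
// and percentage splits are assumed defaults mirroring the guideline above.
function allocateBudget(contextWindow) {
  const system = Math.min(500, Math.floor(contextWindow * 0.05));
  const user = Math.min(500, Math.floor(contextWindow * 0.05));
  const page = Math.min(3000, Math.floor(contextWindow * 0.4));
  // Whatever remains is left for the query and the model's response.
  const remainder = contextWindow - system - user - page;
  return { system, user, page, remainder };
}
```

For a small 4K window this yields proportional shares; for a 128K window the caps kick in and nearly the entire budget stays free for the query and response.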

System Instructions

Define clear behavior boundaries and output format:

const systemPrompt = `You are a browser extension assistant that helps users understand web content.

## Capabilities
- Summarize articles and web pages
- Answer questions about page content
- Extract key information (dates, names, prices)
- Explain technical concepts in simple terms

## Constraints
- Only use information from the provided page context
- If information isn't in the context, say "This information isn't on the current page"
- Never make up facts or URLs
- Keep responses concise (under 200 words unless asked for detail)

## Output Format
- Use markdown for formatting
- Use bullet points for lists
- Bold key terms and names
- Include relevant quotes from the source with quotation marks`;

Role-Specific Instructions

Tailor instructions for different features:

const promptTemplates = {
  summarize: `Summarize the following article in 3-5 bullet points.
Focus on: main argument, key evidence, and conclusion.
Do not include your opinions.`,

  explain: `Explain the following concept as if to a smart 12-year-old.
Use analogies when helpful.
Avoid jargon; define technical terms.`,

  extract: `Extract the following information from the page:
- Main topic
- Key people mentioned (with their roles)
- Important dates
- Any prices or numbers mentioned

Format as JSON with null for missing information.`,

  qa: `Answer the user's question based ONLY on the provided page content.
Quote relevant passages to support your answer.
If the answer isn't in the content, say so clearly.`
};
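Each template then slots into the overall prompt structure. A small assembly sketch, assuming the templates object above is passed in along with page context from the extraction helpers later in this guide (the Q&A fallback for unknown features is an assumed default):

```javascript
// Combine a feature-specific template with page context into one prompt.
// Unknown features fall back to the qa template (assumed default choice).
function buildFeaturePrompt(templates, feature, pageContext) {
  const task = templates[feature] || templates.qa;
  return `${task}\n\n## Page Content\n${pageContext}`;
}
```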

Page Context Injection

Extract and format relevant page content:

function extractPageContext(maxTokens = 2000) {
  const article = document.querySelector('article, main, [role="main"]');
  const content = article ? article.textContent : document.body.textContent;

  // Clean and truncate: collapse runs of blank lines first, then runs of
  // spaces/tabs (collapsing all whitespace first would erase the newlines
  // before the blank-line rule could ever match)
  const cleaned = content
    .replace(/\n{3,}/g, '\n\n')
    .replace(/[ \t]+/g, ' ')
    .trim();

  // Rough token estimation (4 chars ≈ 1 token)
  const maxChars = maxTokens * 4;
  const truncated = cleaned.slice(0, maxChars);

  return `## Page Content
Title: ${document.title}
URL: ${window.location.href}

${truncated}${cleaned.length > maxChars ? '\n[Content truncated...]' : ''}`;
}

// Structured extraction for specific page types
function extractStructuredContext() {
  const context = {
    title: document.title,
    url: window.location.href,
    type: detectPageType(),
    content: {}
  };

  switch (context.type) {
    case 'article':
      context.content = extractArticle();
      break;
    case 'product':
      context.content = extractProduct();
      break;
    case 'search':
      context.content = extractSearchResults();
      break;
    default:
      context.content = { text: extractPageContext() };
  }

  return context;
}

function extractArticle() {
  return {
    headline: document.querySelector('h1')?.textContent,
    author: document.querySelector('[rel="author"], .author')?.textContent,
    date: document.querySelector('time')?.getAttribute('datetime'),
    body: document.querySelector('article')?.textContent?.slice(0, 8000)
  };
}

function extractProduct() {
  return {
    name: document.querySelector('[itemprop="name"], .product-title')?.textContent,
    price: document.querySelector('[itemprop="price"], .price')?.textContent,
    description: document.querySelector('[itemprop="description"]')?.textContent,
    rating: document.querySelector('[itemprop="ratingValue"]')?.textContent
  };
}

User Selection Handling

When users select text, use it as the primary context:

function buildSelectionPrompt(selection, intent) {
  const context = `## Selected Text
"${selection}"

## Surrounding Context
${getSurroundingContext(selection)}

## Page Info
Title: ${document.title}
URL: ${window.location.href}`;

  const intents = {
    explain: `Explain the selected text. Define any technical terms.`,
    summarize: `Summarize the key points of the selected text.`,
    translate: `Translate the selected text to [target language].`,
    simplify: `Rewrite the selected text in simpler language.`
  };

  return `${context}\n\n## Task\n${intents[intent] || intents.explain}`;
}

function getSurroundingContext(selection) {
  const sel = window.getSelection();
  if (!sel || sel.rangeCount === 0) return '';
  const range = sel.getRangeAt(0);
  const container = range.commonAncestorContainer;

  // Get parent paragraph or section
  let context = container;
  while (context && !['P', 'DIV', 'SECTION', 'ARTICLE'].includes(context.tagName)) {
    context = context.parentElement;
  }

  return context?.textContent?.slice(0, 500) || '';
}

Output Format Control

Guide the AI to produce parseable output:

// JSON output for structured data
const extractionPrompt = `Extract information from this page and return ONLY valid JSON:

{
  "title": "string",
  "author": "string or null",
  "date": "ISO date string or null",
  "summary": "2-3 sentence summary",
  "key_points": ["point 1", "point 2"],
  "entities": {
    "people": ["name1", "name2"],
    "organizations": ["org1"],
    "locations": ["place1"]
  }
}

Do not include any text before or after the JSON.`;

// Parse with error handling
async function extractWithFormat(pageContent) {
  const response = await queryAI(extractionPrompt + '\n\n' + pageContent);

  try {
    // Find JSON in response (in case model added extra text)
    const jsonMatch = response.match(/\{[\s\S]*\}/);
    if (!jsonMatch) throw new Error('No JSON found');

    return JSON.parse(jsonMatch[0]);
  } catch (e) {
    console.error('Parse error:', e);
    return { error: 'Failed to parse response', raw: response };
  }
}
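Even syntactically valid JSON can arrive with missing keys or wrong types. A hedged normalization pass helps downstream UI code rely on the shape promised by the extraction prompt; the field names here match that prompt, nothing beyond it:

```javascript
// Coerce a parsed extraction result into the exact shape the prompt
// requested, filling missing or mistyped fields with null/empty defaults.
function normalizeExtraction(data) {
  const entities = data.entities || {};
  const str = v => (typeof v === 'string' ? v : null);
  const arr = v => (Array.isArray(v) ? v : []);
  return {
    title: str(data.title),
    author: str(data.author),
    date: str(data.date),
    summary: str(data.summary) || '',
    key_points: arr(data.key_points),
    entities: {
      people: arr(entities.people),
      organizations: arr(entities.organizations),
      locations: arr(entities.locations)
    }
  };
}
```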

Markdown Output

For user-facing content, markdown works well:

const markdownPrompt = `Format your response using markdown:
- Use ## for main sections
- Use **bold** for key terms
- Use \`code\` for technical terms
- Use > for quotes from the source
- Use bullet points for lists

Keep formatting clean and readable.`;
Format Consistency: Include 2-3 examples of the desired output format in your prompt. Models are much more consistent when they can pattern-match against examples.

Few-Shot Examples

Include examples to guide model behavior:

const fewShotPrompt = `Classify the sentiment of product reviews.

Example 1:
Review: "This extension is amazing! It saved me hours of work."
Sentiment: positive
Confidence: high

Example 2:
Review: "Doesn't work as advertised. Crashed my browser twice."
Sentiment: negative
Confidence: high

Example 3:
Review: "It's okay. Does what it says but nothing special."
Sentiment: neutral
Confidence: medium

Now classify:
Review: "${userReview}"
Sentiment:`;
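Because the prompt ends with "Sentiment:", the model's completion should begin with the label itself. A small parser for that completion, using only the label and confidence vocabulary from the examples above (unknown labels fall back to null rather than guessing):

```javascript
// Parse a completion like "positive\nConfidence: high" from the
// few-shot classifier above. Unrecognized labels return null.
function parseSentiment(completion) {
  const labels = ['positive', 'negative', 'neutral'];
  const text = completion.trim().toLowerCase();
  const sentiment = labels.find(l => text.startsWith(l)) || null;
  const confMatch = text.match(/confidence:\s*(high|medium|low)/);
  return { sentiment, confidence: confMatch ? confMatch[1] : null };
}
```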

Chain of Thought

For complex tasks, guide the model through steps:

const analysisPrompt = `Analyze this web page for potential issues.

Think through this step by step:

1. CONTENT ANALYSIS
   - What type of content is this?
   - Is the content well-structured?
   - Are there any obvious errors?

2. CREDIBILITY CHECK
   - Is the author identified?
   - Are sources cited?
   - Is the date recent?

3. POTENTIAL ISSUES
   - Any misleading claims?
   - Missing context?
   - Biased language?

4. SUMMARY
   - Overall assessment
   - Key concerns (if any)
   - Recommendations

Provide your analysis for each step, then give a final summary.`;
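The numbered headings also make the response easy to post-process, for example to show only the conclusion up front and the full reasoning on demand. A sketch under the assumption that the model echoes the "4. SUMMARY" heading from the prompt:

```javascript
// Pull the final SUMMARY section out of a step-by-step analysis.
// Assumes the model repeats the "4. SUMMARY" heading; falls back to
// the full response if the heading is absent.
function extractSummary(response) {
  const match = response.match(/4\.\s*SUMMARY\s*([\s\S]*)$/i);
  return match ? match[1].trim() : response.trim();
}
```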

Handling Edge Cases

Build prompts that handle unexpected inputs gracefully:

function buildRobustPrompt(pageContent, userQuery) {
  return `## Instructions
Answer the user's question based on the page content.

If the page content:
- Is empty or too short: Say "This page doesn't have enough content to analyze."
- Is in a foreign language: Attempt to answer if possible, or note the language barrier.
- Doesn't contain relevant info: Say "I couldn't find information about [topic] on this page."
- Contains conflicting info: Note the contradiction and present both perspectives.

Never:
- Make up information not in the source
- Pretend to access external resources
- Give medical, legal, or financial advice

## Page Content
${pageContent || '[No content extracted]'}

## User Question
${userQuery}

## Answer`;
}

Dynamic Prompt Construction

Build prompts based on context and user preferences:

class PromptBuilder {
  constructor() {
    this.components = [];
  }

  addSystemInstructions(type) {
    const instructions = {
      concise: 'Be brief and direct. Maximum 100 words.',
      detailed: 'Provide comprehensive analysis with examples.',
      simple: 'Explain in simple terms, avoid jargon.',
      technical: 'Use precise technical language.'
    };
    this.components.push(instructions[type] || instructions.concise);
    return this;
  }

  addPageContext(content, maxTokens = 2000) {
    const truncated = this.truncate(content, maxTokens);
    this.components.push(`## Page Content\n${truncated}`);
    return this;
  }

  addUserContext(prefs) {
    if (prefs.expertise) {
      this.components.push(`User expertise level: ${prefs.expertise}`);
    }
    if (prefs.language) {
      this.components.push(`Respond in: ${prefs.language}`);
    }
    return this;
  }

  addQuery(query) {
    this.components.push(`## Query\n${query}`);
    return this;
  }

  addOutputFormat(format) {
    const formats = {
      json: 'Respond with valid JSON only.',
      markdown: 'Format response using markdown.',
      plain: 'Respond in plain text, no formatting.',
      bullets: 'Respond with bullet points only.'
    };
    if (formats[format]) this.components.push(formats[format]);
    return this;
  }

  truncate(text, maxTokens) {
    const maxChars = maxTokens * 4;
    return text.length > maxChars
      ? text.slice(0, maxChars) + '\n[Truncated]'
      : text;
  }

  build() {
    return this.components.join('\n\n');
  }
}

// Usage
const prompt = new PromptBuilder()
  .addSystemInstructions('concise')
  .addPageContext(pageContent, 1500)
  .addUserContext({ expertise: 'intermediate', language: 'English' })
  .addQuery('What is the main argument?')
  .addOutputFormat('bullets')
  .build();

Summary

Effective prompts for browser extensions combine clear instructions, rich context, and explicit output formatting. Test your prompts with various page types and edge cases to ensure consistent results.

Key techniques:

  • Structure prompts with clear sections
  • Extract relevant page context efficiently
  • Use few-shot examples for consistency
  • Handle edge cases explicitly
  • Build prompts dynamically based on context
  • Specify output format precisely