📖 Step 9: AI/LLM (248 / 291)

Context Window

📖One-line summary

The maximum number of tokens a model can process at once.

💡Easy explanation

The maximum amount of text an AI can read at once. Like the number of pages you can lay open on a desk.

Example

Token usage: 60K / 200K

If you exceed the window, the oldest content is dropped first.
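The drop-oldest behavior above can be sketched in a few lines. This is a minimal illustration, not a real tokenizer: it assumes a rough 4-characters-per-token estimate, whereas real APIs count tokens with a model-specific tokenizer.

```python
def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_to_window(messages: list[str], max_tokens: int) -> list[str]:
    # Drop the oldest messages until the total fits inside the window.
    kept = list(messages)
    while kept and sum(estimate_tokens(m) for m in kept) > max_tokens:
        kept.pop(0)  # oldest content drops first
    return kept

history = ["old note " * 50, "recent question", "latest answer"]
print(trim_to_window(history, max_tokens=50))  # the long old note is dropped
```

Note that the trimming is silent: the model never sees the dropped messages, which is why long conversations can "forget" early instructions.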

Vibe coding prompt examples

>_

I'm building a manual-PDF Q&A on a 200K-context model. Recommend chunk size and how many chunks to retrieve.
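For the chunking question above, here is a minimal sketch of fixed-size chunking with overlap; the sizes are illustrative defaults, not recommendations for any particular model.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    # Slide a window across the text; the overlap keeps sentences from
    # being cut off cold at chunk boundaries.
    step = chunk_size - overlap
    return [text[start:start + chunk_size] for start in range(0, len(text), step)]
```

Larger chunks preserve more local context per retrieved passage; smaller chunks let retrieval pinpoint answers more precisely, so the right size depends on the documents and the questions.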

>_

Suggest a prompt structure that mitigates the 'lost in the middle' effect on long contexts.

>_

Write pseudocode for monitoring token usage and auto-summarizing/replacing old messages once 80% of the context window is used.
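One possible shape for the monitoring prompt above, as runnable Python rather than pseudocode. The 80% threshold comes from the prompt; the `summarize()` stub is a placeholder assumption standing in for a real LLM call.

```python
CONTEXT_WINDOW = 200_000
THRESHOLD = 0.8  # compact once 80% of the window is in use

def estimate_tokens(text: str) -> int:
    # Rough heuristic (~4 chars/token); a real system would use a tokenizer.
    return max(1, len(text) // 4)

def summarize(messages: list[str]) -> str:
    # Placeholder: a real implementation would call an LLM here.
    return f"[summary of {len(messages)} earlier messages]"

def compact_history(messages: list[str]) -> list[str]:
    used = sum(estimate_tokens(m) for m in messages)
    if used < CONTEXT_WINDOW * THRESHOLD:
        return messages  # still under budget, nothing to do
    # Replace the oldest half with a summary; keep recent turns verbatim.
    half = len(messages) // 2
    return [summarize(messages[:half])] + messages[half:]
```

Calling `compact_history` after every turn keeps the conversation under budget while preserving the most recent exchanges in full.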

Try these prompts in your AI coding assistant!