OpenAI | Prompt Caching in the API

Reducing Costs and Latency with Prompt Caching

OpenAI has introduced Prompt Caching to cut costs and speed up processing for developers who reuse the same context across multiple API calls. When the API recognizes a previously seen prompt prefix (on prompts longer than 1,024 tokens), the cached input tokens are billed at a 50% discount and processed with lower latency. The feature is applied automatically, with no code changes required, to models such as GPT-4o, GPT-4o mini, and o1.
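To benefit from caching, the stable part of the prompt should come first so it forms a reusable prefix, with the variable part at the end. The following is a minimal sketch using the official openai Python SDK; the model name, system prompt, and helper function are illustrative, and the usage.prompt_tokens_details.cached_tokens field (exposed in recent SDK versions) is one way to check how many input tokens were served from cache.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Static prefix: identical on every call, so once the prompt exceeds the
# 1,024-token minimum, this portion can be served from the cache.
SYSTEM_PROMPT = (
    "You are a support assistant for the Acme product line. "  # hypothetical content
    "Follow the policies below when answering customer questions...\n"
    # ...long, unchanging instructions or few-shot examples go here...
)

def ask(question: str) -> str:
    """Send one request; only the user message varies between calls."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},  # cacheable prefix
            {"role": "user", "content": question},         # variable suffix
        ],
    )
    # cached_tokens reports input tokens served from cache (0 on a cold call).
    details = response.usage.prompt_tokens_details
    print(f"cached input tokens: {details.cached_tokens}")
    return response.choices[0].message.content

print(ask("How do I reset my password?"))
```

On the first call the prefix is processed normally; subsequent calls within the cache window that share the same prefix should show a nonzero cached_tokens count and the discounted input pricing.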

Source: Prompt Caching in the API | OpenAI

