Is OpenAI TPM per model?

Yeah, TPM (tokens-per-minute) limits are set per model and also depend on your OpenAI usage tier. For example, GPT-4 has lower TPM limits than GPT-3.5 unless you're on a higher tier. So each model has its own cap, and if you're using multiple models, usage is tracked separately for each. If you're using the API, you can check your limits in the OpenAI dashboard, and each API response also includes rate-limit headers showing your cap and remaining quota for the model you just called.
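As a quick sketch, here's one way to pull the per-model TPM/RPM info out of those response headers. The header names are the ones OpenAI documents; the sample values below are made up for illustration, not real account limits:

```python
# Sketch: read OpenAI's rate-limit headers from an API response.
# Header names follow OpenAI's docs; the sample values are hypothetical.

def parse_rate_limits(headers: dict) -> dict:
    """Extract TPM/RPM caps and remaining quota from response headers."""
    keys = [
        "x-ratelimit-limit-tokens",        # TPM cap for the model you called
        "x-ratelimit-remaining-tokens",    # tokens left in the current window
        "x-ratelimit-limit-requests",      # RPM cap
        "x-ratelimit-remaining-requests",  # requests left in the current window
    ]
    return {k: headers.get(k) for k in keys}

# Example headers (hypothetical values):
sample = {
    "x-ratelimit-limit-tokens": "90000",
    "x-ratelimit-remaining-tokens": "89500",
    "x-ratelimit-limit-requests": "3500",
    "x-ratelimit-remaining-requests": "3499",
}
print(parse_rate_limits(sample))
```

Since the headers come back per request, calling two different models and comparing `x-ratelimit-limit-tokens` is an easy way to confirm the caps really are per model on your account.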