

Price
Model | Input | Output | Context support |
---|---|---|---|
GPT-3.5-TURBO | $0.0015 / 1k tokens | $0.003 / 1k tokens | Yes |
GPT-4-TURBO | $0.015 / 1k tokens | $0.045 / 1k tokens | Yes |
GPT-4 | $0.045 / 1k tokens | $0.09 / 1k tokens | Yes |
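
As a rough illustration of how these per-token prices translate into an actual bill, the sketch below estimates the cost of a single request from its input and output token counts. The price constants simply mirror the table above, and the function name is illustrative only, not part of our API.

```python
# Per-1k-token prices from the table above; adjust if the pricing changes.
PRICES_PER_1K = {
    "gpt-3.5-turbo": {"input": 0.0015, "output": 0.003},
    "gpt-4-turbo":   {"input": 0.015,  "output": 0.045},
    "gpt-4":         {"input": 0.045,  "output": 0.09},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    p = PRICES_PER_1K[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

# Example: a 1,200-token prompt with an 800-token reply on GPT-3.5-TURBO
# costs about 1.2 * 0.0015 + 0.8 * 0.003 = $0.0042.
print(estimate_cost("gpt-3.5-turbo", 1200, 800))
```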
Why can't I receive the verification code by email?
A common cause is a mistyped email address, so double-check it carefully. Also check your spam folder. If you still have not received the verification code after 5 minutes, your network may be unable to reach our API server; try a different network. If you are using a Microsoft mailbox, users around the world report that it silently drops emails without any notification, so we recommend avoiding it.
Token length of models
Each model has a maximum token length. Because the previous conversation is sent to the model as part of the prompt, this limit determines how much context the model can remember as well as the maximum length of a single exchange. Currently, GPT-3.5-TURBO supports 16k tokens and GPT-4-TURBO supports 128k tokens, while all other models are limited to 8k tokens.
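
To see how this limit constrains conversation memory in practice, here is a minimal sketch that counts the tokens in a conversation and drops the oldest messages until the history fits within the model's limit. It assumes the tiktoken library, which is not mentioned above; the helper name and trimming strategy are illustrative only.

```python
import tiktoken

def trim_history(messages: list[dict], model: str = "gpt-3.5-turbo",
                 max_tokens: int = 16_000) -> list[dict]:
    """Drop the oldest messages until the conversation fits the token limit.

    Per the text above: 16k for GPT-3.5-TURBO, 128k for GPT-4-TURBO,
    8k for other models.
    """
    enc = tiktoken.encoding_for_model(model)

    def count(msgs: list[dict]) -> int:
        # Approximate count: sum of tokens in each message's text content.
        return sum(len(enc.encode(m["content"])) for m in msgs)

    trimmed = list(messages)
    # Remove the oldest message while the history is still too long,
    # always keeping at least the most recent one.
    while count(trimmed) > max_tokens and len(trimmed) > 1:
        trimmed.pop(0)
    return trimmed
```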
Context
GPT-3.5-TURBO, GPT-4, and GPT-4-TURBO support context, which means they remember the content of the current conversation.
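
Concretely, "supporting context" means the earlier turns of the conversation are resent with every request. Below is a minimal sketch using the OpenAI Python SDK (not necessarily how this project implements it); the history contents are illustrative only.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The conversation so far. Every request includes the full history,
# which is how the model "remembers" earlier turns.
history = [
    {"role": "user", "content": "My name is Ada."},
    {"role": "assistant", "content": "Nice to meet you, Ada!"},
    {"role": "user", "content": "What is my name?"},
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=history,
)
print(response.choices[0].message.content)  # the model can answer "Ada"
```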