• Resolved Radovan

    (@radovand)


    Hi,

    In the case of the ChatGPT bot, certain question texts generated this error: “This model’s maximum context length is 4097 tokens. However, you requested 4893 tokens (3393 in the messages, 1500 in the completion). Please reduce the length of the messages or completion.” I assume the errors occurred for texts that were embedded in indexed pages.

    This error occurred after I had indexed all articles in order to use “Embeddings”. I also wanted to index all pages, but that didn’t work (probably because of the theme I use – Flatsome).

    The error disappeared after I deleted all indexes from the Index Builder.

  • Plugin Author senols

    (@senols)

    Hi Radovan (@radovand),

    The issue likely stems from embedding huge content in the indexed pages, as the model has a maximum token limit of 4097.

    For each post, we recommend embedding around 200-400 words to avoid exceeding the token limit. If you have embedded a large amount of content, you may run into errors like the one you experienced.

    To resolve this issue, you don’t need to delete all your indexed pages; you only need to delete the ones that have very large content in them.

    To help you understand how tokens work in this context, think of a token as a unit of text. The model has three main components: the prompt, the completion, and the maximum tokens allowed. The prompt is the input text, the completion is the generated text, and the maximum tokens represent the total number of tokens allowed for both the prompt and completion combined.
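    As a rough illustration (this is not plugin code, just a sketch assuming OpenAI’s tiktoken library; the cl100k_base encoding name is an assumption that depends on the model in use), you can check whether a prompt will fit the window before sending it:

```python
# Minimal sketch (not part of the plugin): check whether a prompt fits the
# model's context window before sending it.
import tiktoken

MAX_CONTEXT = 4097   # total tokens allowed for prompt + completion
COMPLETION = 1500    # tokens reserved for the generated answer

enc = tiktoken.get_encoding("cl100k_base")  # assumed encoding for the model

def prompt_fits(prompt_text: str) -> bool:
    """Return True if prompt tokens plus the reserved completion fit the window."""
    prompt_tokens = len(enc.encode(prompt_text))
    return prompt_tokens + COMPLETION <= MAX_CONTEXT

# In the error above: 3393 prompt tokens + 1500 completion = 4893 > 4097.
```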

    To resolve this issue, please ensure your embedded content falls within the recommended range of 200-400 words per post.

    I am planning to add a split mechanism that breaks large content into separate chunks while indexing.
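    For illustration only (the plugin’s eventual split mechanism may differ), a minimal word-based chunking approach could look like this:

```python
# Rough illustration only; not the plugin's actual implementation.
# max_words=300 is an assumption chosen to land inside the recommended
# 200-400 word range per embedded item.
def split_into_chunks(text: str, max_words: int = 300) -> list[str]:
    """Split text into consecutive chunks of at most max_words words."""
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

# A 1,000-word article would become four chunks (300, 300, 300, and 100
# words), each small enough to embed without hitting the token limit.
```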

    Let me know if you have any further questions or need assistance.

    Best regards, Senol

    Thread Starter Radovan

    (@radovand)

    Thank you for the explanation.

    Radovan D.

    Hi,

    When can we expect the split mechanism that splits huge content into separate chunks while indexing?

    Please let me know

    Thanks

    Yes, I would love to know the ETA on the splitting feature.

  • The topic ‘Error generated by “Embeddings”’ is closed to new replies.