• Resolved raybob80

    (@raybob80)


    I’m dealing with two separate issues and looking for help. I added the plugin and selected GPT-4 Turbo, but when I ask the chatbot what version it is using, it responds with 3.0.

    Secondly, when I look at my charges in OpenAI they are for GPT-4 Turbo, and one short conversation cost $0.14. All of this seems odd; am I doing something wrong? When I asked GPT to estimate the cost it gave me a $0.06 estimate assuming one API call, so I’m guessing the multiple calls increased it to $0.14.

    I’m honestly happy with 3.0’s performance; I just want the data updates for product-knowledge purposes.

    I’m thinking of embedding 3.0 with all the product knowledge vs. using GPT-4 Turbo.

  • Plugin Support Val Meow

    (@valwa)

    Hey @raybob80!

    AI Engine simply calls the OpenAI API, so the responses you get don’t depend on the plugin. You can try the API through the OpenAI Playground and you will see the same result when using the GPT-4 model.

    Please note that you can’t ask a GPT model how much a request will cost as it’s not aware of this (unless you are using functions and an interpreter to calculate this cost). You can use tools such as OpenAI Tokenizer to help you with this. Concerning how AI Engine deals with cost, you can learn more by reading this documentation: Cost Usage Calculation – AI Engine | Meow Apps
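    For reference, here is a minimal sketch of that kind of token-based estimate using the tiktoken library. The per-token prices below are illustrative assumptions, not the plugin’s figures; check OpenAI’s pricing page for current rates:

    ```python
    # Rough per-request cost estimate by counting tokens (same idea as the
    # OpenAI Tokenizer page). Prices below are illustrative assumptions.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4 / GPT-3.5 models

    prompt = "You are a product assistant. ... (system + user messages)"
    reply = "Here is what I can tell you about the product ..."

    input_tokens = len(enc.encode(prompt))
    output_tokens = len(enc.encode(reply))

    # Assumed example rates: $0.01 per 1K input tokens, $0.03 per 1K output tokens
    cost = input_tokens / 1000 * 0.01 + output_tokens / 1000 * 0.03
    print(f"{input_tokens} input / {output_tokens} output tokens ≈ ${cost:.4f}")
    ```

    Also keep in mind that each new message in a chatbot conversation usually resends the earlier messages as context, so a full conversation costs more than a single one-off call.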

    Thread Starter raybob80

    (@raybob80)

    Thanks for the help. I understand the cost calculation could be wonky, and I will check the OpenAI Tokenizer tool against just asking GPT-4. However, when I asked, it just counted tokens and assumed only one API call, which I felt was a reasonable method.

    Main concern:
    Are the responses in the Playground limited to GPT-3? I’m mainly confused about why I’m getting GPT-3 responses while paying for GPT-4, and how I can fix that. GPT-3 won’t work for my purposes.

    For clarity, the Playground charged me for GPT-4, but I was receiving GPT-3 responses.

    Thanks!

    Thread Starter raybob80

    (@raybob80)

    My goal is GPT-4 responses; is there something I need to do on my end with my API key? I’m a Product Manager, so I know a little about tech and thought the API key might be the issue. I did some research and it seems the key doesn’t regulate the model used, and I don’t know how to resolve the issue to get GPT-4 responses. It seems like I’m missing something simple?
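    For what it’s worth, from my research it looks like the model is just a parameter on each API request rather than something tied to the key. A minimal sketch with the openai Python package (v1+; model name is only an example) looks like this:

    ```python
    # Sketch: the API key only authenticates; the model is chosen per request.
    # Assumes the openai Python package v1+ and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY automatically

    response = client.chat.completions.create(
        model="gpt-4-turbo-preview",  # example name; use whatever model your account offers
        messages=[{"role": "user", "content": "Summarize our product FAQ."}],
    )
    print(response.choices[0].message.content)
    ```

    So whatever model the plugin sends in that field should be what OpenAI bills for, which is why the GPT-3-style answers confuse me.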

    Thread Starter raybob80

    (@raybob80)

    Cool, I think y’all made some changes: it seems like the charges are now for 3.5, and the Playground won’t answer what model is being used, which I think is fair. Do y’all plan on updating the settings so I can use GPT-4 Turbo? Otherwise I think I have to configure my own plugin, and I was so stoked to use this. I’m guessing I’m an outlier looking to pay more for something that might not be needed by most people using a chatbot.

  • The topic ‘API Costs’ is closed to new replies.