@Michael_Markert Working with Groq also.
Hi @Michael_Markert The error null indicates a failure to connect to the LLM service. The endpoint in your screenshot is /generate; can you check with the chat completions endpoint instead?
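For reference, a chat completions call generally looks like this (a sketch assuming Ollama's OpenAI-compatible endpoint on its default port; the model name is illustrative). Unlike /generate, which takes a single prompt string, chat completions takes a messages array, which is the shape the extension sends:

curl -X POST http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "Hello"}]}'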
Hi @psm Thanks for sharing the browser console details. The issue is likely caused by a delay in loading the extension files. I have updated the code to handle this case; can you deploy the attached package and try it out?
openrefine-llm-extension-0.1.0.zip (159.2 KB)
@Sunil_Natraj Hello ... I've replaced the older version with this new one. Still no display of LLM providers:
Network tab: 404 error for llm-provider
And console:
Please note that after installation the file llm-provider-Item.html is located at:
/home/psm/.local/share/openrefine/3.x/extensions/openrefine-llm-extension-0.1.0/llm-extension/scripts/dialogs and I'm starting OpenRefine with ./refine -i 0.0.0.0 -m 20480m -d /home/psm/.local/share/openrefine/3.x/
Thanks for this excellent extension.
Thank you @psm Appreciate your quick response. One additional request: can you check the URL of the GET call that loads llm-provider-Item.html and manage-llm.html? Both files are in the same path, so it is odd that one of them fails to load.
Thanks. I got it now.
The script is calling: http://localhost:3333/extension/llm-extension/scripts/dialogs/llm-provider-item.html
But the file name is llm-provider-Item.html
The moment I change the filename to llm-provider-item.html, it displays correctly.
Thanks for the clue.
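To illustrate the cause: on a case-sensitive filesystem like Linux's, llm-provider-item.html and llm-provider-Item.html are different paths, so the GET keeps returning 404 until the name matches exactly. A quick check (URL from above; the status-code probe is just illustrative):

curl -s -o /dev/null -w "%{http_code}\n" \
  http://localhost:3333/extension/llm-extension/scripts/dialogs/llm-provider-item.html
# prints 404 while the file on disk is llm-provider-Item.html, 200 after renaming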
Thank you @psm, I will fix the file name as well. Thanks for all your support.
I tried it with a v1/chat/completion endpoint that also works via curl, and I still get the error. I will try it on another computer.
Can you enable logging and send me the log file?
@archilecteur @Michael_Markert @psm
The LLM definition has been extended to support Top-P and Seed values. I have also included a help page with suggested values by use case. The updated extension package is attached for testing.
openrefine-llm-extension-0.1.0.zip (160.3 KB)
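For testing the new parameters, a request might look like this (a sketch using OpenAI-style parameter names; endpoint and model are illustrative). With temperature 0.0 and a fixed seed, repeated calls should return the same completion:

curl -X POST http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3",
        "messages": [{"role": "user", "content": "Normalize this date: 3 Jan 96"}],
        "temperature": 0.0,
        "top_p": 1.0,
        "seed": 42
      }'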
@Sunil_Natraj The explanations of how to combine 'Temperature' and 'Top-P' are very helpful. Could you please explain how the 'Seed' value controls randomness?
Hi @psm I have updated the help guide. In short, a fixed seed makes sampling reproducible: the same prompt with the same parameters should return the same output. Let me know if you need more information.
Thanks. It's quite comprehensive now.
Regards
Hi, I tried it on another Mac and still get the same issue; both Macs are on OpenRefine 3.9 now. The browser console shows no issues. I had a look at the POST request for llm-connect in the browser, and its body is "providerLabel=ollama&subCommand=test&csrf_token=vrsXJO6l7EvyiYm1N1MMLILmgqgMCxHn". After looking at manage-llm.js I would expect all the form params in the request; am I wrong?
Best
Michael
Hi @Michael_Markert I think there is a bug in the flow; can you first Save and then do the Test connection? I will fix the code to handle testing without saving.
Could you please try this directly without testing? This test failure happened to me once, but it still worked with "Extract using AI."
Best
You are right, @Sunil_Natraj, I saved without testing and everything is up and running now! Love it already and will show the extension during my next OpenRefine workshop(s).
@Sunil_Natraj Today I was trying to connect my RAG system to this plugin. The RAG service responds to an API call like this:
curl -X POST http://localhost:5000/api/complete \
  -H "Content-Type: application/json" \
  -d '{"model": "meta/llama-3.3-70b-instruct", "message": "what is Pragyan in the context of Chandrayaan-3?", "temperature": 0.0, "max_tokens": 100}'
Pragyan is the rover of Chandrayaan-3, a lunar mission by the Indian Space Research Organisation. It is a six-wheeled robotic vehicle designed to explore the lunar surface, conducting experiments and gathering data on the lunar geology, composition, and atmosphere. The name Pragyan means "wisdom" in Sanskrit. The rover is equipped with instruments to study the lunar regolith, rocks, and soil, and is capable of navigating through the lunar terrain, and transmitting data back.
The RAG service has all the details (model, API endpoint, keys, etc.) in its .env file, though I repeated these in the data elements the plugin needs.
Testing and Preview generation both show this error:
LLM request failed. Status Code : 400. Message : {"error":"No message provided."}
Any clue?
Hi, the AI extension supports the chat completions API request/response model. The RAG API endpoint has a different request/response model, which explains the failure.
RAG supports a Q&A flow; how do you plan to use a RAG service in the OpenRefine flow?
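For comparison, the request shape the extension sends looks roughly like this (a sketch assuming an OpenAI-style chat completions schema; the path on your host is illustrative). Your RAG endpoint expects a single "message" string, so a payload carrying a "messages" array arrives without the field it checks for, hence the 400 "No message provided":

curl -X POST http://localhost:5000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta/llama-3.3-70b-instruct",
        "messages": [{"role": "user", "content": "what is Pragyan in the context of Chandrayaan-3?"}],
        "temperature": 0.0,
        "max_tokens": 100
      }'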
I have created the V0.1 release of the extension. I really appreciate the feedback from @psm @Michael_Markert @archilecteur.