Playground for developing ChatGPT plugins
Multi-model and multi-vendor playground for developing ChatGPT plugins (for OpenAI and other LLMs).
Inspired by the paper Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks.
When performing a completion without any plugins selected, your API key is used directly against the relevant LLM vendor's HTTP API (OpenAI, for example). No other backend services are involved.
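As a rough sketch of what a no-plugin completion amounts to, assuming OpenAI's chat completions endpoint (the endpoint, model name, and helper below are illustrative, not the playground's actual code):

```typescript
// Hypothetical helper: builds the HTTP request the browser sends
// straight to the vendor's API when no plugins are selected.
interface ChatRequest {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
}

function buildChatRequest(apiKey: string, prompt: string): ChatRequest {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`, // your key, used directly
      },
      body: JSON.stringify({
        model: "gpt-3.5-turbo",
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}

// Usage (browser or Node 18+):
// const { url, init } = buildChatRequest(myKey, "Hello!");
// const completion = await fetch(url, init).then((r) => r.json());
```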
When one or more plugins are selected, our IntentSDK attempts to classify the user's intention in order to pick one or more plugins, and defines how each plugin will be used. We then fetch the plugin's OpenAPI definition, and IntentSDK dynamically figures out how to call the plugin's API. It then executes an HTTP call to the API's server; if the plugin is marked as CORS-protected (meaning the service owner did not allow HTTP calls from arbitrary websites), we funnel the call through a simple proxy server to overcome this. You can start your own local instance of this proxy. Finally, all the collected information is assembled as augmented context, and a final prompt combining the user's initial prompt with the plugin's generated context is sent to the LLM for completion. The plugin-augmented retrieval operation may be chained if intended.
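The tail of that pipeline can be sketched with two small helpers. Both are assumptions for illustration, not the actual IntentSDK internals: the proxy's query-parameter scheme and the prompt template are invented here.

```typescript
// Hypothetical: route a CORS-protected plugin API call through the proxy.
// The `?url=` parameter scheme is an assumption about the proxy's interface.
function proxiedUrl(proxyBase: string, target: string): string {
  return `${proxyBase}?url=${encodeURIComponent(target)}`;
}

// Hypothetical: combine the user's initial prompt with the context produced
// by the plugin call into the final prompt sent to the LLM.
function composeFinalPrompt(userPrompt: string, pluginContext: string): string {
  return [
    "Use the following context to answer the question.",
    `Context:\n${pluginContext}`,
    `Question: ${userPrompt}`,
  ].join("\n\n");
}

// Usage:
// const url = proxiedUrl("http://localhost:3020/proxy", "https://plugin.example/api/search?q=cats");
// const ctx = await fetch(url).then((r) => r.text());
// const finalPrompt = composeFinalPrompt("What do cats eat?", ctx);
```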
v0.1 (current):
It is recommended to create a dedicated key so it can be revoked easily later.
$ yarn install
$ yarn start
http://0.0.0.0:3010/
$ cd functions && yarn serve
Set `localStorage.isLocal = true` in the browser console to let the frontend request your local function.

Contributions are welcome! Read the contribution guidelines first, then fork this repository and submit a Pull Request.
Please use GitHub Issues to contact us.