Server.cpp: Documentation of JSON return value of /completion endpoint #3632

Merged 2 commits on Oct 17, 2023
examples/server/README.md (42 changes: 36 additions & 6 deletions)
@@ -106,25 +106,25 @@ node index.js

## API Endpoints

- **POST** `/completion`: Given a `prompt`, it returns the predicted completion.

*Options:*

`prompt`: Provide the prompt for this completion as a string or as an array of strings or numbers representing tokens. Internally, the prompt is compared to the previous completion and only the "unseen" suffix is evaluated. If the prompt is a string or an array with the first element given as a string, a `bos` token is inserted in the front like `main` does.

`temperature`: Adjust the randomness of the generated text (default: 0.8).

`top_k`: Limit the next token selection to the K most probable tokens (default: 40).

`top_p`: Limit the next token selection to a subset of tokens with a cumulative probability above a threshold P (default: 0.95).

    `n_predict`: Set the maximum number of tokens to predict when generating text. **Note:** May exceed the set limit slightly if the last token is a partial multibyte character. When 0, no tokens will be generated but the prompt is evaluated into the cache. (default: -1, -1 = infinity).

    `n_keep`: Specify the number of tokens from the prompt to retain when the context size is exceeded and tokens need to be discarded.
    By default, this value is set to 0 (meaning no tokens are kept). Use `-1` to retain all tokens from the prompt.

    `stream`: Allows receiving each predicted token in real time instead of waiting for the completion to finish. To enable this, set to `true`.

`stop`: Specify a JSON array of stopping strings.
These words will not be included in the completion, so make sure to add them to the prompt for the next iteration (default: []).

@@ -158,6 +158,36 @@ node index.js

`n_probs`: If greater than 0, the response also contains the probabilities of top N tokens for each generated token (default: 0)
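
    For illustration only (this sketch is not part of the diff above), here is a minimal non-streaming request using a few of the documented options; the server address, prompt, and option values are assumptions:

    ```js
    // Minimal sketch of a non-streaming /completion request. The server address,
    // prompt, and option values below are illustrative assumptions.
    async function main() {
      const response = await fetch("http://localhost:8080/completion", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          prompt: "Building a website can be done in 10 simple steps:",
          temperature: 0.8, // default
          top_k: 40,        // default
          top_p: 0.95,      // default
          n_predict: 128,   // generate at most 128 tokens
          n_keep: -1,       // keep the whole prompt if the context overflows
          stop: ["\n###"],  // stopping strings are excluded from `content`
          stream: false,
        }),
      });
      const result = await response.json();
      console.log(result.content);
    }

    main();
    ```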

*Result JSON:*

    Note: When using streaming mode (`stream`), only `content` and `stop` will be returned until the end of the completion.

    `content`: Completion result as a string (excluding `stopping_word`, if any). In streaming mode, this will contain the next token as a string.

    `stop`: Boolean for use with `stream` to check whether the generation has stopped (note: this is not related to the `stop` array of stopping strings from the input options)

`generation_settings`: The provided options above excluding `prompt` but including `n_ctx`, `model`

`model`: The path to the model loaded with `-m`

`prompt`: The provided `prompt`

`stopped_eos`: Indicating whether the completion has stopped because it encountered the EOS token

`stopped_limit`: Indicating whether the completion stopped because `n_predict` tokens were generated before stop words or EOS was encountered

`stopped_word`: Indicating whether the completion stopped due to encountering a stopping word from `stop` JSON array provided

`stopping_word`: The stopping word encountered which stopped the generation (or "" if not stopped due to a stopping word)

`timings`: Hash of timing information about the completion such as the number of tokens `predicted_per_second`

`tokens_cached`: Number of tokens from the prompt which could be re-used from previous completion (`n_past`)

`tokens_evaluated`: Number of tokens evaluated in total from the prompt

    `truncated`: Boolean indicating if the context size was exceeded during generation, i.e. the number of tokens provided in the prompt (`tokens_evaluated`) plus the tokens generated (`tokens_predicted`) exceeded the context size (`n_ctx`)
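
    As an illustration (also not part of this diff), a small helper that reads the fields above from a parsed non-streaming response; the handling logic is a sketch, not taken from the server code:

    ```js
    // Sketch of interpreting the documented result fields of a non-streaming
    // response. `result` is the parsed JSON body; the messages below are illustrative.
    function describeResult(result) {
      if (result.truncated) {
        console.warn("Context size (n_ctx) was exceeded during generation.");
      }
      if (result.stopped_word) {
        console.log(`Stopped on stopping word: ${JSON.stringify(result.stopping_word)}`);
      } else if (result.stopped_eos) {
        console.log("Stopped on the EOS token.");
      } else if (result.stopped_limit) {
        console.log("Stopped after generating n_predict tokens.");
      }
      console.log(`Evaluated ${result.tokens_evaluated} prompt tokens, ` +
                  `${result.tokens_cached} reused from the previous completion.`);
      console.log("Timings:", result.timings);
    }
    ```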

- **POST** `/tokenize`: Tokenize a given text.

*Options:*