
Commit 1112110

Update pipelines.md api docs (#1256)
1 parent 29e1679 commit 1112110

File tree

1 file changed: +10 additions, −9 deletions


docs/source/pipelines.md

Lines changed: 10 additions & 9 deletions
````diff
@@ -58,7 +58,7 @@ The `pipeline()` function is a great way to quickly use a pretrained model for i
 
 <!-- TODO: Replace 'Xenova/whisper-small.en' with 'openai/whisper-small.en' -->
 ```javascript
-// Allocate a pipeline for Automatic Speech Recognition
+// Create a pipeline for Automatic Speech Recognition
 const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-small.en');
 
 // Transcribe an audio file, loaded from a URL.
@@ -71,19 +71,20 @@ const result = await transcriber('https://huggingface.co/datasets/Narsil/asr_dum
 ### Loading
 
 We offer a variety of options to control how models are loaded from the Hugging Face Hub (or locally).
-By default, the *quantized* version of the model is used, which is smaller and faster, but usually less accurate.
-To override this behaviour (i.e., use the unquantized model), you can use a custom `PretrainedOptions` object
-as the third parameter to the `pipeline` function:
+By default, when running in-browser, a *quantized* version of the model is used, which is smaller and faster,
+but usually less accurate. To override this behaviour (i.e., use the unquantized model), you can use a custom
+`PretrainedOptions` object as the third parameter to the `pipeline` function:
 
 ```javascript
-// Allocation a pipeline for feature extraction, using the unquantized model
+// Create a pipeline for feature extraction, using the full-precision model (fp32)
 const pipe = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2', {
-  quantized: false,
+  dtype: "fp32",
 });
 ```
+Check out the section on [quantization](./guides/dtypes) to learn more.
 
 You can also specify which revision of the model to use, by passing a `revision` parameter.
-Since the Hugging Face Hub uses a git-based versioning system, you can use any valid git revision specifier (e.g., branch name or commit hash)
+Since the Hugging Face Hub uses a git-based versioning system, you can use any valid git revision specifier (e.g., branch name or commit hash).
 
 ```javascript
 const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-tiny.en', {
@@ -99,7 +100,7 @@ Many pipelines have additional options that you can specify. For example, when u
 
 <!-- TODO: Replace 'Xenova/nllb-200-distilled-600M' with 'facebook/nllb-200-distilled-600M' -->
 ```javascript
-// Allocation a pipeline for translation
+// Create a pipeline for translation
 const translator = await pipeline('translation', 'Xenova/nllb-200-distilled-600M');
 
 // Translate from English to Greek
@@ -124,7 +125,7 @@ For example, to generate a poem using `LaMini-Flan-T5-783M`, you can do:
 <!-- TODO: Replace 'Xenova/LaMini-Flan-T5-783M' with 'MBZUAI/LaMini-Flan-T5-783M' -->
 
 ```javascript
-// Allocate a pipeline for text2text-generation
+// Create a pipeline for text2text-generation
 const poet = await pipeline('text2text-generation', 'Xenova/LaMini-Flan-T5-783M');
 const result = await poet('Write me a love poem about cheese.', {
   max_new_tokens: 200,
````
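The feature-extraction change in this commit swaps the boolean `quantized` option for a string-valued `dtype` option (`"fp32"` for full precision). A minimal sketch of that migration in plain JavaScript, assuming a hypothetical helper `normalizeOptions` (not part of the transformers.js API) and the `'q8'`/`'fp32'` dtype names used by the library:

```javascript
// Hypothetical helper illustrating the option migration in this diff:
// the legacy boolean `quantized` flag maps onto the newer string `dtype` option.
function normalizeOptions(options = {}) {
  const { quantized, ...rest } = options;
  if (rest.dtype !== undefined || quantized === undefined) {
    // `dtype` already set, or nothing to migrate: pass options through unchanged.
    return rest;
  }
  // quantized: false -> full-precision weights; quantized: true -> 8-bit quantized.
  return { ...rest, dtype: quantized ? 'q8' : 'fp32' };
}

console.log(normalizeOptions({ quantized: false }));
// → { dtype: 'fp32' }
console.log(normalizeOptions({ dtype: 'fp16', revision: 'main' }));
// → { dtype: 'fp16', revision: 'main' }
```

Other options such as `revision` are unaffected by the migration and pass through unchanged, which matches the `whisper-tiny.en` example left untouched in the diff.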
