
Commit a274599

fine tune contents (#2783)
1 parent ce4e8d2 commit a274599

File tree

5 files changed (+8 −8 lines changed)

llm/llama3/xpu/_sources/index.md.txt

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
 # Intel® Extension for PyTorch* Large Language Model (LLM) Feature Get Started For Llama 3 models

-Intel® Extension for PyTorch* provides dedicated optimization for running Llama 3 models on Intel® Core™ Ultra Processors with Intel® Arc™ Graphics, including weight-only quantization (WOQ), Rotary Position Embedding fusion, etc. You are welcomed to have a try with these optimizations on Intel® Core™ Ultra Processors with Intel® Arc™ Graphics.
+Intel® Extension for PyTorch* provides dedicated optimization for running Llama 3 models on Intel® Core™ Ultra Processors with Intel® Arc™ Graphics, including weight-only quantization (WOQ), Rotary Position Embedding fusion, etc. You are welcomed to have a try with these optimizations on Intel® Core™ Ultra Processors with Intel® Arc™ Graphics. This document shows how to run Llama 3 with a preview version of Intel® Extension for PyTorch*.

 # 1. Environment Setup

@@ -126,4 +126,4 @@ python run_generation_gpu_woq_for_llama.py --model ${PATH/TO/MODEL} --accuracy -
 ```

 ## Miscellaneous Tips
-Intel® Extension for PyTorch* also provides dedicated optimization for many other Large Language Models (LLM), which covers a set of data types for supporting various scenarios. For more details, please check [Large Language Models (LLM) Optimizations Overview](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/llm.html).
+Intel® Extension for PyTorch* also provides dedicated optimization for many other Large Language Models (LLM), which covers a set of data types for supporting various scenarios. For more details, please check [Large Language Models (LLM) Optimizations Overview](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/llm.html). To replicate Llama 3 performance numbers on Intel ARC A770, please take advantage of [IPEX-LLM](https://github.com/intel-analytics/ipex-llm).
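
The paragraphs changed above describe IPEX's WOQ and fused-kernel optimizations for Llama 3 on Intel GPUs; the documented workflow itself runs the run_generation_gpu_woq_for_llama.py script referenced in the hunk header. As a rough illustration only, here is a minimal Python sketch assuming the ipex.llm.optimize frontend described in the linked LLM Optimizations Overview; the model ID, dtype, and device arguments below are placeholders, not taken from this commit.

```python
# Illustrative sketch only -- not the commit's own script.
# Assumes intel_extension_for_pytorch built with XPU support and the
# ipex.llm.optimize frontend from the LLM Optimizations Overview.
import torch
import intel_extension_for_pytorch as ipex  # registers the "xpu" device
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # placeholder model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
model = model.eval().to("xpu")

# Apply IPEX's LLM-specific optimizations (e.g. fused Rotary Position Embedding).
# WOQ INT4 would additionally pass a quantization config; see the overview page.
model = ipex.llm.optimize(model, dtype=torch.float16, device="xpu")

prompt = "What is weight-only quantization?"
inputs = tokenizer(prompt, return_tensors="pt").to("xpu")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```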

llm/llama3/xpu/genindex.html

Lines changed: 1 addition & 1 deletion
@@ -95,7 +95,7 @@ <h1 id="index">Index</h1>
 Built with <a href="https://www.sphinx-doc.org/">Sphinx</a> using a
 <a href="https://github.com/readthedocs/sphinx_rtd_theme">theme</a>
 provided by <a href="https://readthedocs.org">Read the Docs</a>.
-<jinja2.runtime.BlockReference object at 0x7f1198cc90f0>
+<jinja2.runtime.BlockReference object at 0x7f1deff8ce80>
 <p></p><div><a href='https://www.intel.com/content/www/us/en/privacy/intel-cookie-notice.html' data-cookie-notice='true'>Cookies</a> <a href='https://www.intel.com/content/www/us/en/privacy/intel-privacy-notice.html'>| Privacy</a> <a href="/#" data-wap_ref="dns" id="wap_dns"><small>| Your Privacy Choices</small></a> <a href=https://www.intel.com/content/www/us/en/privacy/privacy-residents-certain-states.html data-wap_ref="nac" id="wap_nac"><small>| Notice at Collection</small></a> </div> <p></p> <div>&copy; Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others. No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document, with the sole exception that code included in this document is licensed subject to the Zero-Clause BSD open source license (OBSD), <a href='http://opensource.org/licenses/0BSD'>http://opensource.org/licenses/0BSD</a>. </div>

llm/llama3/xpu/index.html

Lines changed: 3 additions & 3 deletions
@@ -95,7 +95,7 @@
 <section id="intel-extension-for-pytorch-large-language-model-llm-feature-get-started-for-llama-3-models">
 <h1>Intel® Extension for PyTorch* Large Language Model (LLM) Feature Get Started For Llama 3 models<a class="headerlink" href="#intel-extension-for-pytorch-large-language-model-llm-feature-get-started-for-llama-3-models" title="Link to this heading"></a></h1>
-<p>Intel® Extension for PyTorch* provides dedicated optimization for running Llama 3 models on Intel® Core™ Ultra Processors with Intel® Arc™ Graphics, including weight-only quantization (WOQ), Rotary Position Embedding fusion, etc. You are welcomed to have a try with these optimizations on Intel® Core™ Ultra Processors with Intel® Arc™ Graphics.</p>
+<p>Intel® Extension for PyTorch* provides dedicated optimization for running Llama 3 models on Intel® Core™ Ultra Processors with Intel® Arc™ Graphics, including weight-only quantization (WOQ), Rotary Position Embedding fusion, etc. You are welcomed to have a try with these optimizations on Intel® Core™ Ultra Processors with Intel® Arc™ Graphics. This document shows how to run Llama 3 with a preview version of Intel® Extension for PyTorch*.</p>
 </section>
 <section id="environment-setup">
 <h1>1. Environment Setup<a class="headerlink" href="#environment-setup" title="Link to this heading"></a></h1>
@@ -246,7 +246,7 @@ <h3>2.1.3 Validate Llama 3 WOQ INT4 Accuracy on Windows 11 Home<a class="headerl
 </section>
 <section id="miscellaneous-tips">
 <h2>Miscellaneous Tips<a class="headerlink" href="#miscellaneous-tips" title="Link to this heading"></a></h2>
-<p>Intel® Extension for PyTorch* also provides dedicated optimization for many other Large Language Models (LLM), which covers a set of data types for supporting various scenarios. For more details, please check <a class="reference external" href="https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/llm.html">Large Language Models (LLM) Optimizations Overview</a>.</p>
+<p>Intel® Extension for PyTorch* also provides dedicated optimization for many other Large Language Models (LLM), which covers a set of data types for supporting various scenarios. For more details, please check <a class="reference external" href="https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/llm.html">Large Language Models (LLM) Optimizations Overview</a>. To replicate Llama 3 performance numbers on Intel ARC A770, please take advantage of <a class="reference external" href="https://github.com/intel-analytics/ipex-llm">IPEX-LLM</a>.</p>
 </section>
 </section>

@@ -264,7 +264,7 @@ <h2>Miscellaneous Tips<a class="headerlink" href="#miscellaneous-tips" title="Li
 Built with <a href="https://www.sphinx-doc.org/">Sphinx</a> using a
 <a href="https://github.com/readthedocs/sphinx_rtd_theme">theme</a>
 provided by <a href="https://readthedocs.org">Read the Docs</a>.
-<jinja2.runtime.BlockReference object at 0x7f1198c5a170>
+<jinja2.runtime.BlockReference object at 0x7f1deffaf130>
 <p></p><div><a href='https://www.intel.com/content/www/us/en/privacy/intel-cookie-notice.html' data-cookie-notice='true'>Cookies</a> <a href='https://www.intel.com/content/www/us/en/privacy/intel-privacy-notice.html'>| Privacy</a> <a href="/#" data-wap_ref="dns" id="wap_dns"><small>| Your Privacy Choices</small></a> <a href=https://www.intel.com/content/www/us/en/privacy/privacy-residents-certain-states.html data-wap_ref="nac" id="wap_nac"><small>| Notice at Collection</small></a> </div> <p></p> <div>&copy; Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others. No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document, with the sole exception that code included in this document is licensed subject to the Zero-Clause BSD open source license (OBSD), <a href='http://opensource.org/licenses/0BSD'>http://opensource.org/licenses/0BSD</a>. </div>

llm/llama3/xpu/search.html

Lines changed: 1 addition & 1 deletion
@@ -103,7 +103,7 @@
 Built with <a href="https://www.sphinx-doc.org/">Sphinx</a> using a
 <a href="https://github.com/readthedocs/sphinx_rtd_theme">theme</a>
 provided by <a href="https://readthedocs.org">Read the Docs</a>.
-<jinja2.runtime.BlockReference object at 0x7f1198c59c90>
+<jinja2.runtime.BlockReference object at 0x7f1deff4c610>
 <p></p><div><a href='https://www.intel.com/content/www/us/en/privacy/intel-cookie-notice.html' data-cookie-notice='true'>Cookies</a> <a href='https://www.intel.com/content/www/us/en/privacy/intel-privacy-notice.html'>| Privacy</a> <a href="/#" data-wap_ref="dns" id="wap_dns"><small>| Your Privacy Choices</small></a> <a href=https://www.intel.com/content/www/us/en/privacy/privacy-residents-certain-states.html data-wap_ref="nac" id="wap_nac"><small>| Notice at Collection</small></a> </div> <p></p> <div>&copy; Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others. No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document, with the sole exception that code included in this document is licensed subject to the Zero-Clause BSD open source license (OBSD), <a href='http://opensource.org/licenses/0BSD'>http://opensource.org/licenses/0BSD</a>. </div>

llm/llama3/xpu/searchindex.js

Lines changed: 1 addition & 1 deletion
Some generated files are not rendered by default.
