[LLM] Add Arc demo gif to readme and readthedocs (#8958)
* Add arc demo in main readme
* Small style fix
* Realize using table
* Update based on comments
* Small update
* Try to solve with height problem
* Small fix
* Update demo for inner llm readme
* Update demo video for readthedocs
* Small fix
* Update based on comments
parent 448a9e813a
commit cb534ed5c4

4 changed files with 84 additions and 18 deletions
README.md (32 changes)
@@ -17,12 +17,34 @@
 - Over 20 models have been optimized/verified on `bigdl-llm`, including *LLaMA/LLaMA2, ChatGLM/ChatGLM2, MPT, Falcon, Dolly-v1/Dolly-v2, StarCoder, Whisper, QWen, Baichuan, MOSS,* and more; see the complete list [here](python/llm/README.md#verified-models).
 
 ### `bigdl-llm` Demos
 
-See the ***optimized performance*** of `chatglm2-6b`, `llama-2-13b-chat`, and `starcoder-15.5b` models on a 12th Gen Intel Core CPU below.
+See the ***optimized performance*** of `chatglm2-6b` and `llama-2-13b-chat` models on 12th Gen Intel Core CPU and Intel Arc GPU below.
 
-<p align="center">
-<a href="https://llm-assets.readthedocs.io/en/latest/_images/chatglm2-6b.gif"><img src="https://llm-assets.readthedocs.io/en/latest/_images/chatglm2-6b.gif" width='30%'></a> <a href="https://llm-assets.readthedocs.io/en/latest/_images/llama-2-13b-chat.gif"><img src="https://llm-assets.readthedocs.io/en/latest/_images/llama-2-13b-chat.gif" width='30%'></a> <a href="https://llm-assets.readthedocs.io/en/latest/_images/llm-15b5.gif"><img src="https://llm-assets.readthedocs.io/en/latest/_images/llm-15b5.gif" width='30%'></a>
-<img src="https://llm-assets.readthedocs.io/en/latest/_images/llm-models3.png" width='76%'>
-</p>
+<table width="100%">
+  <tr>
+    <td align="center" colspan="2">12th Gen Intel Core CPU</td>
+    <td align="center" colspan="2">Intel Arc GPU</td>
+  </tr>
+  <tr>
+    <td>
+      <a href="https://llm-assets.readthedocs.io/en/latest/_images/chatglm2-6b.gif"><img src="https://llm-assets.readthedocs.io/en/latest/_images/chatglm2-6b.gif"></a>
+    </td>
+    <td>
+      <a href="https://llm-assets.readthedocs.io/en/latest/_images/llama-2-13b-chat.gif"><img src="https://llm-assets.readthedocs.io/en/latest/_images/llama-2-13b-chat.gif"></a>
+    </td>
+    <td>
+      <a href="https://llm-assets.readthedocs.io/en/latest/_images/chatglm2-arc.gif"><img src="https://llm-assets.readthedocs.io/en/latest/_images/chatglm2-arc.gif"></a>
+    </td>
+    <td>
+      <a href="https://llm-assets.readthedocs.io/en/latest/_images/llama2-13b-arc.gif"><img src="https://llm-assets.readthedocs.io/en/latest/_images/llama2-13b-arc.gif"></a>
+    </td>
+  </tr>
+  <tr>
+    <td align="center" width="25%"><code>chatglm2-6b</code></td>
+    <td align="center" width="25%"><code>llama-2-13b-chat</code></td>
+    <td align="center" width="25%"><code>chatglm2-6b</code></td>
+    <td align="center" width="25%"><code>llama-2-13b-chat</code></td>
+  </tr>
+</table>
 
 ### `bigdl-llm` quickstart
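The hunk above swaps a centered `<p>` of three gifs for a four-column HTML table whose first row uses two `colspan="2"` header cells (CPU vs. Arc GPU). A quick structural sanity check of that layout, as a standalone sketch using only the Python standard library (the embedded table is a trimmed copy of the diff's markup, with image URLs shortened to placeholders):

```python
# Verify the 4-column demo-table layout introduced by this commit:
# every row should span exactly 4 columns once colspan is accounted for.
from html.parser import HTMLParser

# Trimmed copy of the table from the diff (image URLs shortened).
TABLE = """
<table width="100%">
  <tr>
    <td align="center" colspan="2">12th Gen Intel Core CPU</td>
    <td align="center" colspan="2">Intel Arc GPU</td>
  </tr>
  <tr>
    <td><img src="chatglm2-6b.gif"></td>
    <td><img src="llama-2-13b-chat.gif"></td>
    <td><img src="chatglm2-arc.gif"></td>
    <td><img src="llama2-13b-arc.gif"></td>
  </tr>
  <tr>
    <td align="center" width="25%"><code>chatglm2-6b</code></td>
    <td align="center" width="25%"><code>llama-2-13b-chat</code></td>
    <td align="center" width="25%"><code>chatglm2-6b</code></td>
    <td align="center" width="25%"><code>llama-2-13b-chat</code></td>
  </tr>
</table>
"""

class RowSpanCounter(HTMLParser):
    """Sums the effective column count of every <tr>, honoring colspan."""
    def __init__(self):
        super().__init__()
        self.rows = []

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self.rows.append(0)
        elif tag == "td" and self.rows:
            # Cells without an explicit colspan count as one column.
            self.rows[-1] += int(dict(attrs).get("colspan", "1"))

counter = RowSpanCounter()
counter.feed(TABLE)
print(counter.rows)  # -> [4, 4, 4]: header, gif, and caption rows all line up
```

Any row whose sum deviated from 4 would render with misaligned cells, which is the kind of layout drift the table structure is meant to prevent.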
@@ -3,9 +3,9 @@ BigDL-LLM
 .. raw:: html
 
    <p>
-     <strong>BigDL-LLM</strong> is a library for running <strong>LLM</strong> (large language model) on your Intel <strong>laptop</strong> or <strong>GPU</strong> using INT4 with very low latency <sup><a href="#footnote-perf" id="ref-perf">[1]</a></sup> (for any <strong>PyTorch</strong> model).
+     <a href="https://github.com/intel-analytics/BigDL/tree/main/python/llm"><code><span>bigdl-llm</span></code></a> is a library for running <strong>LLM</strong> (large language model) on Intel <strong>XPU</strong> (from <em>Laptop</em> to <em>GPU</em> to <em>Cloud</em>) using <strong>INT4</strong> with very low latency <sup><a href="#footnote-perf" id="ref-perf">[1]</a></sup> (for any <strong>PyTorch</strong> model).
    </p>
 
 -------
@@ -33,14 +33,36 @@ Latest update
 ``bigdl-llm`` demos
 ============================================
 
-See the **optimized performance** of ``chatglm2-6b``, ``llama-2-13b-chat``, and ``starcoder-15.5b`` models on a 12th Gen Intel Core CPU below.
+See the **optimized performance** of ``chatglm2-6b`` and ``llama-2-13b-chat`` models on 12th Gen Intel Core CPU and Intel Arc GPU below.
 
 .. raw:: html
 
-   <p align="center">
-   <a href="https://llm-assets.readthedocs.io/en/latest/_images/chatglm2-6b.gif"><img src="https://llm-assets.readthedocs.io/en/latest/_images/chatglm2-6b.gif" width='30%'></a> <a href="https://llm-assets.readthedocs.io/en/latest/_images/llama-2-13b-chat.gif"><img src="https://llm-assets.readthedocs.io/en/latest/_images/llama-2-13b-chat.gif" width='30%'></a> <a href="https://llm-assets.readthedocs.io/en/latest/_images/llm-15b5.gif"><img src="https://llm-assets.readthedocs.io/en/latest/_images/llm-15b5.gif" width='30%'></a>
-   <img src="https://llm-assets.readthedocs.io/en/latest/_images/llm-models3.png" width='76%'>
-   </p>
+   <table width="100%">
+     <tr>
+       <td align="center" colspan="2">12th Gen Intel Core CPU</td>
+       <td align="center" colspan="2">Intel Arc GPU</td>
+     </tr>
+     <tr>
+       <td>
+         <a href="https://llm-assets.readthedocs.io/en/latest/_images/chatglm2-6b.gif"><img src="https://llm-assets.readthedocs.io/en/latest/_images/chatglm2-6b.gif"></a>
+       </td>
+       <td>
+         <a href="https://llm-assets.readthedocs.io/en/latest/_images/llama-2-13b-chat.gif"><img src="https://llm-assets.readthedocs.io/en/latest/_images/llama-2-13b-chat.gif"></a>
+       </td>
+       <td>
+         <a href="https://llm-assets.readthedocs.io/en/latest/_images/chatglm2-arc.gif"><img src="https://llm-assets.readthedocs.io/en/latest/_images/chatglm2-arc.gif"></a>
+       </td>
+       <td>
+         <a href="https://llm-assets.readthedocs.io/en/latest/_images/llama2-13b-arc.gif"><img src="https://llm-assets.readthedocs.io/en/latest/_images/llama2-13b-arc.gif"></a>
+       </td>
+     </tr>
+     <tr>
+       <td align="center" width="25%"><code>chatglm2-6b</code></td>
+       <td align="center" width="25%"><code>llama-2-13b-chat</code></td>
+       <td align="center" width="25%"><code>chatglm2-6b</code></td>
+       <td align="center" width="25%"><code>llama-2-13b-chat</code></td>
+     </tr>
+   </table>
 
 ``bigdl-llm`` quickstart
 ============================================
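A note on the reStructuredText side of this change: the HTML table only takes effect because every line of it sits indented under the ``.. raw:: html`` directive; at zero indentation the directive body would end and Sphinx would treat the markup as ordinary document text. A minimal sketch of the required shape (section names taken from the diff, table body elided):

```rst
``bigdl-llm`` demos
============================================

.. raw:: html

   <table width="100%">
     <!-- table rows go here, indented relative to the directive -->
   </table>
```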
@@ -8,12 +8,34 @@
 - `bigdl-llm` now supports Intel Arc or Flex GPU; see the latest GPU examples [here](example/gpu).
 
 ### Demos
 
-See the ***optimized performance*** of `chatglm2-6b`, `llama-2-13b-chat`, and `starcoder-15.5b` models on a 12th Gen Intel Core CPU below.
+See the ***optimized performance*** of `chatglm2-6b` and `llama-2-13b-chat` models on 12th Gen Intel Core CPU and Intel Arc GPU below.
 
-<p align="center">
-<a href="https://llm-assets.readthedocs.io/en/latest/_images/chatglm2-6b.gif"><img src="https://llm-assets.readthedocs.io/en/latest/_images/chatglm2-6b.gif" width='33%'></a> <a href="https://llm-assets.readthedocs.io/en/latest/_images/llama-2-13b-chat.gif"><img src="https://llm-assets.readthedocs.io/en/latest/_images/llama-2-13b-chat.gif" width='33%'></a> <a href="https://llm-assets.readthedocs.io/en/latest/_images/llm-15b5.gif"><img src="https://llm-assets.readthedocs.io/en/latest/_images/llm-15b5.gif" width='33%'></a>
-<img src="https://llm-assets.readthedocs.io/en/latest/_images/llm-models3.png" width='85%'>
-</p>
+<table width="100%">
+  <tr>
+    <td align="center" colspan="2">12th Gen Intel Core CPU</td>
+    <td align="center" colspan="2">Intel Arc GPU</td>
+  </tr>
+  <tr>
+    <td>
+      <a href="https://llm-assets.readthedocs.io/en/latest/_images/chatglm2-6b.gif"><img src="https://llm-assets.readthedocs.io/en/latest/_images/chatglm2-6b.gif"></a>
+    </td>
+    <td>
+      <a href="https://llm-assets.readthedocs.io/en/latest/_images/llama-2-13b-chat.gif"><img src="https://llm-assets.readthedocs.io/en/latest/_images/llama-2-13b-chat.gif"></a>
+    </td>
+    <td>
+      <a href="https://llm-assets.readthedocs.io/en/latest/_images/chatglm2-arc.gif"><img src="https://llm-assets.readthedocs.io/en/latest/_images/chatglm2-arc.gif"></a>
+    </td>
+    <td>
+      <a href="https://llm-assets.readthedocs.io/en/latest/_images/llama2-13b-arc.gif"><img src="https://llm-assets.readthedocs.io/en/latest/_images/llama2-13b-arc.gif"></a>
+    </td>
+  </tr>
+  <tr>
+    <td align="center" width="25%"><code>chatglm2-6b</code></td>
+    <td align="center" width="25%"><code>llama-2-13b-chat</code></td>
+    <td align="center" width="25%"><code>chatglm2-6b</code></td>
+    <td align="center" width="25%"><code>llama-2-13b-chat</code></td>
+  </tr>
+</table>
 
 ### Verified models