Add NPU demo gif to main readme (#12503)
* Add NPU demo gif to main readme
* Small fix
* Update based on comments
* Test on style fix
parent f56a111aa2
commit 727f29968c
1 changed file with 10 additions and 10 deletions

README.md | 20 ++++++++++----------
@@ -78,20 +78,20 @@ See demos of running local LLMs *on Intel Iris iGPU, Intel Core Ultra iGPU, sing
 <table width="100%">
   <tr>
-    <td align="center" colspan="1"><strong>Intel Iris iGPU</strong></td>
-    <td align="center" colspan="1"><strong>Intel Core Ultra iGPU</strong></td>
+    <td align="center" colspan="1"><strong>Intel Core Ultra (Series 1) iGPU</strong></td>
+    <td align="center" colspan="1"><strong>Intel Core Ultra (Series 2) NPU</strong></td>
     <td align="center" colspan="1"><strong>Intel Arc dGPU</strong></td>
     <td align="center" colspan="1"><strong>2-Card Intel Arc dGPUs</strong></td>
   </tr>
   <tr>
     <td>
-      <a href="https://llm-assets.readthedocs.io/en/latest/_images/iris_phi3-3.8B_q4_0_llamacpp_long.gif" target="_blank">
-        <img src="https://llm-assets.readthedocs.io/en/latest/_images/iris_phi3-3.8B_q4_0_llamacpp_long.gif" width=100%; />
+      <a href="https://llm-assets.readthedocs.io/en/latest/_images/mtl_mistral-7B_q4_k_m_ollama.gif" target="_blank">
+        <img src="https://llm-assets.readthedocs.io/en/latest/_images/mtl_mistral-7B_q4_k_m_ollama.gif" width=100%; />
       </a>
     </td>
     <td>
-      <a href="https://llm-assets.readthedocs.io/en/latest/_images/mtl_mistral-7B_q4_k_m_ollama.gif" target="_blank">
-        <img src="https://llm-assets.readthedocs.io/en/latest/_images/mtl_mistral-7B_q4_k_m_ollama.gif" width=100%; />
+      <a href="https://llm-assets.readthedocs.io/en/latest/_images/npu_llama3.2-3B.gif" target="_blank">
+        <img src="https://llm-assets.readthedocs.io/en/latest/_images/npu_llama3.2-3B.gif" width=100%; />
       </a>
     </td>
     <td>
@@ -107,16 +107,16 @@ See demos of running local LLMs *on Intel Iris iGPU, Intel Core Ultra iGPU, sing
   </tr>
   <tr>
     <td align="center" width="25%">
-      <a href="docs/mddocs/Quickstart/llama_cpp_quickstart.md">llama.cpp (Phi-3-mini Q4_0)</a>
+      <a href="docs/mddocs/Quickstart/ollama_quickstart.md">Ollama <br> (Mistral-7B Q4_K) </a>
     </td>
     <td align="center" width="25%">
-      <a href="docs/mddocs/Quickstart/ollama_quickstart.md">Ollama (Mistral-7B Q4_K) </a>
+      <a href="docs/mddocs/Quickstart/npu_quickstart.md">HuggingFace <br> (Llama3.2-3B SYM_INT4)</a>
     </td>
     <td align="center" width="25%">
-      <a href="docs/mddocs/Quickstart/webui_quickstart.md">TextGeneration-WebUI (Llama3-8B FP8) </a>
+      <a href="docs/mddocs/Quickstart/webui_quickstart.md">TextGeneration-WebUI <br> (Llama3-8B FP8) </a>
     </td>
     <td align="center" width="25%">
-      <a href="docs/mddocs/Quickstart/fastchat_quickstart.md">FastChat (QWen1.5-32B FP6)</a>
+      <a href="docs/mddocs/Quickstart/fastchat_quickstart.md">FastChat <br> (QWen1.5-32B FP6)</a>
     </td> </tr>
 </table>