Commit graph

839 commits

Author SHA1 Message Date
Shaojun Liu
f7b5a093a7
Merge CPU & XPU Dockerfiles with Serving Images and Refactor (#12815)
* Update Dockerfile

* Update Dockerfile

* Ensure scripts are executable

* Update Dockerfile

* Update Dockerfile

* Update Dockerfile

* Update Dockerfile

* Update Dockerfile

* Update Dockerfile

* update

* Update Dockerfile

* remove inference-cpu and inference-xpu

* update README
2025-02-17 14:23:22 +08:00
joan726
59e8e1e91e
Added ollama_portablze_zip_quickstart.zh-CN.md (#12822) 2025-02-14 18:54:12 +08:00
Jason Dai
a09552e59a
Update ollama quickstart (#12823) 2025-02-14 09:55:48 +08:00
Yuwen Hu
f67986021c
Update download link for Ollama portable zip QuickStart (#12821)
* Update download link for Ollama portable zip quickstart

* Update based on comments
2025-02-13 17:48:02 +08:00
Jason Dai
16e63cbc18
Update readme (#12820) 2025-02-13 14:26:04 +08:00
Yuwen Hu
68414afcb9
Add initial QuickStart for Ollama portable zip (#12817)
* Add initial quickstart for Ollama portable zip

* Small fix

* Fixed based on comments

* Small fix

* Add demo image for running ollama

* Update download link
2025-02-13 13:18:14 +08:00
binbin Deng
d093b75aa0
[NPU] Update driver installation in QuickStart (#12807) 2025-02-11 15:49:21 +08:00
binbin Deng
6ff7faa781
[NPU] Update deepseek support in python examples and quickstart (#12786) 2025-02-07 11:25:16 +08:00
Shaojun Liu
ee809e71df
add troubleshooting section (#12755) 2025-01-26 11:03:58 +08:00
Shaojun Liu
53aae24616
Add note about enabling Resizable BAR in BIOS for GPU setup (#12715) 2025-01-16 16:22:35 +08:00
binbin Deng
36bf3d8e29
[NPU doc] Update ARL product in QuickStart (#12708) 2025-01-15 15:57:06 +08:00
SONG Ge
e2d58f733e
Update ollama v0.5.1 document (#12699)
* Update ollama document version and known issue
2025-01-10 18:04:49 +08:00
joan726
584c1c5373
Update B580 CN doc (#12695) 2025-01-10 11:20:47 +08:00
Jason Dai
cbb8e2a2d5
Update documents (#12693) 2025-01-10 10:47:11 +08:00
Jason Dai
f9b29a4f56
Update B580 doc (#12691) 2025-01-10 08:59:35 +08:00
joan726
66d4385cc9
Update B580 CN Doc (#12686) 2025-01-09 19:10:57 +08:00
Jason Dai
aa9e70a347
Update B580 Doc (#12678) 2025-01-08 22:36:48 +08:00
Shaojun Liu
2c23ce2553
Create a BattleMage QuickStart (#12663)
* Create bmg_quickstart.md

* Update bmg_quickstart.md

* Clarify IPEX-LLM package installation based on use case

* Update bmg_quickstart.md

* Update bmg_quickstart.md
2025-01-08 14:58:37 +08:00
logicat
0534d7254f
Update docker_cpp_xpu_quickstart.md (#12667) 2025-01-08 09:56:56 +08:00
Yuwen Hu
381d448ee2
[NPU] Example & Quickstart updates (#12650)
* Remove model with optimize_model=False in NPU verified models tables, and remove related example

* Remove experimental in run optimized model section title

* Unify model table order & example cmd

* Move embedding example to separate folder & update quickstart example link

* Add Quickstart reference in main NPU readme

* Small fix

* Small fix

* Move save/load examples under NPU/HF-Transformers-AutoModels

* Add low-bit and polish arguments for LLM Python examples

* Small fix

* Add low-bit and polish arguments for Multi-Model examples

* Polish argument for Embedding models

* Polish argument for LLM CPP examples

* Add low-bit and polish argument for Save-Load examples

* Add accuracy tuning tips for examples

* Update NPU quickstart accuracy tuning with low-bit optimizations

* Add save/load section to quickstart

* Update CPP example sample output to EN

* Add installation regarding cmake for CPP examples

* Small fix

* Small fix

* Small fix

* Small fix

* Small fix

* Small fix

* Unify max prompt length to 512

* Change recommended low-bit for Qwen2.5-3B-Instruct to asym_int4

* Update based on comments

* Small fix
2025-01-07 13:52:41 +08:00
SONG Ge
550fa01649
[Doc] Update ipex-llm ollama troubleshooting for v0.4.6 (#12642)
* update ollama v0.4.6 troubleshooting

* update chinese ollama-doc
2025-01-02 17:28:54 +08:00
Yishuo Wang
2d08155513
remove bmm, which is only required in ipex 2.0 (#12630) 2024-12-27 17:28:57 +08:00
binbin Deng
796ee571a5
[NPU doc] Update verified platforms (#12621) 2024-12-26 17:39:13 +08:00
Mingqi Hu
0477fe6480
[docs] Update doc for latest open webui: 0.4.8 (#12591)
* Update open webui doc

* Resolve comments
2024-12-26 09:18:20 +08:00
binbin Deng
4e7e988f70
[NPU] Fix MTL and ARL support (#12580) 2024-12-19 16:55:30 +08:00
SONG Ge
28e81fda8e
Replace runner doc in ollama quickstart (#12575) 2024-12-18 19:05:28 +08:00
SONG Ge
f7a2bd21cf
Update ollama and llama.cpp readme (#12574) 2024-12-18 17:33:20 +08:00
binbin Deng
694d14b2b4
[NPU doc] Add ARL runtime configuration (#12562) 2024-12-17 16:08:42 +08:00
Yuwen Hu
d127a8654c
Small typo fixes (#12558) 2024-12-17 13:54:13 +08:00
binbin Deng
680ea7e4a8
[NPU doc] Update configuration for different platforms (#12554) 2024-12-17 10:15:09 +08:00
binbin Deng
caf15cc5ef
[NPU] Add IPEX_LLM_NPU_MTL to enable support on MTL (#12543) 2024-12-13 17:01:13 +08:00
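The commit above names the `IPEX_LLM_NPU_MTL` environment variable but not how it is set; a minimal sketch, assuming the usual enable-by-setting-to-1 convention (only the variable name comes from the commit title, the value `1` is an assumption):

```shell
# Hedged sketch: opt in to the MTL NPU path before launching a workload.
# Only the name IPEX_LLM_NPU_MTL appears in the commit; the value "1"
# and the enable-via-env convention are assumptions.
export IPEX_LLM_NPU_MTL=1
echo "IPEX_LLM_NPU_MTL=$IPEX_LLM_NPU_MTL"
```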
SONG Ge
5402fc65c8
[Ollama] Update ipex-llm ollama readme to v0.4.6 (#12542)
* Update ipex-llm ollama readme to v0.4.6
2024-12-13 16:26:12 +08:00
Yuwen Hu
b747f3f6b8
Small fix to GPU installation guide (#12536) 2024-12-13 10:02:47 +08:00
binbin Deng
6fc27da9c1
[NPU] Update glm-edge support in docs (#12529) 2024-12-12 11:14:09 +08:00
Jinhe
5e1416c9aa
fix readme for npu cpp examples and llama.cpp (#12505)
* fix cpp readme

* fix cpp readme

* fix cpp readme
2024-12-05 12:32:42 +08:00
joan726
ae9c2154f4
Added cross-links (#12494)
* Update install_linux_gpu.zh-CN.md

Add the link for guide of windows installation.

* Update install_windows_gpu.zh-CN.md

Add the link for guide of linux installation.

* Update install_windows_gpu.md

Add the link for guide of Linux installation.

* Update install_linux_gpu.md

Add the link for guide of Windows installation.

* Update install_linux_gpu.md

Modify based on comments.

* Update install_windows_gpu.md

Modify based on comments
2024-12-04 16:53:13 +08:00
Yuwen Hu
aee9acb303
Add NPU QuickStart & update example links (#12470)
* Add initial NPU quickstart (c++ part unfinished)

* Small update

* Update based on comments

* Update main readme

* Remove LLaMA description

* Small fix

* Small fix

* Remove subsection link in main README

* Small fix

* Update based on comments

* Small fix

* TOC update and other small fixes

* Update for Chinese main readme

* Update based on comments and other small fixes

* Change order
2024-12-02 17:03:10 +08:00
Yuwen Hu
a2272b70d3
Small fix in llama.cpp troubleshooting guide (#12457) 2024-11-27 19:22:11 +08:00
Chu,Youcheng
acd77d9e87
Remove env variable BIGDL_LLM_XMX_DISABLED in documentation (#12445)
* fix: remove BIGDL_LLM_XMX_DISABLED in mddocs

* fix: remove set SYCL_CACHE_PERSISTENT=1 in example

* fix: remove BIGDL_LLM_XMX_DISABLED in workflows

* fix: merge igpu and A-series Graphics

* fix: remove set BIGDL_LLM_XMX_DISABLED=1 in example

* fix: remove BIGDL_LLM_XMX_DISABLED in workflows

* fix: merge igpu and A-series Graphics

* fix: textual adjustment

* fix: textual adjustment

* fix: textual adjustment
2024-11-27 11:16:36 +08:00
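Since the commit above removes `BIGDL_LLM_XMX_DISABLED` (and the `set SYCL_CACHE_PERSISTENT=1` lines) from the documentation, a quick check like the following can confirm a docs checkout carries no leftovers; the `docs/` path is an assumption about the repository layout:

```shell
# Hedged helper: scan an assumed docs/ directory for the variables this PR
# removed; prints "clean" when no references remain (or the path is absent).
grep -rn -e "BIGDL_LLM_XMX_DISABLED" -e "SYCL_CACHE_PERSISTENT" docs/ 2>/dev/null || echo "clean"
```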
Jun Wang
cb7b08948b
update vllm-docker-quick-start for vllm0.6.2 (#12392)
* update vllm-docker-quick-start for vllm0.6.2

* [UPDATE] rm max-num-seqs parameter in vllm-serving script
2024-11-27 08:47:03 +08:00
joan726
a9cb70a71c
Add install_windows_gpu.zh-CN.md and install_linux_gpu.zh-CN.md (#12409)
* Add install_linux_gpu.zh-CN.md

* Add install_windows_gpu.zh-CN.md

* Update llama_cpp_quickstart.zh-CN.md

Related links updated to zh-CN version.

* Update install_linux_gpu.zh-CN.md

Added link to English version.

* Update install_windows_gpu.zh-CN.md

Add the link to English version.

* Update install_windows_gpu.md

Add the link to CN version.

* Update install_linux_gpu.md

Add the link to CN version.

* Update README.zh-CN.md

Modified the related link to zh-CN version.
2024-11-19 14:39:53 +08:00
Yuwen Hu
d1cde7fac4
Tiny doc fix (#12405) 2024-11-15 10:28:38 +08:00
Xu, Shuo
6726b198fd
Update readme & doc for the vllm upgrade to v0.6.2 (#12399)
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-11-14 10:28:15 +08:00
Jun Wang
4376fdee62
Decouple openwebui and ollama in inference-cpp-xpu dockerfile (#12382)
* remove the openwebui in inference-cpp-xpu dockerfile

* update docker_cpp_xpu_quickstart.md

* add sample output in inference-cpp/readme

* remove the openwebui in main readme

* remove the openwebui in main readme
2024-11-12 20:15:23 +08:00
Shaojun Liu
fad15c8ca0
Update fastchat demo script (#12367)
* Update README.md

* Update vllm_docker_quickstart.md
2024-11-08 15:42:17 +08:00
Xin Qiu
7ef7696956
update linux installation doc (#12365)
* update linux doc

* update
2024-11-08 09:44:58 +08:00
Xin Qiu
520af4e9b5
Update install_linux_gpu.md (#12353) 2024-11-07 16:08:01 +08:00
Jinhe
71ea539351
Add troubleshootings for ollama and llama.cpp (#12358)
* add ollama troubleshoot en

* zh ollama troubleshoot

* llamacpp trouble shoot

* llamacpp trouble shoot

* fix

* save gpu memory
2024-11-07 15:49:20 +08:00
Xu, Shuo
ce0c6ae423
Update Readme for FastChat docker demo (#12354)
* update Readme for FastChat docker demo

* update readme

* add 'Serving with FastChat' part in docs

* polish docs

---------

Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-11-07 15:22:42 +08:00
Jin, Qiao
3df6195cb0
Fix application quickstart (#12305)
* fix graphrag quickstart

* fix axolotl quickstart

* fix ragflow quickstart

* fix ragflow quickstart

* fix graphrag toc

* fix comments

* fix comment

* fix comments
2024-10-31 16:57:35 +08:00