Commit graph

70 commits

Author SHA1 Message Date
SONG Ge
e2d58f733e
Update ollama v0.5.1 document (#12699)
* Update ollama document version and known issue
2025-01-10 18:04:49 +08:00
joan726
584c1c5373
Update B580 CN doc (#12695) 2025-01-10 11:20:47 +08:00
Jason Dai
f9b29a4f56
Update B580 doc (#12691) 2025-01-10 08:59:35 +08:00
joan726
66d4385cc9
Update B580 CN Doc (#12686) 2025-01-09 19:10:57 +08:00
Jason Dai
aa9e70a347
Update B580 Doc (#12678) 2025-01-08 22:36:48 +08:00
Shaojun Liu
2c23ce2553
Create a BattleMage QuickStart (#12663)
* Create bmg_quickstart.md

* Update bmg_quickstart.md

* Clarify IPEX-LLM package installation based on use case

* Update bmg_quickstart.md

* Update bmg_quickstart.md
2025-01-08 14:58:37 +08:00
Yuwen Hu
381d448ee2
[NPU] Example & Quickstart updates (#12650)
* Remove model with optimize_model=False in NPU verified models tables, and remove related example

* Remove experimental in run optimized model section title

* Unify model table order & example cmd

* Move embedding example to separate folder & update quickstart example link

* Add Quickstart reference in main NPU readme

* Small fix

* Small fix

* Move save/load examples under NPU/HF-Transformers-AutoModels

* Add low-bit and polish arguments for LLM Python examples

* Small fix

* Add low-bit and polish arguments for Multi-Model examples

* Polish argument for Embedding models

* Polish argument for LLM CPP examples

* Add low-bit and polish argument for Save-Load examples

* Add accuracy tuning tips for examples

* Update NPU quickstart accuracy tuning with low-bit optimizations

* Add save/load section to quickstart

* Update CPP example sample output to EN

* Add installation regarding cmake for CPP examples

* Small fix

* Small fix

* Small fix

* Small fix

* Small fix

* Small fix

* Unify max prompt length to 512

* Change recommended low-bit for Qwen2.5-3B-Instruct to asym_int4

* Update based on comments

* Small fix
2025-01-07 13:52:41 +08:00
SONG Ge
550fa01649
[Doc] Update ipex-llm ollama troubleshooting for v0.4.6 (#12642)
* update ollama v0.4.6 troubleshooting

* update Chinese ollama doc
2025-01-02 17:28:54 +08:00
binbin Deng
796ee571a5
[NPU doc] Update verified platforms (#12621) 2024-12-26 17:39:13 +08:00
Mingqi Hu
0477fe6480
[docs] Update doc for latest open webui: 0.4.8 (#12591)
* Update open webui doc

* Resolve comments
2024-12-26 09:18:20 +08:00
binbin Deng
4e7e988f70
[NPU] Fix MTL and ARL support (#12580) 2024-12-19 16:55:30 +08:00
SONG Ge
28e81fda8e
Replace runner doc in ollama quickstart (#12575) 2024-12-18 19:05:28 +08:00
SONG Ge
f7a2bd21cf
Update ollama and llama.cpp readme (#12574) 2024-12-18 17:33:20 +08:00
binbin Deng
694d14b2b4
[NPU doc] Add ARL runtime configuration (#12562) 2024-12-17 16:08:42 +08:00
Yuwen Hu
d127a8654c
Small typo fixes (#12558) 2024-12-17 13:54:13 +08:00
binbin Deng
680ea7e4a8
[NPU doc] Update configuration for different platforms (#12554) 2024-12-17 10:15:09 +08:00
binbin Deng
caf15cc5ef
[NPU] Add IPEX_LLM_NPU_MTL to enable support on mtl (#12543) 2024-12-13 17:01:13 +08:00
SONG Ge
5402fc65c8
[Ollama] Update ipex-llm ollama readme to v0.4.6 (#12542)
* Update ipex-llm ollama readme to v0.4.6
2024-12-13 16:26:12 +08:00
binbin Deng
6fc27da9c1
[NPU] Update glm-edge support in docs (#12529) 2024-12-12 11:14:09 +08:00
Jinhe
5e1416c9aa
fix readme for npu cpp examples and llama.cpp (#12505)
* fix cpp readme

* fix cpp readme

* fix cpp readme
2024-12-05 12:32:42 +08:00
joan726
ae9c2154f4
Added cross-links (#12494)
* Update install_linux_gpu.zh-CN.md

Add the link to the Windows installation guide.

* Update install_windows_gpu.zh-CN.md

Add the link to the Linux installation guide.

* Update install_windows_gpu.md

Add the link to the Linux installation guide.

* Update install_linux_gpu.md

Add the link to the Windows installation guide.

* Update install_linux_gpu.md

Modify based on comments.

* Update install_windows_gpu.md

Modify based on comments.
2024-12-04 16:53:13 +08:00
Yuwen Hu
aee9acb303
Add NPU QuickStart & update example links (#12470)
* Add initial NPU quickstart (c++ part unfinished)

* Small update

* Update based on comments

* Update main readme

* Remove LLaMA description

* Small fix

* Small fix

* Remove subsection link in main README

* Small fix

* Update based on comments

* Small fix

* TOC update and other small fixes

* Update for Chinese main readme

* Update based on comments and other small fixes

* Change order
2024-12-02 17:03:10 +08:00
Yuwen Hu
a2272b70d3
Small fix in llama.cpp troubleshooting guide (#12457) 2024-11-27 19:22:11 +08:00
Chu,Youcheng
acd77d9e87
Remove env variable BIGDL_LLM_XMX_DISABLED in documentation (#12445)
* fix: remove BIGDL_LLM_XMX_DISABLED in mddocs

* fix: remove set SYCL_CACHE_PERSISTENT=1 in example

* fix: remove BIGDL_LLM_XMX_DISABLED in workflows

* fix: merge igpu and A-series Graphics

* fix: remove set BIGDL_LLM_XMX_DISABLED=1 in example

* fix: remove BIGDL_LLM_XMX_DISABLED in workflows

* fix: merge igpu and A-series Graphics

* fix: textual adjustment

* fix: textual adjustment

* fix: textual adjustment
2024-11-27 11:16:36 +08:00
joan726
a9cb70a71c
Add install_windows_gpu.zh-CN.md and install_linux_gpu.zh-CN.md (#12409)
* Add install_linux_gpu.zh-CN.md

* Add install_windows_gpu.zh-CN.md

* Update llama_cpp_quickstart.zh-CN.md

Related links updated to zh-CN version.

* Update install_linux_gpu.zh-CN.md

Added the link to the English version.

* Update install_windows_gpu.zh-CN.md

Add the link to the English version.

* Update install_windows_gpu.md

Add the link to the CN version.

* Update install_linux_gpu.md

Add the link to the CN version.

* Update README.zh-CN.md

Modified the related link to the zh-CN version.
2024-11-19 14:39:53 +08:00
Yuwen Hu
d1cde7fac4
Tiny doc fix (#12405) 2024-11-15 10:28:38 +08:00
Xin Qiu
7ef7696956
update linux installation doc (#12365)
* update linux doc

* update
2024-11-08 09:44:58 +08:00
Xin Qiu
520af4e9b5
Update install_linux_gpu.md (#12353) 2024-11-07 16:08:01 +08:00
Jinhe
71ea539351
Add troubleshooting for ollama and llama.cpp (#12358)
* add ollama troubleshooting (EN)

* add ollama troubleshooting (ZH)

* add llama.cpp troubleshooting

* add llama.cpp troubleshooting

* fix

* save gpu memory
2024-11-07 15:49:20 +08:00
Jin, Qiao
3df6195cb0
Fix application quickstart (#12305)
* fix graphrag quickstart

* fix axolotl quickstart

* fix ragflow quickstart

* fix ragflow quickstart

* fix graphrag toc

* fix comments

* fix comment

* fix comments
2024-10-31 16:57:35 +08:00
joan726
0bbc04b5ec
Add ollama_quickstart.zh-CN.md (#12284)
* Add ollama_quickstart.zh-CN.md

Add ollama_quickstart.zh-CN.md

* Update ollama_quickstart.zh-CN.md

Add Chinese and English switching

* Update ollama_quickstart.md

Add Chinese and English switching

* Update README.zh-CN.md

Modify the related link to ollama_quickstart.zh-CN.md

* Update ollama_quickstart.zh-CN.md

Modified based on comments.

* Update ollama_quickstart.zh-CN.md

Modified based on comments.
2024-10-29 15:12:44 +08:00
Yuwen Hu
42a528ded9
Small update to MTL iGPU Linux Prerequisites installation guide (#12281)
* Small update to MTL iGPU Linux Prerequisites installation guide

* Small fix
2024-10-28 14:12:07 +08:00
Yuwen Hu
16074ae2a4
Update Linux prerequisites installation guide for MTL iGPU (#12263)
* Update Linux prerequisites installation guide for MTL iGPU

* Further link update

* Small fixes

* Small fix

* Update based on comments

* Small fix

* Make oneAPI installation a shared section for both MTL iGPU and other GPU

* Small fix

* Small fix

* Clarify description
2024-10-28 09:27:14 +08:00
joan726
e0a95eb2d6
Add llama_cpp_quickstart.zh-CN.md (#12221) 2024-10-24 16:08:31 +08:00
Yuwen Hu
a768d71581
Small fix to LNL installation guide (#12192) 2024-10-14 12:03:03 +08:00
Yuwen Hu
ddcdf47539
Support Windows ARL release (#12183)
* Support release for ARL

* Small fix

* Small fix to doc

* Temp for test

* Remove temp commit for test
2024-10-11 18:30:52 +08:00
Yuwen Hu
ac44e98b7d
Update Windows guide regarding LNL support (#12178)
* Update windows guide regarding LNL support

* Update based on comments
2024-10-11 09:20:08 +08:00
Guancheng Fu
0ef7e1d101
fix vllm docs (#12176) 2024-10-10 15:44:36 +08:00
Ch1y0q
9b75806d14
Update Windows GPU quickstart regarding demo (#12124)
* use Qwen2-1.5B-Instruct in demo

* update

* add reference link

* update

* update
2024-09-29 18:08:49 +08:00
Ruonan Wang
a767438546
fix typo (#12076)
* fix typo

* fix
2024-09-13 11:44:42 +08:00
Ruonan Wang
3f0b24ae2b
update cpp quickstart (#12075)
* update cpp quickstart

* fix style
2024-09-13 11:35:32 +08:00
Ruonan Wang
48d9092b5a
upgrade OneAPI version for cpp Windows (#12063)
* update version

* update quickstart
2024-09-12 11:12:12 +08:00
Shaojun Liu
e5581e6ded
Select the Appropriate APT Repository Based on CPU Type (#12023) 2024-09-05 17:06:07 +08:00
Yuwen Hu
643458d8f0
Update GraphRAG QuickStart (#11995)
* Update GraphRAG QuickStart

* Further updates

* Small fixes

* Small fix
2024-09-03 15:52:08 +08:00
Jinhe
e895e1b4c5
Modification on llama.cpp readme after the latest IPEX-LLM update (#11971)
* update readme after ipex-llm update

* update readme after ipex-llm update

* rebase & delete redundancy

* revise

* add numbers for troubleshooting
2024-08-30 11:36:45 +08:00
Ch1y0q
77b04efcc5
add notes for SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS (#11936)
* add notes for `SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS`

* also update other quickstart
2024-08-30 09:26:47 +08:00
Jinhe
6fc9340d53
restore ollama webui quickstart (#11955) 2024-08-29 17:53:19 +08:00
Jinhe
ec67ee7177
added accelerate version specification in open webui quickstart (#11948) 2024-08-28 15:02:39 +08:00
Ruonan Wang
460bc96d32
update version of llama.cpp / ollama (#11930)
* update version

* fix version
2024-08-27 21:21:44 +08:00
Ch1y0q
5a8fc1baa2
update troubleshooting for llama.cpp and ollama (#11890)
* update troubleshooting for llama.cpp and ollama

* update

* update
2024-08-26 20:55:23 +08:00