Commit graph

904 commits

Author SHA1 Message Date
joan726
ae9c2154f4
Added cross-links (#12494)
* Update install_linux_gpu.zh-CN.md

Add the link to the Windows installation guide.

* Update install_windows_gpu.zh-CN.md

Add the link to the Linux installation guide.

* Update install_windows_gpu.md

Add the link to the Linux installation guide.

* Update install_linux_gpu.md

Add the link to the Windows installation guide.

* Update install_linux_gpu.md

Modify based on comments.

* Update install_windows_gpu.md

Modify based on comments
2024-12-04 16:53:13 +08:00
Yuwen Hu
aee9acb303
Add NPU QuickStart & update example links (#12470)
* Add initial NPU quickstart (c++ part unfinished)

* Small update

* Update based on comments

* Update main readme

* Remove LLaMA description

* Small fix

* Small fix

* Remove subsection link in main README

* Small fix

* Update based on comments

* Small fix

* TOC update and other small fixes

* Update for Chinese main readme

* Update based on comments and other small fixes

* Change order
2024-12-02 17:03:10 +08:00
Yuwen Hu
a2272b70d3
Small fix in llama.cpp troubleshooting guide (#12457) 2024-11-27 19:22:11 +08:00
Chu,Youcheng
acd77d9e87
Remove env variable BIGDL_LLM_XMX_DISABLED in documentation (#12445)
* fix: remove BIGDL_LLM_XMX_DISABLED in mddocs

* fix: remove set SYCL_CACHE_PERSISTENT=1 in example

* fix: remove BIGDL_LLM_XMX_DISABLED in workflows

* fix: merge igpu and A-series Graphics

* fix: remove set BIGDL_LLM_XMX_DISABLED=1 in example

* fix: remove BIGDL_LLM_XMX_DISABLED in workflows

* fix: merge igpu and A-series Graphics

* fix: textual adjustment

* fix: textual adjustment

* fix: textual adjustment
2024-11-27 11:16:36 +08:00
Jun Wang
cb7b08948b
update vllm-docker-quick-start for vllm 0.6.2 (#12392)
* update vllm-docker-quick-start for vllm 0.6.2

* [UPDATE] rm max-num-seqs parameter in vllm-serving script
2024-11-27 08:47:03 +08:00
joan726
a9cb70a71c
Add install_windows_gpu.zh-CN.md and install_linux_gpu.zh-CN.md (#12409)
* Add install_linux_gpu.zh-CN.md

* Add install_windows_gpu.zh-CN.md

* Update llama_cpp_quickstart.zh-CN.md

Related links updated to zh-CN version.

* Update install_linux_gpu.zh-CN.md

Added link to English version.

* Update install_windows_gpu.zh-CN.md

Add the link to English version.

* Update install_windows_gpu.md

Add the link to CN version.

* Update install_linux_gpu.md

Add the link to CN version.

* Update README.zh-CN.md

Modified the related link to zh-CN version.
2024-11-19 14:39:53 +08:00
Yuwen Hu
d1cde7fac4
Tiny doc fix (#12405) 2024-11-15 10:28:38 +08:00
Xu, Shuo
6726b198fd
Update readme & doc for the vllm upgrade to v0.6.2 (#12399)
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-11-14 10:28:15 +08:00
Jun Wang
4376fdee62
Decouple the openwebui and the ollama in inference-cpp-xpu dockerfile (#12382)
* remove the openwebui in inference-cpp-xpu dockerfile

* update docker_cpp_xpu_quickstart.md

* add sample output in inference-cpp/readme

* remove the openwebui in main readme

* remove the openwebui in main readme
2024-11-12 20:15:23 +08:00
Shaojun Liu
fad15c8ca0
Update fastchat demo script (#12367)
* Update README.md

* Update vllm_docker_quickstart.md
2024-11-08 15:42:17 +08:00
Xin Qiu
7ef7696956
update linux installation doc (#12365)
* update linux doc

* update
2024-11-08 09:44:58 +08:00
Xin Qiu
520af4e9b5
Update install_linux_gpu.md (#12353) 2024-11-07 16:08:01 +08:00
Jinhe
71ea539351
Add troubleshootings for ollama and llama.cpp (#12358)
* add ollama troubleshoot en

* zh ollama troubleshoot

* llamacpp trouble shoot

* llamacpp trouble shoot

* fix

* save gpu memory
2024-11-07 15:49:20 +08:00
Xu, Shuo
ce0c6ae423
Update Readme for FastChat docker demo (#12354)
* update Readme for FastChat docker demo

* update readme

* add 'Serving with FastChat' part in docs

* polish docs

---------

Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-11-07 15:22:42 +08:00
Jin, Qiao
3df6195cb0
Fix application quickstart (#12305)
* fix graphrag quickstart

* fix axolotl quickstart

* fix ragflow quickstart

* fix ragflow quickstart

* fix graphrag toc

* fix comments

* fix comment

* fix comments
2024-10-31 16:57:35 +08:00
joan726
0bbc04b5ec
Add ollama_quickstart.zh-CN.md (#12284)
* Add ollama_quickstart.zh-CN.md

Add ollama_quickstart.zh-CN.md

* Update ollama_quickstart.zh-CN.md

Add Chinese and English switching

* Update ollama_quickstart.md

Add Chinese and English switching

* Update README.zh-CN.md

Modify the related link to ollama_quickstart.zh-CN.md

* Update ollama_quickstart.zh-CN.md

Modified based on comments.

* Update ollama_quickstart.zh-CN.md

Modified based on comments
2024-10-29 15:12:44 +08:00
Yuwen Hu
42a528ded9
Small update to MTL iGPU Linux Prerequisites installation guide (#12281)
* Small update to MTL iGPU Linux Prerequisites installation guide

* Small fix
2024-10-28 14:12:07 +08:00
Yuwen Hu
16074ae2a4
Update Linux prerequisites installation guide for MTL iGPU (#12263)
* Update Linux prerequisites installation guide for MTL iGPU

* Further link update

* Small fixes

* Small fix

* Update based on comments

* Small fix

* Make oneAPI installation a shared section for both MTL iGPU and other GPU

* Small fix

* Small fix

* Clarify description
2024-10-28 09:27:14 +08:00
Yuwen Hu
94c4568988
Update windows installation guide regarding troubleshooting (#12270) 2024-10-25 14:32:38 +08:00
joan726
e0a95eb2d6
Add llama_cpp_quickstart.zh-CN.md (#12221) 2024-10-24 16:08:31 +08:00
Jun Wang
aedc4edfba
[ADD] add open webui + vllm serving (#12246) 2024-10-23 10:13:14 +08:00
Jun Wang
fe3b5cd89b
[Update] mddocs/dockerguide vllm-quick-start awq, gptq online serving document (#12227)
* [FIX] fix the docker start script error

* [ADD] add awq online serving doc

* [ADD] add gptq online serving doc

* [Fix] small fix
2024-10-18 09:46:59 +08:00
Yuwen Hu
a768d71581
Small fix to LNL installation guide (#12192) 2024-10-14 12:03:03 +08:00
Shaojun Liu
49eb20613a
add --blocksize to doc and script (#12187) 2024-10-12 09:17:42 +08:00
Jun Wang
6ffaec66a2
[UPDATE] add prefix caching document into vllm_docker_quickstart.md (#12173)
* [ADD] rewrite new vllm docker quick start

* [ADD] lora adapter doc finished

* [ADD] multi lora adapter test successfully

* [ADD] add ipex-llm quantization doc

* [Merge] rebase main

* [REMOVE] rm tmp file

* [Merge] rebase main

* [ADD] add prefix caching experiment and result

* [REMOVE] rm cpu offloading chapter
2024-10-11 19:12:22 +08:00
Yuwen Hu
ddcdf47539
Support Windows ARL release (#12183)
* Support release for ARL

* Small fix

* Small fix to doc

* Temp for test

* Remove temp commit for test
2024-10-11 18:30:52 +08:00
Yuwen Hu
ac44e98b7d
Update Windows guide regarding LNL support (#12178)
* Update windows guide regarding LNL support

* Update based on comments
2024-10-11 09:20:08 +08:00
Guancheng Fu
0ef7e1d101
fix vllm docs (#12176) 2024-10-10 15:44:36 +08:00
Jun Wang
412cf8e20c
[UPDATE] update mddocs/DockerGuides/vllm_docker_quickstart.md (#12166)
* [ADD] rewrite new vllm docker quick start

* [ADD] lora adapter doc finished

* [ADD] multi lora adapter test successfully

* [ADD] add ipex-llm quantization doc

* [UPDATE] update mddocs vllm_docker_quickstart content

* [REMOVE] rm tmp file

* [UPDATE] tp and pp explanation and readthedoc link change

* [FIX] fix the error description of tp+pp and quantization part

* [FIX] fix the table of verified model

* [UPDATE] add full low bit para list

* [UPDATE] update the load_in_low_bit params to verified dtype
2024-10-09 11:19:32 +08:00
Shaojun Liu
e2ef9e938e
Delete deprecated docs/readthedocs directory (#12164) 2024-10-08 14:48:02 +08:00
Ch1y0q
9b75806d14
Update Windows GPU quickstart regarding demo (#12124)
* use Qwen2-1.5B-Instruct in demo

* update

* add reference link

* update

* update
2024-09-29 18:08:49 +08:00
Ruonan Wang
a767438546
fix typo (#12076)
* fix typo

* fix
2024-09-13 11:44:42 +08:00
Ruonan Wang
3f0b24ae2b
update cpp quickstart (#12075)
* update cpp quickstart

* fix style
2024-09-13 11:35:32 +08:00
Ruonan Wang
48d9092b5a
upgrade OneAPI version for cpp Windows (#12063)
* update version

* update quickstart
2024-09-12 11:12:12 +08:00
Shaojun Liu
e5581e6ded
Select the Appropriate APT Repository Based on CPU Type (#12023) 2024-09-05 17:06:07 +08:00
Yuwen Hu
643458d8f0
Update GraphRAG QuickStart (#11995)
* Update GraphRAG QuickStart

* Further updates

* Small fixes

* Small fix
2024-09-03 15:52:08 +08:00
Jinhe
e895e1b4c5
modification on llamacpp readme after latest ipex-llm update (#11971)
* update on readme after ipex-llm update

* update on readme after ipex-llm update

* rebase & delete redundancy

* revise

* add numbers for troubleshooting
2024-08-30 11:36:45 +08:00
Ch1y0q
77b04efcc5
add notes for SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS (#11936)
* add notes for `SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS`

* also update other quickstart
2024-08-30 09:26:47 +08:00
Jinhe
6fc9340d53
restore ollama webui quickstart (#11955) 2024-08-29 17:53:19 +08:00
Jinhe
ec67ee7177
added accelerate version specification in open webui quickstart (#11948) 2024-08-28 15:02:39 +08:00
Ruonan Wang
460bc96d32
update version of llama.cpp / ollama (#11930)
* update version

* fix version
2024-08-27 21:21:44 +08:00
Ch1y0q
5a8fc1baa2
update troubleshooting for llama.cpp and ollama (#11890)
* update troubleshooting for llama.cpp and ollama

* update

* update
2024-08-26 20:55:23 +08:00
Jinhe
dbd14251dd
Troubleshoot for sycl not found (#11774)
* added troubleshoot for sycl not found problem

* added troubleshoot for sycl not found problem

* revision on troubleshoot

* revision on troubleshoot
2024-08-14 10:26:01 +08:00
Shaojun Liu
fac4c01a6e
Revert to use out-of-tree GPU driver (#11761)
* Revert to use out-of-tree GPU driver since the performance with out-of-tree driver is better than upstream's

* add spaces

* add troubleshooting case

* update Troubleshooting
2024-08-12 13:41:47 +08:00
Yuwen Hu
7e61fa1af7
Revise GPU driver related guide for Windows users (#11740) 2024-08-08 11:26:26 +08:00
Jinhe
d0c89fb715
updated llama.cpp and ollama quickstart (#11732)
* updated llama.cpp and ollama quickstart.md

* added qwen2-1.5B sample output

* revision on quickstart updates

* revision on quickstart updates

* revision on qwen2 readme

* added 2 troubleshoots

* troubleshoot revision
2024-08-08 11:04:01 +08:00
Qiyuan Gong
e32d13d78c
Remove Out of tree Driver from GPU driver installation document (#11728)
GPU drivers are already upstreamed to Kernel 6.2+. Remove the out-of-tree driver (intel-i915-dkms) for 6.2-6.5. https://dgpu-docs.intel.com/driver/kernel-driver-types.html#gpu-driver-support
* Remove intel-i915-dkms intel-fw-gpu (only for kernel 5.19)
2024-08-07 09:38:19 +08:00
Jason Dai
418640e466
Update install_gpu.md 2024-07-27 08:30:10 +08:00
Ruonan Wang
ac97b31664
update cpp quickstart about ONEAPI_DEVICE_SELECTOR (#11630)
* update

* update

* small fix
2024-07-22 13:40:28 +08:00
Yuwen Hu
af6d406178
Add section title for conducting graphrag indexing (#11628) 2024-07-22 10:23:26 +08:00
Ruonan Wang
4da93709b1
update doc/setup to use onednn gemm for cpp (#11598)
* update doc/setup to use onednn gemm

* small fix

* Change TOC of graphrag quickstart back
2024-07-18 13:04:38 +08:00
Yuwen Hu
f06d2f72fb
Add GraphRAG QuickStart (#11582)
* Add framework for graphrag quickstart

* Add quickstart contents for graphrag

* Small fixes and add toc

* Update for graph

* Small fixes
2024-07-16 09:27:54 +08:00
Xin Qiu
91409ffe8c
Add mtl AOT packages in faq.md (#11577)
* Update faq.md

* Update faq.md

* Update faq.md

* Update faq.md

* Update faq.md
2024-07-16 08:46:03 +08:00
binbin Deng
66f6ffe4b2
Update GPU HF-Transformers example structure (#11526) 2024-07-08 17:58:06 +08:00
Shaojun Liu
72b4efaad4
Enhanced XPU Dockerfiles: Optimized Environment Variables and Documentation (#11506)
* Added SYCL_CACHE_PERSISTENT=1 to xpu Dockerfile

* Update the document to add explanations for environment variables.

* update quickstart
2024-07-04 20:18:38 +08:00
Yuwen Hu
1638573f56
Update llama cpp quickstart regarding windows prerequisites to avoid misleading (#11490) 2024-07-02 16:15:47 +08:00
SichengStevenLi
86b81c09d9
Table of Contents in Quickstart Files (#11437)
* fixed a minor grammar mistake

* added table of contents

* added table of contents

* changed table of contents indexing

* added table of contents

* added table of contents, changed grammar

* added table of contents

* added table of contents

* added table of contents

* added table of contents

* added table of contents

* added table of contents, modified chapter numbering

* fixed troubleshooting section redirection path

* added table of contents

* added table of contents, modified section numbering

* added table of contents, modified section numbering

* added table of contents

* added table of contents, changed title size, modified numbering

* added table of contents, changed section title size and capitalization

* added table of contents, modified section numbering

* changed table of contents syntax

* changed table of contents syntax

* changed table of contents syntax

* changed table of contents syntax

* changed table of contents syntax

* changed table of contents syntax

* changed table of contents syntax

* changed table of contents syntax

* changed table of contents syntax

* changed table of contents syntax

* changed table of contents syntax

* changed table of contents syntax

* changed table of contents syntax

* changed table of contents syntax

* changed table of contents syntax

* changed table of contents syntax

* changed table of contents syntax

* changed table of contents syntax

* changed table of contents capitalization issue

* changed table of contents capitalization issue

* changed table of contents location

* changed table of contents

* changed table of contents

* changed section capitalization

* removed comments

* removed comments

* removed comments
2024-06-28 10:41:00 +08:00
Yuwen Hu
a45ceac4e4
Update main readme for missing quickstarts (#11427)
* Update main readme to add missing quickstart

* Update quickstart index page

* Small fixes

* Small fix
2024-06-26 13:51:42 +08:00
Yuwen Hu
ecb9efde65
Workaround for demo preview images loading slowly in mddocs (#11412)
* Small tests for demo video workaround

* Small fix

* Add workaround for langchain-chatchat demo video

* Small fix

* Small fix

* Update for other demo videos in quickstart

* Add missing for text-generation-webui quickstart
2024-06-24 16:17:50 +08:00
Yuwen Hu
ccb3fb357a
Add mddocs index (#11411) 2024-06-24 15:35:18 +08:00
Shengsheng Huang
475b0213d2
README update (API doc and FAQ and minor fixes) (#11397)
* add faq and API doc link in README.md

* add missing quickstart link

* update links in FAQ

* update links in FAQ

* update faq

* update faq text
2024-06-21 19:46:32 +08:00
Yuwen Hu
2004fe1a43
Small fix (#11395) 2024-06-21 17:45:10 +08:00
Yuwen Hu
4cb9a4728e
Add index page for API doc & links update in mddocs (#11393)
* Small fixes

* Add initial api doc index

* Change index.md -> README.md

* Fix on API links
2024-06-21 17:34:34 +08:00
Xu, Shuo
b200e11e21
Add initial python api doc in mddoc (2/2) (#11388)
* add PyTorch-API.md

* small change

* small change

---------

Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-06-21 17:15:05 +08:00
Yuwen Hu
aafd6d55cd
Add initial python api doc in mddoc (1/2) (#11389)
* Add initial python api mddoc

* Fix based on comments
2024-06-21 17:14:42 +08:00
Yuwen Hu
a027121530
Small mddoc fixed based on review (#11391)
* Fix based on review

* Further fix

* Small fix

* Small fix
2024-06-21 17:09:30 +08:00
Yuwen Hu
54f9d07d8f
Further mddocs fixes (#11386)
* Update mddocs for ragflow quickstart

* Fixes for docker guides mddocs

* Further fixes
2024-06-21 13:27:43 +08:00
ivy-lv11
21fc781fce
Add GLM-4V example (#11343)
* add example

* modify

* modify

* add line

* add

* add link and replace with phi-3-vision template

* fix generate options

* fix

* fix

---------

Co-authored-by: jinbridge <2635480475@qq.com>
2024-06-21 12:54:31 +08:00
Yuwen Hu
9b475c07db
Add missing ragflow quickstart in mddocs and update legacy contents (#11385) 2024-06-21 12:28:26 +08:00
Xu, Shuo
fed79f106b
Update mddocs for DockerGuides (#11380)
* transfer files in DockerGuides from rst to md

* add some dividing lines

* adjust the title hierarchy in docker_cpp_xpu_quickstart.md

* restore

* switch to the correct branch

* small change

---------

Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-06-21 12:10:35 +08:00
SichengStevenLi
1a1a97c9e4
Update mddocs for part of Overview (2/2) and Inference (#11377)
* updated link

* converted to md format, need to be reviewed

* converted to md format, need to be reviewed

* converted to md format, need to be reviewed

* converted to md format, need to be reviewed

* converted to md format, need to be reviewed

* converted to md format, need to be reviewed

* converted to md format, need to be reviewed

* converted to md format, need to be reviewed

* converted to md format, need to be reviewed

* converted to md format, need to be reviewed, deleted some leftover texts

* converted to md file type, need to be reviewed

* converted to md file type, need to be reviewed

* testing Github Tags

* testing Github Tags

* added Github Tags

* added Github Tags

* added Github Tags

* Small fix

* Small fix

* Small fix

* Small fix

* Small fix

* Further fix

* Fix index

* Small fix

* Fix

---------

Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>
2024-06-21 12:07:50 +08:00
Zijie Li
33b9a9c4c9
Update part of Overview guide in mddocs (1/2) (#11378)
* Create install.md

* Update install_cpu.md

* Delete original docs/mddocs/Overview/install_cpu.md

* Update install_cpu.md

* Update install_gpu.md

* update llm.md and install.md

* Update docs in KeyFeatures

* Review and fix typos

* Fix on folded NOTE

* Small fix

* Small fix

* Remove empty known_issue.md

* Small fix

* Small fix

* Further fix

* Fixes

* Fix

---------

Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>
2024-06-21 10:45:17 +08:00
Jin Qiao
9a3a21e4fc
Update part of Quickstart guide in mddocs (2/2) (#11376)
* axolotl_quickstart.md

* benchmark_quickstart.md

* bigdl_llm_migration.md

* chatchat_quickstart.md

* continue_quickstart.md

* deepspeed_autotp_fastapi_quickstart.md

* dify_quickstart.md

* fastchat_quickstart.md

* adjust tab style

* fix link

* fix link

* add video preview

* Small fixes

* Small fix

---------

Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>
2024-06-20 19:03:06 +08:00
Yuwen Hu
8c9f877171
Update part of Quickstart guide in mddocs (1/2)
* Quickstart index.rst -> index.md

* Update for Linux Install Quickstart

* Update md docs for Windows Install QuickStart

* Small fix

* Add blank lines

* Update mddocs for llama cpp quickstart

* Update mddocs for llama3 llama-cpp and ollama quickstart

* Update mddocs for ollama quickstart

* Update mddocs for openwebui quickstart

* Update mddocs for privateGPT quickstart

* Update mddocs for vllm quickstart

* Small fix

* Update mddocs for text-generation-webui quickstart

* Update for video links
2024-06-20 18:43:23 +08:00
Yuwen Hu
d9dd1b70bd
Remove example page in mddocs (#11373) 2024-06-20 14:23:43 +08:00
Yuwen Hu
769728c1eb
Add initial md docs (#11371) 2024-06-20 13:47:49 +08:00
Shengsheng Huang
9601fae5d5
fix system note (#11368) 2024-06-20 11:09:53 +08:00
Shengsheng Huang
ed4c439497
small fix (#11366) 2024-06-20 10:38:20 +08:00
Shengsheng Huang
a721c1ae43
minor fix of ragflow_quickstart.md (#11364) 2024-06-19 22:30:33 +08:00
Shengsheng Huang
13727635e8
revise ragflow quickstart (#11363)
* revise ragflow quickstart

* update titles and split the quickstart into sections

* update
2024-06-19 22:24:31 +08:00
Zijie Li
5283df0078
LLM: Add RAGFlow with Ollama Example QuickStart (#11338)
* Create ragflow.md

* Update ragflow.md

* Update ragflow_quickstart

* Update ragflow_quickstart.md

* Upload RAGFlow quickstart without images

* Update ragflow_quickstart.md

* Update ragflow_quickstart.md

* Update ragflow_quickstart.md

* Update ragflow_quickstart.md

* fix typos in readme

* Fix typos in quickstart readme
2024-06-19 20:00:50 +08:00
Jason Dai
271d82a4fc
Update readme (#11357) 2024-06-19 10:05:42 +08:00
Xiangyu Tian
f6cd628cd8
Fix script usage in vLLM CPU Quickstart (#11353) 2024-06-18 16:50:48 +08:00
Guancheng Fu
c9b4cadd81
fix vLLM/docker issues (#11348)
* fix

* fix

* fix
2024-06-18 16:23:53 +08:00
hxsz1997
44f22cba70
add config and default value (#11344)
* add config and default value

* add config in taml

* remove lookahead and max_matching_ngram_size in config

* remove streaming and use_fp16_torch_dtype in test yaml

* update task in readme

* update commit of task
2024-06-18 15:28:57 +08:00
Shengsheng Huang
1f39bb84c7
update readthedocs perf data (#11345) 2024-06-18 13:23:47 +08:00
Qiyuan Gong
de4bb97b4f
Remove accelerate 0.23.0 install command in readme and docker (#11333)
* ipex-llm's accelerate has been upgraded to 0.23.0. Remove the accelerate 0.23.0 install command in README and docker.
2024-06-17 17:52:12 +08:00
Yuwen Hu
9e4d87a696
Langchain-chatchat QuickStart small link fix (#11317) 2024-06-14 14:02:17 +08:00
Yuwen Hu
bfab294f08
Update langchain-chatchat QuickStart to include Core Ultra iGPU Linux Guide (#11302) 2024-06-13 15:09:55 +08:00
Shengsheng Huang
ea372cc472
update demos section (#11298)
* update demos section

* update format
2024-06-13 11:58:19 +08:00
Jin Qiao
f224e98297
Add GLM-4 CPU example (#11223)
* Add GLM-4 example

* add tiktoken dependency

* fix

* fix
2024-06-12 15:30:51 +08:00
Yuwen Hu
8c36b5bdde
Add qwen2 example (#11252)
* Add GPU example for Qwen2

* Update comments in README

* Update README for Qwen2 GPU example

* Add CPU example for Qwen2

Sample Output under README pending

* Update generate.py and README for CPU Qwen2

* Update GPU example for Qwen2

* Small update

* Small fix

* Add Qwen2 table

* Update README for Qwen2 CPU and GPU

Update sample output under README

---------

Co-authored-by: Zijie Li <michael20001122@gmail.com>
2024-06-07 10:29:33 +08:00
Zijie Li
bfa1367149
Add CPU and GPU example for MiniCPM (#11202)
* Change installation address

Change former address: "https://docs.conda.io/en/latest/miniconda.html#" to new address: "https://conda-forge.org/download/" for 63 occurrences under python\llm\example

* Change Prompt

Change "Anaconda Prompt" to "Miniforge Prompt" for 1 occurrence

* Create and update model minicpm

* Update model minicpm

Update model minicpm under GPU/PyTorch-Models

* Update readme and generate.py

change "prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=False)" and delete "pip install transformers==4.37.0"

* Update comments for minicpm GPU

Update comments for generate.py at minicpm GPU

* Add CPU example for MiniCPM

* Update minicpm README for CPU

* Update README for MiniCPM and Llama3

* Update Readme for Llama3 CPU Pytorch

* Update and fix comments for MiniCPM
2024-06-05 18:09:53 +08:00
Xu, Shuo
a27a559650
Add some information in FAQ to help users solve "RuntimeError: could not create a primitive" error on Windows (#11221)
* Add some information to help users solve "could not create a primitive" error on Windows.

* Small update

---------

Co-authored-by: ATMxsp01 <shou.xu@intel.com>
Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>
2024-06-05 17:57:42 +08:00
Guancheng Fu
3ef4aa98d1
Refine vllm_quickstart doc (#11199)
* refine doc

* refine
2024-06-04 18:46:27 +08:00
Xiangyu Tian
ff83fad400
Fix typo in vLLM CPU docker guide (#11188) 2024-06-03 15:55:27 +08:00
Shaojun Liu
401013a630
Remove chatglm_C Module to Eliminate LGPL Dependency (#11178)
* remove chatglm_C.**.pyd to solve ngsolve weak copyright vuln

* fix style check error

* remove chatglm native int4 from langchain
2024-05-31 17:03:11 +08:00
Yuwen Hu
f0aaa130a9
Update miniconda/anaconda -> miniforge in documentation (#11176)
* Update miniconda/anaconda -> miniforge in installation guide

* Update for all Quickstart

* further fix for docs
2024-05-30 17:40:18 +08:00
Jin Qiao
dcbf4d3d0a
Add phi-3-vision example (#11156)
* Add phi-3-vision example (HF-Automodels)

* fix

* fix

* fix

* Add phi-3-vision CPU example (HF-Automodels)

* add in readme

* fix

* fix

* fix

* fix

* use fp8 for gpu example

* remove eval
2024-05-30 10:02:47 +08:00
Wang, Jian4
8e25de1126
LLM: Add codegeex2 example (#11143)
* add codegeex example

* update

* update cpu

* add GPU

* add gpu

* update readme
2024-05-29 10:00:26 +08:00
Ruonan Wang
83bd9cb681
add new version for cpp quickstart and keep an old version (#11151)
* add new version

* meet review
2024-05-28 15:29:34 +08:00
Guancheng Fu
daf7b1cd56
[Docker] Fix image using two cards error (#11144)
* fix all

* done
2024-05-27 16:20:13 +08:00
Jason Dai
34dab3b4ef
Update readme (#11141) 2024-05-27 15:41:02 +08:00
Guancheng Fu
fabc395d0d
add langchain vllm interface (#11121)
* done

* fix

* fix

* add vllm

* add langchain vllm examples

* add docs

* temp
2024-05-24 17:19:27 +08:00
Shaojun Liu
85491907f3
Update GIF link (#11119) 2024-05-24 14:26:18 +08:00
Xiangyu Tian
1291165720
LLM: Add quickstart for vLLM cpu (#11122)
Add quickstart for vLLM cpu.
2024-05-24 10:21:21 +08:00
Xiangyu Tian
b3f6faa038
LLM: Add CPU vLLM entrypoint (#11083)
Add CPU vLLM entrypoint and update CPU vLLM serving example.
2024-05-24 09:16:59 +08:00
Shengsheng Huang
7ed270a4d8
update readme docker section, fix quickstart title, remove chs figure (#11044)
* update readme and fix quickstart title, remove chs figure

* update readme according to comment

* reorganize the docker guide structure
2024-05-24 00:18:20 +08:00
Zhao Changmin
15d906a97b
Update linux igpu run script (#11098)
* update run script
2024-05-22 17:18:07 +08:00
Guancheng Fu
4fd1df9cf6
Add toc for docker quickstarts (#11095)
* fix

* fix
2024-05-22 11:23:22 +08:00
Zhao Changmin
bf0f904e66
Update level_zero on MTL linux (#11085)
* Update level_zero on MTL

---------

Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>
2024-05-22 11:01:56 +08:00
Shaojun Liu
8fdc8fb197
Quickstart: Run/Develop PyTorch in VSCode with Docker on Intel GPU (#11070)
* add quickstart: Run/Develop PyTorch in VSCode with Docker on Intel GPU

* add gif

* update index.rst

* update link

* update GIFs
2024-05-22 09:29:42 +08:00
Guancheng Fu
f654f7e08c
Add serving docker quickstart (#11072)
* add temp file

* add initial docker readme

* temp

* done

* add fastchat service

* fix

* fix

* fix

* fix

* remove stale file
2024-05-21 17:00:58 +08:00
binbin Deng
7170dd9192
Update guide for running qwen with AutoTP (#11065) 2024-05-20 10:53:17 +08:00
Wang, Jian4
a2e1578fd9
Merge tgi_api_server to main (#11036)
* init

* fix style

* speculative can not use benchmark

* add tgi server readme
2024-05-20 09:15:03 +08:00
Yuwen Hu
f60565adc7
Fix toc for vllm serving quickstart (#11068) 2024-05-17 17:12:48 +08:00
Guancheng Fu
dfac168d5f
fix format/typo (#11067) 2024-05-17 16:52:17 +08:00
Guancheng Fu
67db925112
Add vllm quickstart (#10978)
* temp

* add doc

* finish

* done

* fix

* add initial docker readme

* temp

* done fixing vllm_quickstart

* done

* remove not used file

* add

* fix
2024-05-17 16:16:42 +08:00
ZehuaCao
56cb992497
LLM: Modify CPU Installation Command for most examples (#11049)
* init

* refine

* refine

* refine

* modify hf-agent example

* modify all CPU model example

* remove readthedoc modify

* replace powershell with cmd

* fix repo

* fix repo

* update

* remove comment on windows code block

* update

* update

* update

* update

---------

Co-authored-by: xiangyuT <xiangyu.tian@intel.com>
2024-05-17 15:52:20 +08:00
Shaojun Liu
84239d0bd3
Update docker image tags in Docker Quickstart (#11061)
* update docker image tag to latest

* add note

* simplify note

* add link in reStructuredText

* minor fix

* update tag
2024-05-17 11:06:11 +08:00
Xiangyu Tian
d963e95363
LLM: Modify CPU Installation Command for documentation (#11042)
* init

* refine

* refine

* refine

* refine comments
2024-05-17 10:14:00 +08:00
Wang, Jian4
00d4410746
Update cpp docker quickstart (#11040)
* add sample output

* update link

* update

* update header

* update
2024-05-16 14:55:13 +08:00
Ruonan Wang
1d73fc8106
update cpp quickstart (#11031) 2024-05-15 14:33:36 +08:00
Wang, Jian4
86cec80b51
LLM: Add llm inference_cpp_xpu_docker (#10933)
* test_cpp_docker

* update

* update

* update

* update

* add sudo

* update nodejs version

* no need npm

* remove blinker

* new cpp docker

* restore

* add line

* add manually_build

* update and add mtl

* update for workdir llm

* add benchmark part

* update readme

* update 1024-128

* update readme

* update

* fix

* update

* update

* update readme too

* update readme

* no change

* update dir_name

* update readme
2024-05-15 11:10:22 +08:00
Yuwen Hu
c34f85e7d0
[Doc] Simplify installation on Windows for Intel GPU (#11004)
* Simplify GPU installation guide regarding windows Prerequisites

* Update Windows install quickstart on Intel GPU

* Update for llama.cpp quickstart

* Update regarding minimum driver version

* Small fix

* Update based on comments

* Small fix
2024-05-15 09:55:41 +08:00
Shengsheng Huang
0b7e78b592
revise the benchmark part in python inference docker (#11020) 2024-05-14 18:43:41 +08:00
Shengsheng Huang
586a151f9c
update the README and reorganize the docker guides structure (#11016)
* update the README and reorganize the docker guides structure.

* modified docker install guide into overview
2024-05-14 17:56:11 +08:00
Qiyuan Gong
c957ea3831
Add axolotl main support and axolotl Llama-3-8B QLoRA example (#10984)
* Support axolotl main (796a085).
* Add axolotl Llama-3-8B QLoRA example.
* Change `sequence_len` to 256 for alpaca, and revert `lora_r` value.
* Add example to quick_start.
2024-05-14 13:43:59 +08:00
Shaojun Liu
7f8c5b410b
Quickstart: Run PyTorch Inference on Intel GPU using Docker (on Linux or WSL) (#10970)
* add entrypoint.sh

* add quickstart

* remove entrypoint

* update

* Install related library of benchmarking

* update

* print out results

* update docs

* minor update

* update

* update quickstart

* update

* update

* update

* update

* update

* update

* add chat & example section

* add more details

* minor update

* rename quickstart

* update

* minor update

* update

* update config.yaml

* update readme

* use --gpu

* add tips

* minor update

* update
2024-05-14 12:58:31 +08:00
Ruonan Wang
04d5a900e1
update troubleshooting of llama.cpp (#10990)
* update troubleshooting

* small update
2024-05-13 11:18:38 +08:00
Yuwen Hu
9f6358e4c2
Deprecate support for pytorch 2.0 on Linux for ipex-llm >= 2.1.0b20240511 (#10986)
* Remove xpu_2.0 option in setup.py

* Disable xpu_2.0 test in UT and nightly

* Update docs for deprecated pytorch 2.0

* Small doc update
2024-05-11 12:33:35 +08:00
Ruonan Wang
5e0872073e
add version for llama.cpp and ollama (#10982)
* add version for cpp

* meet review
2024-05-11 09:20:31 +08:00
Ruonan Wang
b7f7d05a7e
update llama.cpp usage of llama3 (#10975)
* update llama.cpp usage of llama3

* fix
2024-05-09 16:44:12 +08:00
Shengsheng Huang
e3159c45e4
update private gpt quickstart and a small fix for dify (#10969) 2024-05-09 13:57:45 +08:00
Shengsheng Huang
11df5f9773
revise private GPT quickstart and a few fixes for other quickstart (#10967) 2024-05-08 21:18:20 +08:00
Keyan (Kyrie) Zhang
37820e1d86
Add privateGPT quickstart (#10932)
* Add privateGPT quickstart

* Update privateGPT_quickstart.md

* Update _toc.yml

* Update _toc.yml

---------

Co-authored-by: Shengsheng Huang <shengsheng.huang@intel.com>
2024-05-08 20:48:00 +08:00
Wang, Jian4
f4c615b1ee
Add cohere example (#10954)
* add link first

* add_cpu_example

* add GPU example
2024-05-08 17:19:59 +08:00
Xiangyu Tian
02870dc385
LLM: Refine README of AutoTP-FastAPI example (#10960) 2024-05-08 16:55:23 +08:00
Qiyuan Gong
164e6957af
Refine axolotl quickstart (#10957)
* Add default accelerate config for axolotl quickstart.
* Fix requirement link.
* Upgrade peft to 0.10.0 in requirement.
2024-05-08 09:34:02 +08:00
hxsz1997
245c7348bc
Add codegemma example (#10884)
* add codegemma example in GPU/HF-Transformers-AutoModels/

* add README of codegemma example in GPU/HF-Transformers-AutoModels/

* add codegemma example in GPU/PyTorch-Models/

* add readme of codegemma example in GPU/PyTorch-Models/

* add codegemma example in CPU/HF-Transformers-AutoModels/

* add readme of codegemma example in CPU/HF-Transformers-AutoModels/

* add codegemma example in CPU/PyTorch-Models/

* add readme of codegemma example in CPU/PyTorch-Models/

* fix typos

* fix filename typo

* add codegemma in tables

* add comments of lm_head

* remove comments of use_cache
2024-05-07 13:35:42 +08:00
Shengsheng Huang
d649236321
make images clickable (#10939) 2024-05-06 20:24:15 +08:00
Shengsheng Huang
64938c2ca7
Dify quickstart revision (#10938)
* revise dify quickstart guide

* update quick links and a small typo
2024-05-06 19:59:17 +08:00
Ruonan Wang
3f438495e4
update llama.cpp and ollama quickstart (#10929) 2024-05-06 15:01:06 +08:00
Wang, Jian4
0e0bd309e2
LLM: Enable Speculative on Fastchat (#10909)
* init

* enable streamer

* update

* update

* remove deprecated

* update

* update

* add gpu example
2024-05-06 10:06:20 +08:00
Zhicun
8379f02a74
Add Dify quickstart (#10903)
* add quick start

* modify

* modify

* add

* add

* resize

* add mp4

* add video

* add video

* video

* add

* modify

* add

* modify
2024-05-06 10:01:34 +08:00
Shengsheng Huang
c78a8e3677
update quickstart (#10923) 2024-04-30 18:19:31 +08:00
Shengsheng Huang
282d676561
update continue quickstart (#10922) 2024-04-30 17:51:21 +08:00
Yuwen Hu
71f51ce589
Initial Update for Continue Quickstart with Ollama backend (#10918)
* Initial continue quickstart with ollama backend updates

* Small fix

* Small fix
2024-04-30 15:10:30 +08:00
Jin Qiao
1f876fd837
Add example for phi-3 (#10881)
* Add example for phi-3

* add in readme and index

* fix

* fix

* fix

* fix indent

* fix
2024-04-29 16:43:55 +08:00
Shaojun Liu
d058f2b403
Fix apt install oneapi scripts (#10891)
* Fix apt install oneapi scripts

* add intel-oneapi-mkl-devel

* add apt pkgs
2024-04-26 16:39:37 +08:00