Shengsheng Huang
1f39bb84c7
update readthedocs perf data ( #11345 )
2024-06-18 13:23:47 +08:00
Qiyuan Gong
de4bb97b4f
Remove accelerate 0.23.0 install command in readme and docker ( #11333 )
...
* ipex-llm's accelerate has been upgraded to 0.23.0. Remove the accelerate 0.23.0 install command from README and docker.
2024-06-17 17:52:12 +08:00
Yuwen Hu
9e4d87a696
Langchain-chatchat QuickStart small link fix ( #11317 )
2024-06-14 14:02:17 +08:00
Yuwen Hu
bfab294f08
Update langchain-chatchat QuickStart to include Core Ultra iGPU Linux Guide ( #11302 )
2024-06-13 15:09:55 +08:00
Shengsheng Huang
ea372cc472
update demos section ( #11298 )
...
* update demos section
* update format
2024-06-13 11:58:19 +08:00
Jin Qiao
f224e98297
Add GLM-4 CPU example ( #11223 )
...
* Add GLM-4 example
* add tiktoken dependency
* fix
* fix
2024-06-12 15:30:51 +08:00
Yuwen Hu
8c36b5bdde
Add qwen2 example ( #11252 )
...
* Add GPU example for Qwen2
* Update comments in README
* Update README for Qwen2 GPU example
* Add CPU example for Qwen2
Sample Output under README pending
* Update generate.py and README for CPU Qwen2
* Update GPU example for Qwen2
* Small update
* Small fix
* Add Qwen2 table
* Update README for Qwen2 CPU and GPU
Update sample output under README
---------
Co-authored-by: Zijie Li <michael20001122@gmail.com>
2024-06-07 10:29:33 +08:00
Zijie Li
bfa1367149
Add CPU and GPU example for MiniCPM ( #11202 )
...
* Change installation address
Change former address "https://docs.conda.io/en/latest/miniconda.html#" to new address "https://conda-forge.org/download/" for 63 occurrences under python\llm\example
* Change Prompt
Change "Anaconda Prompt" to "Miniforge Prompt" for 1 occurrence
* Create and update model minicpm
* Update model minicpm
Update model minicpm under GPU/PyTorch-Models
* Update readme and generate.py
change "prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=False)" and delete "pip install transformers==4.37.0"
* Update comments for minicpm GPU
Update comments for generate.py at minicpm GPU
* Add CPU example for MiniCPM
* Update minicpm README for CPU
* Update README for MiniCPM and Llama3
* Update Readme for Llama3 CPU Pytorch
* Update and fix comments for MiniCPM
2024-06-05 18:09:53 +08:00
Xu, Shuo
a27a559650
Add some information in FAQ to help users solve "RuntimeError: could not create a primitive" error on Windows ( #11221 )
...
* Add some information to help users solve the "could not create a primitive" error on Windows.
* Small update
---------
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>
2024-06-05 17:57:42 +08:00
Guancheng Fu
3ef4aa98d1
Refine vllm_quickstart doc ( #11199 )
...
* refine doc
* refine
2024-06-04 18:46:27 +08:00
Xiangyu Tian
ff83fad400
Fix typo in vLLM CPU docker guide ( #11188 )
2024-06-03 15:55:27 +08:00
Shaojun Liu
401013a630
Remove chatglm_C Module to Eliminate LGPL Dependency ( #11178 )
...
* remove chatglm_C.**.pyd to solve ngsolve weak copyright vuln
* fix style check error
* remove chatglm native int4 from langchain
2024-05-31 17:03:11 +08:00
Yuwen Hu
f0aaa130a9
Update miniconda/anaconda -> miniforge in documentation ( #11176 )
...
* Update miniconda/anaconda -> miniforge in installation guide
* Update for all Quickstart
* further fix for docs
2024-05-30 17:40:18 +08:00
Jin Qiao
dcbf4d3d0a
Add phi-3-vision example ( #11156 )
...
* Add phi-3-vision example (HF-Automodels)
* fix
* fix
* fix
* Add phi-3-vision CPU example (HF-Automodels)
* add in readme
* fix
* fix
* fix
* fix
* use fp8 for gpu example
* remove eval
2024-05-30 10:02:47 +08:00
Wang, Jian4
8e25de1126
LLM: Add codegeex2 example ( #11143 )
...
* add codegeex example
* update
* update cpu
* add GPU
* add gpu
* update readme
2024-05-29 10:00:26 +08:00
Ruonan Wang
83bd9cb681
add new version for cpp quickstart and keep an old version ( #11151 )
...
* add new version
* meet review
2024-05-28 15:29:34 +08:00
Guancheng Fu
daf7b1cd56
[Docker] Fix image using two cards error ( #11144 )
...
* fix all
* done
2024-05-27 16:20:13 +08:00
Jason Dai
34dab3b4ef
Update readme ( #11141 )
2024-05-27 15:41:02 +08:00
Guancheng Fu
fabc395d0d
add langchain vllm interface ( #11121 )
...
* done
* fix
* fix
* add vllm
* add langchain vllm exampels
* add docs
* temp
2024-05-24 17:19:27 +08:00
Shaojun Liu
85491907f3
Update GIF link ( #11119 )
2024-05-24 14:26:18 +08:00
Xiangyu Tian
1291165720
LLM: Add quickstart for vLLM cpu ( #11122 )
...
Add quickstart for vLLM cpu.
2024-05-24 10:21:21 +08:00
Xiangyu Tian
b3f6faa038
LLM: Add CPU vLLM entrypoint ( #11083 )
...
Add CPU vLLM entrypoint and update CPU vLLM serving example.
2024-05-24 09:16:59 +08:00
Shengsheng Huang
7ed270a4d8
update readme docker section, fix quickstart title, remove chs figure ( #11044 )
...
* update readme and fix quickstart title, remove chs figure
* update readme according to comment
* reorganize the docker guide structure
2024-05-24 00:18:20 +08:00
Zhao Changmin
15d906a97b
Update linux igpu run script ( #11098 )
...
* update run script
2024-05-22 17:18:07 +08:00
Guancheng Fu
4fd1df9cf6
Add toc for docker quickstarts ( #11095 )
...
* fix
* fix
2024-05-22 11:23:22 +08:00
Zhao Changmin
bf0f904e66
Update level_zero on MTL linux ( #11085 )
...
* Update level_zero on MTL
---------
Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>
2024-05-22 11:01:56 +08:00
Shaojun Liu
8fdc8fb197
Quickstart: Run/Develop PyTorch in VSCode with Docker on Intel GPU ( #11070 )
...
* add quickstart: Run/Develop PyTorch in VSCode with Docker on Intel GPU
* add gif
* update index.rst
* update link
* update GIFs
2024-05-22 09:29:42 +08:00
Guancheng Fu
f654f7e08c
Add serving docker quickstart ( #11072 )
...
* add temp file
* add initial docker readme
* temp
* done
* add fastchat service
* fix
* fix
* fix
* fix
* remove stale file
2024-05-21 17:00:58 +08:00
binbin Deng
7170dd9192
Update guide for running qwen with AutoTP ( #11065 )
2024-05-20 10:53:17 +08:00
Wang, Jian4
a2e1578fd9
Merge tgi_api_server to main ( #11036 )
...
* init
* fix style
* speculative can not use benchmark
* add tgi server readme
2024-05-20 09:15:03 +08:00
Yuwen Hu
f60565adc7
Fix toc for vllm serving quickstart ( #11068 )
2024-05-17 17:12:48 +08:00
Guancheng Fu
dfac168d5f
fix format/typo ( #11067 )
2024-05-17 16:52:17 +08:00
Guancheng Fu
67db925112
Add vllm quickstart ( #10978 )
...
* temp
* add doc
* finish
* done
* fix
* add initial docker readme
* temp
* done fixing vllm_quickstart
* done
* remove not used file
* add
* fix
2024-05-17 16:16:42 +08:00
ZehuaCao
56cb992497
LLM: Modify CPU Installation Command for most examples ( #11049 )
...
* init
* refine
* refine
* refine
* modify hf-agent example
* modify all CPU model example
* remove readthedoc modify
* replace powershell with cmd
* fix repo
* fix repo
* update
* remove comment on windows code block
* update
* update
* update
* update
---------
Co-authored-by: xiangyuT <xiangyu.tian@intel.com>
2024-05-17 15:52:20 +08:00
Shaojun Liu
84239d0bd3
Update docker image tags in Docker Quickstart ( #11061 )
...
* update docker image tag to latest
* add note
* simplify note
* add link in reStructuredText
* minor fix
* update tag
2024-05-17 11:06:11 +08:00
Xiangyu Tian
d963e95363
LLM: Modify CPU Installation Command for documentation ( #11042 )
...
* init
* refine
* refine
* refine
* refine comments
2024-05-17 10:14:00 +08:00
Wang, Jian4
00d4410746
Update cpp docker quickstart ( #11040 )
...
* add sample output
* update link
* update
* update header
* update
2024-05-16 14:55:13 +08:00
Ruonan Wang
1d73fc8106
update cpp quickstart ( #11031 )
2024-05-15 14:33:36 +08:00
Wang, Jian4
86cec80b51
LLM: Add llm inference_cpp_xpu_docker ( #10933 )
...
* test_cpp_docker
* update
* update
* update
* update
* add sudo
* update nodejs version
* no need npm
* remove blinker
* new cpp docker
* restore
* add line
* add manually_build
* update and add mtl
* update for workdir llm
* add benchmark part
* update readme
* update 1024-128
* update readme
* update
* fix
* update
* update
* update readme too
* update readme
* no change
* update dir_name
* update readme
2024-05-15 11:10:22 +08:00
Yuwen Hu
c34f85e7d0
[Doc] Simplify installation on Windows for Intel GPU ( #11004 )
...
* Simplify GPU installation guide regarding windows Prerequisites
* Update Windows install quickstart on Intel GPU
* Update for llama.cpp quickstart
* Update regarding minimum driver version
* Small fix
* Update based on comments
* Small fix
2024-05-15 09:55:41 +08:00
Shengsheng Huang
0b7e78b592
revise the benchmark part in python inference docker ( #11020 )
2024-05-14 18:43:41 +08:00
Shengsheng Huang
586a151f9c
update the README and reorganize the docker guides structure. ( #11016 )
...
* update the README and reorganize the docker guides structure.
* modified docker install guide into overview
2024-05-14 17:56:11 +08:00
Qiyuan Gong
c957ea3831
Add axolotl main support and axolotl Llama-3-8B QLoRA example ( #10984 )
...
* Support axolotl main (796a085).
* Add axolotl Llama-3-8B QLoRA example.
* Change `sequence_len` to 256 for alpaca, and revert `lora_r` value.
* Add example to quick_start.
2024-05-14 13:43:59 +08:00
Shaojun Liu
7f8c5b410b
Quickstart: Run PyTorch Inference on Intel GPU using Docker (on Linux or WSL) ( #10970 )
...
* add entrypoint.sh
* add quickstart
* remove entrypoint
* update
* Install related library of benchmarking
* update
* print out results
* update docs
* minor update
* update
* update quickstart
* update
* update
* update
* update
* update
* update
* add chat & example section
* add more details
* minor update
* rename quickstart
* update
* minor update
* update
* update config.yaml
* update readme
* use --gpu
* add tips
* minor update
* update
2024-05-14 12:58:31 +08:00
Ruonan Wang
04d5a900e1
update troubleshooting of llama.cpp ( #10990 )
...
* update troubleshooting
* small update
2024-05-13 11:18:38 +08:00
Yuwen Hu
9f6358e4c2
Deprecate support for pytorch 2.0 on Linux for ipex-llm >= 2.1.0b20240511 ( #10986 )
...
* Remove xpu_2.0 option in setup.py
* Disable xpu_2.0 test in UT and nightly
* Update docs for deprecated pytorch 2.0
* Small doc update
2024-05-11 12:33:35 +08:00
Ruonan Wang
5e0872073e
add version for llama.cpp and ollama ( #10982 )
...
* add version for cpp
* meet review
2024-05-11 09:20:31 +08:00
Ruonan Wang
b7f7d05a7e
update llama.cpp usage of llama3 ( #10975 )
...
* update llama.cpp usage of llama3
* fix
2024-05-09 16:44:12 +08:00
Shengsheng Huang
e3159c45e4
update private gpt quickstart and a small fix for dify ( #10969 )
2024-05-09 13:57:45 +08:00
Shengsheng Huang
11df5f9773
revise private GPT quickstart and a few fixes for other quickstart ( #10967 )
2024-05-08 21:18:20 +08:00