binbin Deng
367de141f2
Fix mixtral-8x7b with transformers=4.37.0 ( #11132 )
2024-05-27 09:50:54 +08:00
Jean Yu
ab476c7fe2
Eagle Speculative Sampling examples ( #11104 )
* Eagle Speculative Sampling examples
* rm multi-gpu and ray content
* updated README to include Arc A770
2024-05-24 11:13:43 -07:00
Guancheng Fu
fabc395d0d
add langchain vllm interface ( #11121 )
* done
* fix
* add vllm
* add langchain vllm examples
* add docs
* temp
2024-05-24 17:19:27 +08:00
ZehuaCao
63e95698eb
[LLM]Reopen autotp generate_stream ( #11120 )
* reopen autotp generate_stream
* fix style error
* update
2024-05-24 17:16:14 +08:00
Yishuo Wang
1dc680341b
fix phi-3-vision import ( #11129 )
2024-05-24 15:57:15 +08:00
Guancheng Fu
7f772c5a4f
Add half precision for fastchat models ( #11130 )
2024-05-24 15:41:14 +08:00
Zhao Changmin
65f4212f89
Fix qwen 14b running into register attention fwd ( #11128 )
* fix qwen 14b
2024-05-24 14:45:07 +08:00
Shaojun Liu
373f9e6c79
add ipex-llm-init.bat for Windows ( #11082 )
* add ipex-llm-init.bat for Windows
* update setup.py
2024-05-24 14:26:25 +08:00
Shaojun Liu
85491907f3
Update GIF link ( #11119 )
2024-05-24 14:26:18 +08:00
Qiyuan Gong
120a0035ac
Fix type mismatch in eval for Baichuan2 QLoRA example ( #11117 )
* During the evaluation stage, Baichuan2 raises a type mismatch error when training with bfloat16. Fix this issue by modifying modeling_baichuan.py, and add a doc about how to modify this file.
2024-05-24 14:14:30 +08:00
Qiyuan Gong
21a1a973c1
Remove axolotl and python3-blinker ( #11127 )
* Remove axolotl from image to reduce image size.
* Remove python3-blinker to avoid axolotl lib conflict.
2024-05-24 13:54:19 +08:00
Yishuo Wang
1db9d9a63b
optimize internlm2 xcomposer again ( #11124 )
2024-05-24 13:44:52 +08:00
Yishuo Wang
9372ce87ce
fix internlm xcomposer2 fp16 ( #11123 )
2024-05-24 11:03:31 +08:00
Cengguang Zhang
011b9faa5c
LLM: unify baichuan2-13b alibi mask dtype with model dtype. ( #11107 )
* LLM: unify alibi mask dtype.
* fix comments.
2024-05-24 10:27:53 +08:00
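The dtype unification above amounts to casting the alibi mask to whatever dtype the model's weights use, so the attention math runs in a single dtype. A minimal sketch with hypothetical tensor names (not the actual `modeling_baichuan` code):

```python
import torch

# Hypothetical stand-ins: a half-precision model dtype and a float32 alibi mask
model_dtype = torch.float16
alibi_mask = torch.zeros(1, 8, 8, dtype=torch.float32)

# Unify the mask dtype with the model dtype before it enters attention
alibi_mask = alibi_mask.to(model_dtype)
```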
Jiao Wang
0a06a6e1d4
Update tests for transformers 4.36 ( #10858 )
* update unit test
* update
* fix gpu attention test
* update example test
* replace replit code
* set safe_serialization false
* perf test
* delete
* revert
2024-05-24 10:26:38 +08:00
Xiangyu Tian
1291165720
LLM: Add quickstart for vLLM cpu ( #11122 )
Add quickstart for vLLM cpu.
2024-05-24 10:21:21 +08:00
Wang, Jian4
1443b802cc
Docker: Fix building cpp_docker and remove unimportant dependencies ( #11114 )
* test build
* update
2024-05-24 09:49:44 +08:00
Xiangyu Tian
b3f6faa038
LLM: Add CPU vLLM entrypoint ( #11083 )
Add CPU vLLM entrypoint and update CPU vLLM serving example.
2024-05-24 09:16:59 +08:00
Shengsheng Huang
7ed270a4d8
update readme docker section, fix quickstart title, remove chs figure ( #11044 )
* update readme and fix quickstart title, remove chs figure
* update readme according to comment
* reorganize the docker guide structure
2024-05-24 00:18:20 +08:00
Yishuo Wang
797dbc48b8
fix phi-2 and phi-3 convert ( #11116 )
2024-05-23 17:37:37 +08:00
Yishuo Wang
37b98a531f
support running internlm xcomposer2 on gpu and add sdp optimization ( #11115 )
2024-05-23 17:26:24 +08:00
Zhao Changmin
c5e8b90c8d
Add Qwen register attention implementation ( #11110 )
* qwen_register
2024-05-23 17:17:45 +08:00
Yishuo Wang
0e53f20edb
support running internlm-xcomposer2 on cpu ( #11111 )
2024-05-23 16:36:09 +08:00
Shaojun Liu
e0f401d97d
FIX: APT Repository not working (signatures invalid) ( #11112 )
* chmod 644 gpg key
2024-05-23 16:15:45 +08:00
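The APT fix above works because apt verifies repository signatures as an unprivileged user, which must be able to read the repository's GPG key; a key installed without world-read permission makes verification fail with "signatures invalid". A minimal sketch (the temp file below stands in for the real key path, which the log does not show):

```shell
# Stand-in for the repository's GPG key file (actual path is an assumption)
KEY=$(mktemp)

# 644 makes the key world-readable so apt's unprivileged user can verify signatures
chmod 644 "$KEY"

# Show the resulting permission bits (GNU stat)
stat -c '%a' "$KEY"
```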
Yuwen Hu
d36b41d59e
Add setuptools limitation for ipex-llm[xpu] ( #11102 )
* Add setuptools limitation for ipex-llm[xpu]
* llamaindex option update
2024-05-22 18:20:30 +08:00
Yishuo Wang
cd4dff09ee
support phi-3 vision ( #11101 )
2024-05-22 17:43:50 +08:00
Zhao Changmin
15d906a97b
Update linux igpu run script ( #11098 )
* update run script
2024-05-22 17:18:07 +08:00
Kai Huang
f63172ef63
Align ppl with llama.cpp ( #11055 )
* update script
* remove
* add header
* update readme
2024-05-22 16:43:11 +08:00
Qiyuan Gong
f6c9ffe4dc
Add WANDB_MODE and HF_HUB_OFFLINE to XPU finetune README ( #11097 )
* Add WANDB_MODE=offline to avoid multi-GPUs finetune errors.
* Add HF_HUB_OFFLINE=1 to avoid Hugging Face related errors.
2024-05-22 15:20:53 +08:00
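The two variables from the commit above are exported before launching the finetune script: `WANDB_MODE=offline` keeps Weights & Biases from syncing over the network, and `HF_HUB_OFFLINE=1` makes the Hugging Face hub client use only locally cached files. A minimal sketch:

```shell
# Disable wandb network sync (avoids the multi-GPU finetune errors noted above)
export WANDB_MODE=offline

# Force the Hugging Face hub client to use only the local cache
export HF_HUB_OFFLINE=1
```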
Yuwen Hu
1c5ed9b6cf
Fix arc ut ( #11096 )
2024-05-22 14:13:13 +08:00
Guancheng Fu
4fd1df9cf6
Add toc for docker quickstarts ( #11095 )
* fix
2024-05-22 11:23:22 +08:00
Shaojun Liu
584439e498
update homepage url for ipex-llm ( #11094 )
* update homepage url
* Update python version to 3.11
* Update long description
2024-05-22 11:10:44 +08:00
Zhao Changmin
bf0f904e66
Update level_zero on MTL linux ( #11085 )
* Update level_zero on MTL
---------
Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>
2024-05-22 11:01:56 +08:00
Shaojun Liu
8fdc8fb197
Quickstart: Run/Develop PyTorch in VSCode with Docker on Intel GPU ( #11070 )
* add quickstart: Run/Develop PyTorch in VSCode with Docker on Intel GPU
* add gif
* update index.rst
* update link
* update GIFs
2024-05-22 09:29:42 +08:00
Xin Qiu
71bcd18f44
fix qwen vl ( #11090 )
2024-05-21 18:40:29 +08:00
Guancheng Fu
f654f7e08c
Add serving docker quickstart ( #11072 )
* add temp file
* add initial docker readme
* temp
* done
* add fastchat service
* fix
* remove stale file
2024-05-21 17:00:58 +08:00
Yishuo Wang
f00625f9a4
refactor qwen2 ( #11087 )
2024-05-21 16:53:42 +08:00
Qiyuan Gong
492ed3fd41
Add verified models to GPU finetune README ( #11088 )
* Add verified models to GPU finetune README
2024-05-21 15:49:15 +08:00
Qiyuan Gong
1210491748
ChatGLM3, Baichuan2 and Qwen1.5 QLoRA example ( #11078 )
* Add chatglm3, qwen15-7b and baichuan-7b QLoRA alpaca example
* Remove unnecessary tokenization setting.
2024-05-21 15:29:43 +08:00
binbin Deng
ecb16dcf14
Add deepspeed autotp support for xpu docker ( #11077 )
2024-05-21 14:49:54 +08:00
ZehuaCao
842d6dfc2d
Further Modify CPU example ( #11081 )
* modify CPU example
* update
2024-05-21 13:55:47 +08:00
Yishuo Wang
d830a63bb7
refactor qwen ( #11074 )
2024-05-20 18:08:37 +08:00
Wang, Jian4
74950a152a
Fix tgi_api_server error file name ( #11075 )
2024-05-20 16:48:40 +08:00
Yishuo Wang
4e97047d70
fix baichuan2 13b fp16 ( #11071 )
2024-05-20 11:21:20 +08:00
binbin Deng
7170dd9192
Update guide for running qwen with AutoTP ( #11065 )
2024-05-20 10:53:17 +08:00
Wang, Jian4
a2e1578fd9
Merge tgi_api_server to main ( #11036 )
* init
* fix style
* speculative cannot use benchmark
* add tgi server readme
2024-05-20 09:15:03 +08:00
Yuwen Hu
f60565adc7
Fix toc for vllm serving quickstart ( #11068 )
2024-05-17 17:12:48 +08:00
Guancheng Fu
dfac168d5f
fix format/typo ( #11067 )
2024-05-17 16:52:17 +08:00
Yishuo Wang
31ce3e0c13
refactor baichuan2-13b ( #11064 )
2024-05-17 16:25:30 +08:00
Guancheng Fu
67db925112
Add vllm quickstart ( #10978 )
* temp
* add doc
* finish
* done
* fix
* add initial docker readme
* temp
* done fixing vllm_quickstart
* done
* remove unused file
* add
* fix
2024-05-17 16:16:42 +08:00