hxsz1997
245c7348bc
Add codegemma example ( #10884 )
* add codegemma example in GPU/HF-Transformers-AutoModels/
* add README of codegemma example in GPU/HF-Transformers-AutoModels/
* add codegemma example in GPU/PyTorch-Models/
* add readme of codegemma example in GPU/PyTorch-Models/
* add codegemma example in CPU/HF-Transformers-AutoModels/
* add readme of codegemma example in CPU/HF-Transformers-AutoModels/
* add codegemma example in CPU/PyTorch-Models/
* add readme of codegemma example in CPU/PyTorch-Models/
* fix typos
* fix filename typo
* add codegemma in tables
* add comments for lm_head
* remove comments about use_cache
2024-05-07 13:35:42 +08:00
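For context on what these example scripts contain: a minimal sketch of the loading pattern the ipex-llm model examples follow, using an INT4 low-bit load. The repo id and prompt below are illustrative assumptions, not taken from the commit.

```python
# Hedged sketch of the usual ipex-llm example loading pattern.
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "google/codegemma-7b-it"  # assumed Hugging Face repo id

# load_in_4bit=True applies ipex-llm's low-bit weight optimization
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

inputs = tokenizer("Write a function that reverses a string.",
                   return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```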
Xiangyu Tian
13a44cdacb
LLM: Refine Deepspeed-AutoTP-FastAPI example ( #10916 )
2024-05-07 09:37:31 +08:00
Wang, Jian4
1de878bee1
LLM: Fix speculative llama3 long input error ( #10934 )
2024-05-07 09:25:20 +08:00
Guancheng Fu
2c64754eb0
Add vLLM to ipex-llm serving image ( #10807 )
* add vllm
* done
* doc work
* fix done
* temp
* add docs
* format
* add start-fastchat-service.sh
* fix
2024-04-29 17:25:42 +08:00
Jin Qiao
1f876fd837
Add example for phi-3 ( #10881 )
* Add example for phi-3
* add in readme and index
* fix
* fix
* fix
* fix indent
* fix
2024-04-29 16:43:55 +08:00
Xiangyu Tian
3d4950b0f0
LLM: Enable batch generation (world_size > 1) in Deepspeed-AutoTP-FastAPI example ( #10876 )
Enable batch generation (world_size > 1) in the Deepspeed-AutoTP-FastAPI example.
2024-04-26 13:24:28 +08:00
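With AutoTP, every rank must execute identical forward passes, so batching across requests means all ranks have to agree on the batch. A hedged sketch of that idea follows; the function and variable names are assumptions, not the example's actual code.

```python
import torch.distributed as dist

def batched_generate(model, tokenizer, prompts_on_rank0):
    # Rank 0 owns the request queue; broadcast the batch so every rank
    # runs generate() on exactly the same inputs.
    obj = [prompts_on_rank0 if dist.get_rank() == 0 else None]
    dist.broadcast_object_list(obj, src=0)
    # padding=True assumes tokenizer.pad_token has been set
    inputs = tokenizer(obj[0], return_tensors="pt", padding=True)
    return model.generate(**inputs, max_new_tokens=128)
```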
Yang Wang
1ce8d7bcd9
Support the desc_act feature in GPTQ model ( #10851 )
* support act_order
* update versions
* fix style
* fix bug
* clean up
2024-04-24 10:17:13 -07:00
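desc_act (also called act_order) quantizes GPTQ weight columns in order of decreasing activation magnitude, which improves accuracy but complicates the dequantization kernels. A sketch using the Hugging Face GPTQConfig; how ipex-llm consumes such checkpoints may differ, and the checkpoint id is an assumption.

```python
from transformers import AutoModelForCausalLM, GPTQConfig

# Loading an already-quantized GPTQ checkpoint whose weights were
# packed with desc_act/act_order enabled.
quant_config = GPTQConfig(bits=4, desc_act=True)
model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7B-Chat-GPTQ",  # assumed GPTQ checkpoint
    quantization_config=quant_config,
)
```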
binbin Deng
fabf54e052
LLM: make pipeline parallel inference example more general ( #10786 )
2024-04-24 09:28:52 +08:00
hxsz1997
328b1a1de9
Fix the not-stopping issue in llama3 examples ( #10860 )
* fix not-stopping issue in GPU/HF-Transformers-AutoModels
* fix not-stopping issue in GPU/PyTorch-Models/Model/llama3
* fix not-stopping issue in CPU/HF-Transformers-AutoModels/Model/llama3
* fix not-stopping issue in CPU/PyTorch-Models/Model/llama3
* update the output in readme
* update format
* add reference
* update prompt format
* update output format in readme
* update example output in readme
2024-04-23 19:10:09 +08:00
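The underlying issue is that Llama 3 ends each turn with <|eot_id|> rather than the default eos token, so generation keeps going. A common fix, sketched here under the assumption of the usual ipex-llm loading pattern (the examples' exact code may differ):

```python
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)

inputs = tokenizer("What is AI?", return_tensors="pt")
# Stop on either the regular eos token or Llama 3's end-of-turn token.
terminators = [tokenizer.eos_token_id,
               tokenizer.convert_tokens_to_ids("<|eot_id|>")]
output = model.generate(**inputs, max_new_tokens=128,
                        eos_token_id=terminators)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```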
ZehuaCao
36eb8b2e96
Add llama3 speculative example ( #10856 )
* Initial llama3 speculative example
* update README
* update README
* update README
2024-04-23 17:03:54 +08:00
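Speculative decoding has a small draft model propose several tokens that the target model verifies in one forward pass. The ipex-llm example wires its own mechanism; as a portable illustration only, Hugging Face assisted generation does the same thing. The model ids below are placeholders, and the draft must share the target's tokenizer.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

target_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed target model
draft_id = "your-small-draft-model"                # placeholder draft model

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(target_id)
draft = AutoModelForCausalLM.from_pretrained(draft_id)

inputs = tokenizer("What is AI?", return_tensors="pt")
# assistant_model enables assisted (speculative) generation
output = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
```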
ZehuaCao
92ea54b512
Fix speculative decoding bug ( #10855 )
2024-04-23 14:28:31 +08:00
Wang, Jian4
18c032652d
LLM: Add mixtral speculative CPU example ( #10830 )
* init mixtral sp example
* use different prompt_format
* update output
* update
2024-04-23 10:05:51 +08:00
Qiyuan Gong
5494aa55f6
Downgrade datasets in axolotl example ( #10849 )
* Downgrade datasets to 2.15.0 to address axolotl prepare issue https://github.com/OpenAccess-AI-Collective/axolotl/issues/1544
Thanks to @kwaa for providing the solution in https://github.com/intel-analytics/ipex-llm/issues/10821#issuecomment-2068861571
2024-04-23 09:41:58 +08:00
Guancheng Fu
47bd5f504c
[vLLM] Remove vllm-v1, refactor v2 ( #10842 )
* remove vllm-v1
* fix format
2024-04-22 17:51:32 +08:00
Wang, Jian4
23c6a52fb0
LLM: Fix ipex torchscript=True error ( #10832 )
* remove
* update
* remove torchscript
2024-04-22 15:53:09 +08:00
Heyang Sun
fc33aa3721
fix missing import ( #10839 )
2024-04-22 14:34:52 +08:00
Guancheng Fu
ae3b577537
Update README.md ( #10833 )
2024-04-22 11:07:10 +08:00
Wang, Jian4
5f95054f97
LLM: Add qwen moe example libs md ( #10828 )
2024-04-22 10:03:19 +08:00
Guancheng Fu
61c67af386
Fix vLLM-v2 install instructions ( #10822 )
2024-04-22 09:02:48 +08:00
Yang Wang
8153c3008e
Initial llama3 example ( #10799 )
* Add initial Hugging Face GPU example
* Small fix
* Add llama3 gpu pytorch model example
* Add llama 3 hf transformers CPU example
* Add llama 3 pytorch model CPU example
* Fixes
* Small fix
* Small fixes
* Small fix
* Small fix
* Add links
* update repo id
* change prompt tuning url
* remove system header if there is no system prompt
---------
Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>
Co-authored-by: Yuwen Hu <54161268+Oscilloscope98@users.noreply.github.com>
2024-04-18 11:01:33 -07:00
Qiyuan Gong
e90e31719f
Axolotl LoRA example ( #10789 )
* Add axolotl lora example
* Modify readme
* Add comments in yml
2024-04-18 16:38:32 +08:00
Guancheng Fu
cbe7b5753f
Add vLLM[xpu] related code ( #10779 )
* Add ipex-llm side change
* add runnable offline_inference
* refactor to call vllm2
* Verified async server
* add new v2 example
* add README
* fix
* change dir
* refactor readme.md
* add experimental
* fix
2024-04-18 15:29:20 +08:00
Ziteng Zhang
ff040c8f01
LISA Finetuning Example ( #10743 )
* enabling xetla only supports qtype=SYM_INT4 or FP8E5
* LISA Finetuning Example on gpu
* update readme
* add license
* Explain parameters of LISA & move backend code to src dir
* fix style
* fix style
* update readme
* support chatglm
* fix style
* fix style
* update readme
* fix
2024-04-18 13:48:10 +08:00
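For context on what LISA does: it keeps the embeddings and lm_head trainable while freezing all transformer layers, then periodically re-samples a small random subset of layers to unfreeze. A minimal sketch of that sampling step; the names are illustrative, not the example's actual code.

```python
import random

def lisa_resample_layers(layers, n_active=2):
    """Freeze every transformer layer, then unfreeze a random subset."""
    for layer in layers:
        for p in layer.parameters():
            p.requires_grad = False
    for layer in random.sample(layers, n_active):
        for p in layer.parameters():
            p.requires_grad = True

# e.g. call every `interval` optimizer steps:
# lisa_resample_layers(list(model.model.layers), n_active=2)
```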
Heyang Sun
581ebf6104
GaLore Finetuning Example ( #10722 )
* GaLore Finetuning Example
* Update README.md
* Update README.md
* change data to HuggingFaceH4/helpful_instructions
* Update README.md
* Update README.md
* shrink train size and delete cache before starting training to save memory
* Update README.md
* Update galore_finetuning.py
* change model to llama2 3b
* Update README.md
2024-04-18 13:47:41 +08:00
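GaLore cuts optimizer memory by projecting gradients into a low-rank subspace. Recent transformers releases expose it through TrainingArguments (a sketch under that assumption; it requires the galore-torch package, and the example script may configure things differently):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="galore-out",
    optim="galore_adamw",                            # GaLore-wrapped AdamW
    optim_target_modules=[r".*attn.*", r".*mlp.*"],  # where to apply GaLore
    per_device_train_batch_size=1,
)
```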
Yina Chen
ea5b373a97
Add lookahead GPU example ( #10785 )
* Add lookahead example
* fix style & attn mask
* fix typo
* address comments
2024-04-17 17:41:55 +08:00
ZehuaCao
0646e2c062
Fix no_attr error caused by short prompts in IPEX_CPU speculative decoding ( #10783 )
2024-04-17 16:19:57 +08:00
Cengguang Zhang
7ec82c6042
LLM: add README.md for Long-Context examples. ( #10765 )
* LLM: add readme to long-context examples.
* add precision.
* update wording.
* add GPU type.
* add Long-Context example to GPU examples.
* fix comments.
* update max input length.
* update max length.
* add output length.
* fix wording.
2024-04-17 15:34:59 +08:00
Qiyuan Gong
9e5069437f
Fix gradio version in axolotl example ( #10776 )
* Change to gradio>=4.19.2
2024-04-17 10:23:43 +08:00
Qiyuan Gong
f2e923b3ca
Axolotl v0.4.0 support ( #10773 )
* Add Axolotl 0.4.0, remove legacy 0.3.0 support.
* replace is_torch_bf16_gpu_available
* Add HF_HUB_OFFLINE=1
* Move transformers out of requirement
* Refine readme and qlora.yml
2024-04-17 09:49:11 +08:00
Heyang Sun
26cae0a39c
Update FLEX in Deepspeed README ( #10774 )
* Update FLEX in Deepspeed README
* Update README.md
2024-04-17 09:28:24 +08:00
Qiyuan Gong
d30b22a81b
Refine axolotl 0.3.0 documents and links ( #10764 )
* Refine axolotl 0.3 based on comments
* Rename requirements to requirement-xpu
* Add comments for paged_adamw_32bit
* change lora_r from 8 to 16
2024-04-16 14:47:45 +08:00
ZehuaCao
599a88db53
Add Deepspeed-AutoTP-FastAPI serving ( #10748 )
* add Deepspeed-AutoTP-FastAPI serving
* add readme
* add license
* update
* update
* fix
2024-04-16 14:03:23 +08:00
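For orientation, a minimal sketch of the kind of FastAPI endpoint such a serving example exposes. The route name, request fields, and the globally loaded model/tokenizer are assumptions, not the example's actual code.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 128

@app.post("/generate")
def generate(req: GenerateRequest):
    # `model` and `tokenizer` are assumed to be loaded at startup,
    # e.g. via deepspeed.init_inference with tensor parallelism.
    inputs = tokenizer(req.prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=req.max_new_tokens)
    return {"text": tokenizer.decode(output[0], skip_special_tokens=True)}
```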
Jin Qiao
73a67804a4
GPU configuration update for examples (Windows pip installer, etc.) ( #10762 )
* renew chatglm3-6b gpu example readme
fix
fix
fix
* fix for comments
* fix
* fix
* fix
* fix
* fix
* apply on HF-Transformers-AutoModels
* apply on PyTorch-Models
* fix
* fix
2024-04-15 17:42:52 +08:00
yb-peng
b5209d3ec1
Update example/GPU/PyTorch-Models/Model/llava/README.md ( #10757 )
* Update example/GPU/PyTorch-Models/Model/llava/README.md
* Update README.md
fix path in windows installation
2024-04-15 13:01:37 +08:00
Jiao Wang
9e668a5bf0
Fix internlm-chat-7b-8k repo name in examples ( #10747 )
2024-04-12 10:15:48 -07:00
Keyan (Kyrie) Zhang
1256a2cc4e
Add chatglm3 long input example ( #10739 )
* Add long context input example for chatglm3
* Small fix
* Small fix
* Small fix
2024-04-11 16:33:43 +08:00
Qiyuan Gong
2d64630757
Remove transformers version in axolotl example ( #10736 )
* Remove transformers version in axolotl requirements.txt
2024-04-11 14:02:31 +08:00
Xiangyu Tian
301504aa8d
Fix transformers version warning ( #10732 )
2024-04-11 13:12:49 +08:00
Shaojun Liu
29bf28bd6f
Upgrade python to 3.11 in Docker Image ( #10718 )
* install python 3.11 for cpu-inference docker image
* update xpu-inference dockerfile
* update cpu-serving image
* update qlora image
* update lora image
* update document
2024-04-10 14:41:27 +08:00
Qiyuan Gong
b727767f00
Add axolotl v0.3.0 with ipex-llm on Intel GPU ( #10717 )
* Add axolotl v0.3.0 support on Intel GPU.
* Add finetune example on llama-2-7B with Alpaca dataset.
2024-04-10 14:38:29 +08:00
Jiao Wang
d1eaea509f
update chatglm readme ( #10659 )
2024-04-09 14:24:46 -07:00
Jiao Wang
878a97077b
Fix llava example to support transformers 4.36 ( #10614 )
* fix llava example
* update
2024-04-09 13:47:07 -07:00
Jiao Wang
1e817926ba
Fix low memory generation example issue in transformers 4.36 ( #10702 )
* update cache in low memory generate
* update
2024-04-09 09:56:52 -07:00
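The breakage comes from transformers 4.36 moving past_key_values from tuples to Cache objects. Code that threads the KV cache manually can convert at the boundary, roughly like this (variable names are illustrative):

```python
from transformers.cache_utils import DynamicCache

# `legacy_cache` is the old tuple-of-tuples KV cache from a previous step.
past = DynamicCache.from_legacy_cache(legacy_cache)
out = model(input_ids, past_key_values=past, use_cache=True)
legacy_cache = out.past_key_values.to_legacy_cache()  # back to tuples
```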
Shaojun Liu
f37a1f2a81
Upgrade to python 3.11 ( #10711 )
* create conda env with python 3.11
* recommend to use Python 3.11
* update
2024-04-09 17:41:17 +08:00
Cengguang Zhang
6a32216269
LLM: add llama2 8k input example. ( #10696 )
* LLM: add llama2-32K example.
* refactor name.
* fix comments.
* add IPEX_LLM_LOW_MEM notes and update sample output.
2024-04-09 16:02:37 +08:00
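The commit body mentions an IPEX_LLM_LOW_MEM switch for long-input runs; enabling it is just an environment variable set before the model loads (a sketch; its exact effect depends on the ipex-llm version):

```python
import os

# Must be set before ipex-llm loads the model for it to take effect.
os.environ["IPEX_LLM_LOW_MEM"] = "1"
```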
Keyan (Kyrie) Zhang
1e27e08322
Modify example from fp32 to fp16 ( #10528 )
* Modify example from fp32 to fp16
* Remove Falcon from fp16 example for now
* Remove MPT from fp16 example
2024-04-09 15:45:49 +08:00
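Moving these examples from fp32 to fp16 mostly amounts to casting the model to half precision before placing it on the Intel GPU, roughly as below (a sketch, not the examples' exact diff; the "xpu" device assumes intel_extension_for_pytorch is installed):

```python
import torch

# Cast weights to fp16 and move model and inputs to the Intel GPU.
model = model.half().to("xpu")
inputs = {k: v.to("xpu") for k, v in inputs.items()}
with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=32)
```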
binbin Deng
d9a1153b4e
LLM: upgrade deepspeed in AutoTP on GPU ( #10647 )
2024-04-07 14:05:19 +08:00
Zhicun
9d8ba64c0d
LlamaIndex: add tokenizer_id and support chat ( #10590 )
* add tokenizer_id
* fix
* modify
* add from_model_id and from_model_id_low_bit
* fix typo and add comment
* fix python code style
---------
Co-authored-by: pengyb2001 <284261055@qq.com>
2024-04-07 13:51:34 +08:00
Jin Qiao
10ee786920
Replace with IPEX-LLM in example comments ( #10671 )
* Replace with IPEX-LLM in example comments
* More replacement
* revert some changes
2024-04-07 13:29:51 +08:00
Jiao Wang
69bdbf5806
Fix vllm print error message issue ( #10664 )
* update chatglm readme
* Add condition to invalidInputError
* update
* update
* style
2024-04-05 15:08:13 -07:00