Commit graph

69 commits

Author SHA1 Message Date
Guoqiong Song
5a6211fd56
fix minicpm for transformers>=4.39 (#11533)
* fix minicpm for transformers>=4.39
2024-07-18 15:01:57 -07:00
Guoqiong Song
d64711900a
Fix cohere model on transformers>=4.41 (#11575)
* fix cohere model for 4.41
2024-07-17 17:18:59 -07:00
Wang, Jian4
4390e7dc49
Fix codegeex2 transformers version (#11487) 2024-07-02 15:09:28 +08:00
Jiao Wang
40fa23560e
Fix LLAVA example on CPU (#11271)
* update

* update

* update

* update
2024-06-25 20:04:59 -07:00
Jin Qiao
0e7a31a09c
ChatGLM Examples Restructure regarding Installation Steps (#11285)
* merge install step in glm examples

* fix section

* fix section

* fix tiktoken
2024-06-14 12:37:05 +08:00
Jin Qiao
f224e98297
Add GLM-4 CPU example (#11223)
* Add GLM-4 example

* add tiktoken dependency

* fix

* fix
2024-06-12 15:30:51 +08:00
Yuwen Hu
8c36b5bdde
Add qwen2 example (#11252)
* Add GPU example for Qwen2

* Update comments in README

* Update README for Qwen2 GPU example

* Add CPU example for Qwen2

Sample Output under README pending

* Update generate.py and README for CPU Qwen2

* Update GPU example for Qwen2

* Small update

* Small fix

* Add Qwen2 table

* Update README for Qwen2 CPU and GPU

Update sample output under README

---------

Co-authored-by: Zijie Li <michael20001122@gmail.com>
2024-06-07 10:29:33 +08:00
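
For readers unfamiliar with the example layout these commits touch, the CPU scripts generally follow one pattern: load the checkpoint through ipex-llm's transformers-style API in 4-bit, build the prompt with the tokenizer's chat template, then generate. The sketch below is illustrative rather than the exact generate.py added here; the model ID, prompt, and generation settings are assumptions.

```python
import torch
from ipex_llm.transformers import AutoModelForCausalLM  # ipex-llm low-bit loader
from transformers import AutoTokenizer

model_path = "Qwen/Qwen2-7B-Instruct"  # assumed model ID, for illustration only
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

messages = [{"role": "user", "content": "What is AI?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False,
                                       add_generation_prompt=True)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
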
Guoqiong Song
09c6780d0c
phi-2 transformers 4.37 (#11161)
* phi-2 transformers 4.37
2024-06-05 13:36:41 -07:00
Zijie Li
bfa1367149
Add CPU and GPU example for MiniCPM (#11202)
* Change installation address

Change former address: "https://docs.conda.io/en/latest/miniconda.html#" to new address: "https://conda-forge.org/download/" for 63 occurrences under python\llm\example

* Change Prompt

Change "Anaconda Prompt" to "Miniforge Prompt" for 1 occurrence

* Create and update model minicpm

* Update model minicpm

Update model minicpm under GPU/PyTorch-Models

* Update readme and generate.py

change "prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=False)" and delete "pip install transformers==4.37.0
"

* Update comments for minicpm GPU

Update comments for generate.py at minicpm GPU

* Add CPU example for MiniCPM

* Update minicpm README for CPU

* Update README for MiniCPM and Llama3

* Update Readme for Llama3 CPU Pytorch

* Update and fix comments for MiniCPM
2024-06-05 18:09:53 +08:00
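
The prompt-building change quoted above swaps a hand-written prompt for the tokenizer's chat template. A minimal sketch, assuming the openbmb/MiniCPM-2B-sft-bf16 checkpoint (the exact model ID in the example may differ):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openbmb/MiniCPM-2B-sft-bf16",
                                          trust_remote_code=True)
chat = [{"role": "user", "content": "What is AI?"}]
# The line quoted in the commit message: let the tokenizer format the prompt.
prompt = tokenizer.apply_chat_template(chat, tokenize=False,
                                       add_generation_prompt=False)
print(prompt)
```
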
Zijie Li
a644e9409b
Miniconda/Anaconda -> Miniforge update in examples (#11194)
* Change installation address

Change former address: "https://docs.conda.io/en/latest/miniconda.html#" to new address: "https://conda-forge.org/download/" for 63 occurrences under python\llm\example

* Change Prompt

Change "Anaconda Prompt" to "Miniforge Prompt" for 1 occurrence
2024-06-04 10:14:02 +08:00
Wang, Jian4
8e25de1126
LLM: Add codegeex2 example (#11143)
* add codegeex example

* update

* update cpu

* add GPU

* add gpu

* update readme
2024-05-29 10:00:26 +08:00
ZehuaCao
56cb992497
LLM: Modify CPU Installation Command for most examples (#11049)
* init

* refine

* refine

* refine

* modify hf-agent example

* modify all CPU model example

* remove readthedoc modify

* replace powershell with cmd

* fix repo

* fix repo

* update

* remove comment on windows code block

* update

* update

* update

* update

---------

Co-authored-by: xiangyuT <xiangyu.tian@intel.com>
2024-05-17 15:52:20 +08:00
Wang, Jian4
f4c615b1ee
Add cohere example (#10954)
* add link first

* add_cpu_example

* add GPU example
2024-05-08 17:19:59 +08:00
hxsz1997
245c7348bc
Add codegemma example (#10884)
* add codegemma example in GPU/HF-Transformers-AutoModels/

* add README of codegemma example in GPU/HF-Transformers-AutoModels/

* add codegemma example in GPU/PyTorch-Models/

* add readme of codegemma example in GPU/PyTorch-Models/

* add codegemma example in CPU/HF-Transformers-AutoModels/

* add readme of codegemma example in CPU/HF-Transformers-AutoModels/

* add codegemma example in CPU/PyTorch-Models/

* add readme of codegemma example in CPU/PyTorch-Models/

* fix typos

* fix filename typo

* add codegemma in tables

* add comments of lm_head

* remove comments of use_cache
2024-05-07 13:35:42 +08:00
Jin Qiao
1f876fd837
Add example for phi-3 (#10881)
* Add example for phi-3

* add in readme and index

* fix

* fix

* fix

* fix indent

* fix
2024-04-29 16:43:55 +08:00
hxsz1997
328b1a1de9
Fix the not stop issue of llama3 examples (#10860)
* fix not stop issue in GPU/HF-Transformers-AutoModels

* fix not stop issue in GPU/PyTorch-Models/Model/llama3

* fix not stop issue in CPU/HF-Transformers-AutoModels/Model/llama3

* fix not stop issue in CPU/PyTorch-Models/Model/llama3

* update the output in readme

* update format

* add reference

* update prompt format

* update output format in readme

* update example output in readme
2024-04-23 19:10:09 +08:00
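
Llama-3 instruct checkpoints end each turn with <|eot_id|> rather than the plain EOS token, so generation keeps running unless that token is also treated as a terminator. A hedged sketch of the usual fix; the exact change in these examples may differ, and the model ID is an assumption:

```python
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed model ID
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)

# Stop on either the regular EOS token or the end-of-turn marker.
terminators = [tokenizer.eos_token_id,
               tokenizer.convert_tokens_to_ids("<|eot_id|>")]

input_ids = tokenizer("What is AI?", return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=32, eos_token_id=terminators)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
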
Wang, Jian4
5f95054f97
LLM: Add qwen moe example libs md (#10828) 2024-04-22 10:03:19 +08:00
Yang Wang
8153c3008e
Initial llama3 example (#10799)
* Add initial hf huggingface GPU example

* Small fix

* Add llama3 gpu pytorch model example

* Add llama 3 hf transformers CPU example

* Add llama 3 pytorch model CPU example

* Fixes

* Small fix

* Small fixes

* Small fix

* Small fix

* Add links

* update repo id

* change prompt tuning url

* remove system header if there is no system prompt

---------

Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>
Co-authored-by: Yuwen Hu <54161268+Oscilloscope98@users.noreply.github.com>
2024-04-18 11:01:33 -07:00
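
The last bullet refers to Llama-3's chat markup, in which the system block is optional. A sketch of how such a prompt can be assembled; the helper name and default are chosen here for illustration:

```python
def build_llama3_prompt(user_message: str, system_prompt: str = "") -> str:
    """Assemble a Llama-3 chat prompt, emitting the system header only
    when a system prompt is actually provided."""
    prompt = "<|begin_of_text|>"
    if system_prompt:
        prompt += ("<|start_header_id|>system<|end_header_id|>\n\n"
                   f"{system_prompt}<|eot_id|>")
    prompt += ("<|start_header_id|>user<|end_header_id|>\n\n"
               f"{user_message}<|eot_id|>"
               "<|start_header_id|>assistant<|end_header_id|>\n\n")
    return prompt

print(build_llama3_prompt("What is AI?"))
```
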
Jiao Wang
9e668a5bf0
fix internlm-chat-7b-8k repo name in examples (#10747) 2024-04-12 10:15:48 -07:00
Jiao Wang
d1eaea509f
update chatglm readme (#10659) 2024-04-09 14:24:46 -07:00
Jiao Wang
878a97077b
Fix llava example to support transformers 4.36 (#10614)
* fix llava example

* update
2024-04-09 13:47:07 -07:00
Shaojun Liu
f37a1f2a81
Upgrade to python 3.11 (#10711)
* create conda env with python 3.11

* recommend to use Python 3.11

* update
2024-04-09 17:41:17 +08:00
Jin Qiao
10ee786920
Replace with IPEX-LLM in example comments (#10671)
* Replace with IPEX-LLM in example comments

* More replacement

* revert some changes
2024-04-07 13:29:51 +08:00
Jin Qiao
cc8b3be11c
Add GPU and CPU example for stablelm-zephyr-3b (#10643)
* Add example for StableLM

* fix

* add to readme
2024-04-03 16:28:31 +08:00
Jiao Wang
654dc5ba57
Fix Qwen-VL example problem (#10582)
* update

* update

* update

* update
2024-04-02 12:17:30 -07:00
ZehuaCao
52a2135d83
Replace ipex with ipex-llm (#10554)
* fix ipex with ipex_llm

* fix ipex with ipex_llm

* update

* update

* update

* update

* update

* update

* update

* update
2024-03-28 13:54:40 +08:00
Wang, Jian4
16b2ef49c6
Update_document by heyang (#30) 2024-03-25 10:06:02 +08:00
Wang, Jian4
9df70d95eb
Refactor bigdl.llm to ipex_llm (#24)
* Rename bigdl/llm to ipex_llm

* rm python/llm/src/bigdl

* from bigdl.llm to from ipex_llm
2024-03-22 15:41:21 +08:00
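
For users of the examples, the refactor above amounts to a package rename; a minimal before/after sketch (model ID illustrative):

```python
# Before the refactor (bigdl-llm package):
#   from bigdl.llm.transformers import AutoModelForCausalLM
# After the refactor (ipex-llm package):
from ipex_llm.transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf",
                                             load_in_4bit=True)
```
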
Jiao Wang
f3fefdc9ce fix pad_token_id issue (#10425) 2024-03-18 23:30:28 -07:00
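
Many chat checkpoints ship without a pad token, which trips up padded or batched generation. A hedged sketch of the common workaround; the actual change in #10425 may differ, and the model ID is an assumption:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
if tokenizer.pad_token_id is None:
    tokenizer.pad_token = tokenizer.eos_token  # reuse EOS as the pad token
# Generation calls can then pass pad_token_id=tokenizer.pad_token_id explicitly.
```
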
Shengsheng Huang
db0d129226 Revert "Add rwkv example (#9432)" (#10264)
This reverts commit 6930422b42.
2024-02-28 11:48:31 +08:00
Yining Wang
6930422b42 Add rwkv example (#9432)
* codeshell fix wrong urls

* restart runner

* add RWKV CPU & GPU example (rwkv-4-world-7b)

* restart runner

* update submodule

* fix runner

* runner-test

---------

Co-authored-by: Shengsheng Huang <shengsheng.huang@intel.com>
2024-02-28 11:41:00 +08:00
Keyan (Kyrie) Zhang
59861f73e5 Add Deepseek-6.7B (#9991)
* Add new example Deepseek

* Add new example Deepseek

* Add new example Deepseek

* Add new example Deepseek

* Add new example Deepseek

* modify deepseek

* modify deepseek

* Add verified model in README

* Turn cpu_embedding=True in Deepseek example

---------

Co-authored-by: Shengsheng Huang <shengsheng.huang@intel.com>
2024-02-28 11:36:39 +08:00
Zheng, Yi
2347f611cf Add cpu and gpu examples of Mamba (#9797)
* Add mamba cpu example

* Add mamba gpu example

* Use a smaller model as the example

* minor fixes

---------

Co-authored-by: Shengsheng Huang <shengsheng.huang@intel.com>
2024-02-28 11:33:29 +08:00
Zhao Changmin
937e1f7c74 rebase (#9104)
Co-authored-by: leonardozcm <leonardozcm@gmail.com>
2024-02-28 11:18:21 +08:00
Zhicun
308e637d0d Add DeepSeek-MoE-16B-Chat (#10155)
* dsmoe-hf add

* add dsmoe pytorch

* update README

* modify comment

* remove GPU example

* update model name

* format code
2024-02-28 10:12:09 +08:00
Keyan (Kyrie) Zhang
843fe546b0 Add CPU and GPU examples for DeciLM-7B (#9867)
* Add cpu and gpu examples for DeciLM-7B

* Add cpu and gpu examples for DeciLM-7B

* Add DeciLM-7B to README table

* modify deciLM

* modify deciLM

* modify deciLM

* Add verified model in README

* Add cpu_embedding=True
2024-02-27 13:15:49 +08:00
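
cpu_embedding, mentioned here and in the Deepseek entry above, is a from_pretrained option in this library that keeps the embedding layer on the CPU instead of quantizing it on the device. A sketch under the package name in use at the time (bigdl-llm); the model ID is an assumption:

```python
from bigdl.llm.transformers import AutoModelForCausalLM  # ipex_llm.transformers after the later rename

model = AutoModelForCausalLM.from_pretrained("Deci/DeciLM-7B-instruct",
                                             load_in_4bit=True,
                                             trust_remote_code=True,
                                             cpu_embedding=True)
```
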
yb-peng
a2c1675546 Add CPU and GPU examples for Yuan2-2B-hf (#9946)
* Add a new CPU example of Yuan2-2B-hf

* Add a new CPU generate.py of Yuan2-2B-hf example

* Add a new GPU example of Yuan2-2B-hf

* Add Yuan2 to README table

* In CPU example: 1. Use English as default prompt; 2. Provide modified files in yuan2-2B-instruct

* In GPU example: 1. Use English as default prompt; 2. Provide modified files

* GPU example:update README

* update Yuan2-2B-hf in README table

* Add CPU example for Yuan2-2B in Pytorch-Models

* Add GPU example for Yuan2-2B in Pytorch-Models

* Add license in generate.py; Modify README

* In GPU Add license in generate.py; Modify README

* In CPU yuan2 modify README

* In GPU yuan2 modify README

* In CPU yuan2 modify README

* In GPU example, updated the readme for Windows GPU supports

* In GPU torch example, updated the readme for Windows GPU supports

* GPU hf example README modified

* GPU example README modified
2024-02-23 14:09:30 +08:00
yb-peng
f1f4094a09 Add CPU and GPU examples of phi-2 (#10014)
* Add CPU and GPU examples of phi-2

* In GPU hf example, updated the readme for Windows GPU supports

* In GPU torch example, updated the readme for Windows GPU supports

* update the table in BigDL/README.md

* update the table in BigDL/python/llm/README.md
2024-02-23 14:05:53 +08:00
Zhicun
c7e839e66c Add Qwen1.5-7B-Chat (#10113)
* add Qwen1.5-7B-Chat

* modify Qwen1.5 example

* update README

* update prompt format

* update folder name and example README

* add Chinese prompt sample output

* update link in README

* correct the link

* update transformer version
2024-02-21 13:29:29 +08:00
Zhicun
add3899311 Add ziya CPU example (#10114)
* ziya on CPU

* add README for ziya

* specify use_cache

* add arc CPU

* update prompt format

* update link

* add comments to emphasize use_cache

* update pip cmd
2024-02-20 13:59:52 +08:00
Zhicun
7d2be7994f add phixtral and optimize phi-moe (#10052) 2024-02-05 11:12:47 +08:00
ivy-lv11
428b7105f6 Add HF and PyTorch example InternLM2 (#10061) 2024-02-04 10:25:55 +08:00
Jinyi Wan
ec2d9de0ea Fix README.md for solar (#9957) 2024-01-24 15:50:54 +08:00
Jinyi Wan
07485eff5a Add SOLAR-10.7B to README (#9869) 2024-01-11 14:28:41 +08:00
Jinyi Wan
3147ebe63d Add cpu and gpu examples for SOLAR-10.7B (#9821) 2024-01-05 09:50:28 +08:00
binbin Deng
6584539c91 LLM: fix installation of codellama (#9813) 2024-01-02 14:32:50 +08:00
Qiyuan Gong
223c9622f7 [LLM] Mixtral CPU examples (#9673)
* Mixtral CPU PyTorch and hugging face examples, based on #9661 and #9671
2023-12-14 10:35:11 +08:00
Jinyi Wan
b721138132 Add cpu and gpu examples for BlueLM (#9589)
* Add cpu int4 example for BlueLM

* add example optimize_model cpu for bluelm

* add example gpu int4 blueLM

* add example optimize_model GPU for bluelm

* Fixing naming issues and BigDL package version.

* Fixing naming issues...

* Add BlueLM in README.md "Verified Models"
2023-12-05 13:59:02 +08:00
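
The optimize_model bullets refer to the library's second example path: load the checkpoint with plain transformers, then wrap it with optimize_model() for low-bit inference. A hedged sketch using the package name of that era (bigdl-llm); the model ID is an assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from bigdl.llm import optimize_model  # `from ipex_llm import optimize_model` after the rename

model_path = "vivo-ai/BlueLM-7B-Chat"
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
model = optimize_model(model)  # convert to low-bit (INT4 by default) for inference
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
```
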
Qiyuan Gong
d85a430a8c Uing bigdl-llm-init instead of bigdl-nano-init (#9558)
* Replace `bigdl-nano-init` with `bigdl-llm-init`.
* Install `bigdl-llm` instead of `bigdl-nano`.
* Remove nano in README.
2023-11-30 10:10:29 +08:00
Zheng, Yi
9b5d0e9c75 Add examples for Yi-6B (#9421) 2023-11-13 10:53:15 +08:00