Cengguang Zhang
cd369c2715
LLM: add device id to benchmark utils. (#10877)
2024-04-25 14:01:51 +08:00
Cengguang Zhang
eb39c61607
LLM: add min new token to perf test. (#10869)
2024-04-24 14:32:02 +08:00
yb-peng
c9dee6cd0e
Update 8192.txt (#10824)
* Update 8192.txt
* Update 8192.txt with original text
2024-04-23 14:02:09 +08:00
Wang, Jian4
23c6a52fb0
LLM: Fix ipex torchscript=True error (#10832)
* remove
* update
* remove torchscript
2024-04-22 15:53:09 +08:00
Kai Huang
053ec30737
Transformers ppl evaluation on wikitext (#10784)
* transformers code
* cache
2024-04-18 15:27:18 +08:00
hxsz1997
0d518aab8d
Merge pull request #10697 from MargarettMao/ceval
combine english and chinese, remove nan
2024-04-12 14:37:47 +08:00
jenniew
cdbb1de972
Mark Color Modification
2024-04-12 14:00:50 +08:00
yb-peng
2685c41318
Modify all-in-one benchmark (#10726)
* Update 8192 prompt in all-in-one
* Add cpu_embedding param for linux api
* Update run.py
* Update README.md
2024-04-11 13:38:50 +08:00
Wenjing Margaret Mao
289cc99cd6
Update README.md (#10700)
Edit "summarize the results"
2024-04-09 16:01:12 +08:00
Wenjing Margaret Mao
d3116de0db
Update README.md (#10701)
edit "summarize the results"
2024-04-09 15:50:25 +08:00
Chen, Zhentao
d59e0cce5c
Migrate harness to ipexllm (#10703)
* migrate to ipexlm
* fix workflow
* fix run_multi
* fix precision map
* rename ipexlm to ipexllm
* rename bigdl to ipex in comments
2024-04-09 15:48:53 +08:00
jenniew
591bae092c
combine english and chinese, remove nan
2024-04-08 19:37:51 +08:00
yb-peng
2d88bb9b4b
add test api transformer_int4_fp16_gpu (#10627)
* add test api transformer_int4_fp16_gpu
* update config.yaml and README.md in all-in-one
* modify run.py in all-in-one
* re-order test-api
* re-order test-api in config
* modify README.md in all-in-one
* modify README.md in all-in-one
* modify config.yaml
---------
Co-authored-by: pengyb2001 <arda@arda-arc21.sh.intel.com>
Co-authored-by: ivy-lv11 <zhicunlv@gmail.com>
2024-04-07 15:47:17 +08:00
Wang, Jian4
9ad4b29697
LLM: CPU benchmark using tcmalloc (#10675)
2024-04-07 14:17:01 +08:00
binbin Deng
d9a1153b4e
LLM: upgrade deepspeed in AutoTP on GPU (#10647)
2024-04-07 14:05:19 +08:00
binbin Deng
27be448920
LLM: add cpu_embedding and peak memory record for deepspeed autotp script (#10621)
2024-04-02 17:32:50 +08:00
Shaojun Liu
a10f5a1b8d
add python style check (#10620)
* add python style check
* fix style checks
* update runner
* add ipex-llm-finetune-qlora-cpu-k8s to manually_build workflow
* update tag to 2.1.0-SNAPSHOT
2024-04-02 16:17:56 +08:00
Ruonan Wang
d6af4877dd
LLM: remove ipex.optimize for gpt-j (#10606)
* remove ipex.optimize
* fix
* fix
2024-04-01 12:21:49 +08:00
WeiguangHan
fbeb10c796
LLM: Set different env based on different Linux kernels (#10566)
2024-03-27 17:56:33 +08:00
Ruonan Wang
ea4bc450c4
LLM: add esimd sdp for pvc (#10543)
* add esimd sdp for pvc
* update
* fix
* fix batch
2024-03-26 19:04:40 +08:00
Shaojun Liu
c563b41491
add nightly_build workflow (#10533)
* add nightly_build workflow
* add create-job-status-badge action
* update
* update
* update
* update setup.py
* release
* revert
2024-03-26 12:47:38 +08:00
Wang, Jian4
16b2ef49c6
Update_document by heyang (#30)
2024-03-25 10:06:02 +08:00
Wang, Jian4
9df70d95eb
Refactor bigdl.llm to ipex_llm (#24)
* Rename bigdl/llm to ipex_llm
* rm python/llm/src/bigdl
* from bigdl.llm to from ipex_llm
2024-03-22 15:41:21 +08:00
binbin Deng
85ef3f1d99
LLM: add empty cache in deepspeed autotp benchmark script (#10488)
2024-03-21 10:51:23 +08:00
Xiangyu Tian
5a5fd5af5b
LLM: Add speculative benchmark on CPU/XPU (#10464)
2024-03-21 09:51:06 +08:00
Xiangyu Tian
cbe24cc7e6
LLM: Enable BigDL IPEX Int8 (#10480)
2024-03-20 15:59:54 +08:00
Jin Qiao
e41d556436
LLM: change fp16 benchmark to model.half (#10477)
* LLM: change fp16 benchmark to model.half
* fix
2024-03-20 13:38:39 +08:00
Jin Qiao
e9055c32f9
LLM: fix fp16 mem record in benchmark (#10461)
* LLM: fix fp16 mem record in benchmark
* change style
2024-03-19 16:17:23 +08:00
Jin Qiao
0451103a43
LLM: add int4+fp16 benchmark script for windows benchmarking (#10449)
* LLM: add fp16 for benchmark script
* remove transformer_int4_fp16_loadlowbit_gpu_win
2024-03-19 11:11:25 +08:00
Yuxuan Xia
f36224aac4
Fix ceval run.sh (#10410)
2024-03-14 10:57:25 +08:00
Wang, Jian4
0193f29411
LLM: Enable gguf float16 and Yuan2 model (#10372)
* enable float16
* add yuan files
* enable yuan
* enable set low_bit on yuan2
* update
* update license
* update generate
* update readme
* update python style
* update
2024-03-13 10:19:18 +08:00
Xiangyu Tian
0ded0b4b13
LLM: Enable BigDL IPEX optimization for int4 (#10319)
2024-03-12 17:08:50 +08:00
Lilac09
5809a3f5fe
Add run-hbm.sh & add user guide for spr and hbm (#10357)
* add run-hbm.sh
* add spr and hbm guide
* only support quad mode
* only support quad mode
* update special cases
* update special cases
2024-03-12 16:15:27 +08:00
binbin Deng
5d996a5caf
LLM: add benchmark script for deepspeed autotp on gpu (#10380)
2024-03-12 15:19:57 +08:00
WeiguangHan
17bdb1a60b
LLM: add whisper models into nightly test (#10193)
* LLM: add whisper models into nightly test
* small fix
* small fix
* add more whisper models
* test all cases
* test specific cases
* collect the csv
* store the result
* to html
* small fix
* small test
* test all cases
* modify whisper_csv_to_html
2024-03-11 20:00:47 +08:00
Yuxuan Xia
0c8d3c9830
Add C-Eval HTML report (#10294)
* Add C-Eval HTML report
* Fix C-Eval workflow pr trigger path
* Fix C-Eval workflow typos
* Add permissions to C-Eval workflow
* Fix C-Eval workflow typo
* Add pandas dependency
* Fix C-Eval workflow typo
2024-03-07 16:44:49 +08:00
Shaojun Liu
178eea5009
upload bigdl-llm wheel to sourceforge for backup (#10321)
* test: upload to sourceforge
* update scripts
* revert
2024-03-05 16:36:01 +08:00
WeiguangHan
fd81d66047
LLM: Compress some models to save space (#10315)
* LLM: compress some models to save space
* add deleted comments
2024-03-04 17:53:03 +08:00
Yuwen Hu
27d9a14989
[LLM] all-in-one update: memory optimize and streaming output (#10302)
* Memory saving for continuous in-out pair run and add support for streaming output on MTL iGPU
* Small fix
* Small fix
* Add things back
2024-03-01 18:02:30 +08:00
Keyan (Kyrie) Zhang
59861f73e5
Add Deepseek-6.7B (#9991)
* Add new example Deepseek
* Add new example Deepseek
* Add new example Deepseek
* Add new example Deepseek
* Add new example Deepseek
* modify deepseek
* modify deepseek
* Add verified model in README
* Turn cpu_embedding=True in Deepseek example
---------
Co-authored-by: Shengsheng Huang <shengsheng.huang@intel.com>
2024-02-28 11:36:39 +08:00
hxsz1997
cba61a2909
Add html report of ppl (#10218)
* remove include and language option, select the corresponding dataset based on the model name in Run
* change the nightly test time
* change the nightly test time of harness and ppl
* save the ppl result to json file
* generate csv file and print table result
* generate html
* modify the way to get parent folder
* update html in parent folder
* add llm-ppl-summary and llm-ppl-summary-html
* modify echo single result
* remove download fp16.csv
* change model name of PR
* move ppl nightly related files to llm/test folder
* reformat
* separate make_table from make_table_and_csv.py
* separate make_csv from make_table_and_csv.py
* update llm-ppl-html
* remove comment
* add Download fp16.results
2024-02-27 17:37:08 +08:00
Chen, Zhentao
213ef06691
fix readme
2024-02-24 00:38:08 +08:00
Chen, Zhentao
6fe5344fa6
separate make_csv from the file
2024-02-23 16:33:38 +08:00
Chen, Zhentao
bfa98666a6
fall back to make_table.py
2024-02-23 16:33:38 +08:00
Chen, Zhentao
f315c7f93a
Move harness nightly related files to llm/test folder (#10209)
* move harness nightly files to test folder
* change workflow file path accordingly
* use arc01 when pr
* fix path
* fix fp16 csv path
2024-02-23 11:12:36 +08:00
Yuwen Hu
21de2613ce
[LLM] Add model loading time record for all-in-one benchmark (#10201)
* Add model loading time record in csv for all-in-one benchmark
* Small fix
* Small fix to number after .
2024-02-22 13:57:18 +08:00
Yuxuan Xia
7cbc2429a6
Fix C-Eval ChatGLM loading issue (#10206)
* Add c-eval workflow and modify running files
* Modify the chatglm evaluator file
* Modify the ceval workflow for triggering test
* Modify the ceval workflow file
* Modify the ceval workflow file
* Modify ceval workflow
* Adjust the ceval dataset download
* Add ceval workflow dependencies
* Modify ceval workflow dataset download
* Add ceval test dependencies
* Add ceval test dependencies
* Correct the result print
* Fix the nightly test trigger time
* Fix ChatGLM loading issue
2024-02-22 10:00:43 +08:00
yb-peng
b1a97b71a9
Harness eval: Add is_last parameter and fix logical operator in highlight_vals (#10192)
* Add is_last parameter and fix logical operator in highlight_vals
* Add script to update HTML files in parent folder
* Add running update_html_in_parent_folder.py in summarize step
* Add licence info
* Remove update_html_in_parent_folder.py in Summarize the results for pull request
2024-02-21 14:45:32 +08:00
Chen, Zhentao
39d37bd042
upgrade harness package version in workflow (#10188)
* upgrade harness
* update readme
2024-02-21 11:21:30 +08:00
Yuwen Hu
001c13243e
[LLM] Add support for low_low_bit benchmark on Windows GPU (#10167)
* Add support for low_low_bit performance test on Windows GPU
* Small fix
* Small fix
* Save memory during converting model process
* Drop the results for first time when loading in low bit on mtl igpu for better performance
* Small fix
2024-02-21 10:51:52 +08:00