ipex-llm/python/llm
Latest commit: 8153c3008e by Yang Wang, 2024-04-18 11:01:33 -07:00

Initial llama3 example (#10799)

* Add initial Hugging Face (HF) transformers GPU example
* Small fix
* Add Llama 3 GPU PyTorch model example
* Add Llama 3 HF transformers CPU example
* Add Llama 3 PyTorch model CPU example
* Fixes
* Small fix
* Small fixes
* Small fix
* Small fix
* Add links
* Update repo id
* Change prompt tuning url
* Remove system header if there is no system prompt

Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>
Co-authored-by: Yuwen Hu <54161268+Oscilloscope98@users.noreply.github.com>
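The Llama 3 examples added by this commit target ipex-llm's Hugging Face transformers-style API (GPU and CPU) as well as its PyTorch model API. Below is a minimal sketch of what such a GPU example looks like; the model id, the generation settings, and the prompt-building helper are illustrative assumptions rather than the exact code merged in #10799. Note how the helper omits the system header block entirely when no system prompt is supplied, matching the last bullet in the commit message above.

```python
# Minimal sketch (not the exact example code from PR #10799), assuming ipex-llm's
# transformers-style API on an Intel GPU ("xpu") and the Llama 3 chat prompt format.
from typing import Optional

import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM  # ipex-llm drop-in for HF transformers

MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"  # illustrative repo id


def build_llama3_prompt(user_prompt: str, system_prompt: Optional[str] = None) -> str:
    """Build a Llama 3 chat prompt; the system header block is skipped entirely
    when no system prompt is given ("remove system header if there is no system prompt")."""
    parts = ["<|begin_of_text|>"]
    if system_prompt:
        parts.append(f"<|start_header_id|>system<|end_header_id|>\n\n{system_prompt}<|eot_id|>")
    parts.append(f"<|start_header_id|>user<|end_header_id|>\n\n{user_prompt}<|eot_id|>")
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)


if __name__ == "__main__":
    # Load with INT4 optimizations and move the model to an Intel GPU.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID,
                                                 load_in_4bit=True,
                                                 trust_remote_code=True)
    model = model.to("xpu")
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)

    prompt = build_llama3_prompt("What is AI?")  # no system prompt -> no system header
    input_ids = tokenizer.encode(prompt, return_tensors="pt").to("xpu")
    with torch.inference_mode():
        output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=False))
```

The companion "PyTorch model" examples mentioned in the bullets would instead load the model with plain transformers and wrap it with ipex_llm's optimize_model API; its use here is likewise an assumption about the examples' contents, not a quote of them.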
Name          | Last commit message | Last updated
dev           | Transformers ppl evaluation on wikitext (#10784) | 2024-04-18 15:27:18 +08:00
example       | Initial llama3 example (#10799) | 2024-04-18 11:01:33 -07:00
portable-zip  | Fix baichuan-13b issue on portable zip under transformers 4.36 (#10746) | 2024-04-12 16:27:01 -07:00
scripts       | Update Env check Script (#10709) | 2024-04-10 15:06:00 +08:00
src/ipex_llm  | Fix pvc llama (#10798) | 2024-04-18 10:44:57 -07:00
test          | edit 'ppl_result does not exist' issue, delete useless code (#10767) | 2024-04-16 18:11:56 +08:00
.gitignore    | [LLM] add chatglm pybinding binary file release (#8677) | 2023-08-04 11:45:27 +08:00
setup.py      | Update setup.py for bigdl-core-xe-esimd-21 on Windows (#10705) | 2024-04-09 18:21:21 +08:00
version.txt   | Update setup.py and add new actions and add compatible mode (#25) | 2024-03-22 15:44:59 +08:00