Model example folders in this directory:

- aquila
- aquila2
- baichuan
- baichuan2
- bluelm
- chatglm2
- chatglm3
- chinese-llama2
- codellama
- codeshell
- distil-whisper
- dolly-v1
- dolly-v2
- falcon
- flan-t5
- gpt-j
- internlm
- internlm2
- llama2
- mistral
- mixtral
- mpt
- phi-1_5
- phi-2
- phixtral
- qwen
- qwen-vl
- qwen1.5
- redpajama
- replit
- rwkv4
- rwkv5
- solar
- starcoder
- vicuna
- voiceassistant
- whisper
- yi
- yuan2
# BigDL-LLM Transformers INT4 Optimization for Large Language Model on Intel GPUs
You can use BigDL-LLM to run almost any Hugging Face Transformers model with INT4 optimizations on your laptop with Intel GPUs. This directory contains example scripts to help you quickly get started using BigDL-LLM to run some popular open-source models in the community. Each model has its own dedicated folder, where you can find detailed instructions on how to install and run it.
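
For orientation, the examples share roughly the pattern sketched below: load the model through BigDL-LLM's `AutoModelForCausalLM` with `load_in_4bit=True`, move it to the `xpu` device, and generate. The model path, prompt, and generation settings here are placeholders; consult each model's folder for the exact, verified script and its install instructions.

```python
# Minimal sketch of INT4 generation with BigDL-LLM on an Intel GPU.
# The model path and prompt are placeholders, not part of any specific example.
import torch
import intel_extension_for_pytorch as ipex  # registers the 'xpu' device
from transformers import AutoTokenizer
from bigdl.llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder: any supported model

# load_in_4bit=True applies BigDL-LLM's INT4 optimization at load time
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.to("xpu")  # move the optimized model to the Intel GPU

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
input_ids = tokenizer.encode("What is AI?", return_tensors="pt").to("xpu")

with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```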