# Running PyTorch model using IPEX-LLM on Intel CPU
This folder contains examples of running any PyTorch model on IPEX-LLM (with "one-line code change"):
- Model: examples of running PyTorch models (e.g., OpenAI Whisper, LLaMA2, ChatGLM2, Falcon, MPT, Baichuan2, etc.) using INT4 optimizations
- More-Data-Types: examples of applying other low-bit optimizations (NF4/INT5/INT8, etc.)
- Save-Load: examples of saving and loading low-bit models
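
As a rough illustration of the "one-line code change" mentioned above, here is a minimal sketch assuming the `optimize_model` entry point from `ipex_llm` and a placeholder Hugging Face model id; refer to the per-model examples in this folder for verified usage:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

from ipex_llm import optimize_model  # assumed IPEX-LLM entry point

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder model id

# Load the model and tokenizer as usual with Hugging Face Transformers
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# The "one-line code change": apply IPEX-LLM's INT4 optimizations
model = optimize_model(model)

# Run inference with the optimized model
input_ids = tokenizer.encode("What is AI?", return_tensors="pt")
with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Other low-bit formats can likewise be selected via an argument such as `low_bit="nf4"`, and optimized models can be persisted and reloaded in low-bit form; see the More-Data-Types and Save-Load folders for the authoritative examples of each.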