ipex-llm/docs/readthedocs/source

Latest commit: a8df429985 by Ruonan Wang, 2024-04-19 17:44:59 +08:00
QuickStart: Run Llama 3 on Intel GPU using llama.cpp and ollama with IPEX-LLM (#10809)

* initial commit
* update llama.cpp
* add demo video at first
* fix ollama link in readme
* meet review
* update
* small fix
Name                      Last commit message                                                                       Last commit date
_static                   Chronos: fix installation error of prophet (#8426)                                        2023-07-03 13:34:16 +08:00
_templates                QuickStart: Run Llama 3 on Intel GPU using llama.cpp and ollama with IPEX-LLM (#10809)    2024-04-19 17:44:59 +08:00
doc                       QuickStart: Run Llama 3 on Intel GPU using llama.cpp and ollama with IPEX-LLM (#10809)    2024-04-19 17:44:59 +08:00
_toc.yml                  QuickStart: Run Llama 3 on Intel GPU using llama.cpp and ollama with IPEX-LLM (#10809)    2024-04-19 17:44:59 +08:00
analytics_zoo_pytext.py   add docs                                                                                   2021-10-12 11:06:44 +08:00
conf.py                   [Doc] IPEX-LLM Doc Layout Update (#10532)                                                  2024-03-25 16:23:56 +08:00
index.rst                 Update readme (#10802)                                                                     2024-04-19 06:52:57 +08:00