From 85491907f34cbadfd39bf0ba4362231a44a2ca97 Mon Sep 17 00:00:00 2001
From: Shaojun Liu <61072813+liu-shaojun@users.noreply.github.com>
Date: Fri, 24 May 2024 14:26:18 +0800
Subject: [PATCH] Update GIF link (#11119)
---
.../docker_run_pytorch_inference_in_vscode.md | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/docs/readthedocs/source/doc/LLM/DockerGuides/docker_run_pytorch_inference_in_vscode.md b/docs/readthedocs/source/doc/LLM/DockerGuides/docker_run_pytorch_inference_in_vscode.md
index b625ac6b..9a07609d 100644
--- a/docs/readthedocs/source/doc/LLM/DockerGuides/docker_run_pytorch_inference_in_vscode.md
+++ b/docs/readthedocs/source/doc/LLM/DockerGuides/docker_run_pytorch_inference_in_vscode.md
@@ -27,8 +27,8 @@ For both Linux/Windows, you will need to install the Dev Containers extension.
Open the Extensions view in VSCode (you can use the shortcut `Ctrl+Shift+X`), then search for and install the `Dev Containers` extension.
-
-
+
+
@@ -39,8 +39,8 @@ For Windows, you will need to install the WSL extension to connect to the WSL environment. O
Press F1 to bring up the Command Palette, type in `WSL: Connect to WSL Using Distro...` and select it, then select the specific WSL distro `Ubuntu`.
-
-
+
+
@@ -101,8 +101,8 @@ Press F1 to bring up the Command Palette and type in `Dev Containers: Attach t
Now that you are in a running Docker container, open the folder `/ipex-llm/python/llm/example/GPU/HF-Transformers-AutoModels/Model/`.
-
-
+
+
In this folder, we provide several PyTorch examples showing how to apply IPEX-LLM INT4 optimizations to models on Intel GPUs.