Commit 353e708

docs: update ggml and llama.cpp URLs (#931)
1 parent dd75fc0 commit 353e708


2 files changed, +3 -3 lines changed


README.md (2 additions, 2 deletions)

@@ -29,7 +29,7 @@ API and command-line option may change frequently.***
 
 ## Features
 
-- Plain C/C++ implementation based on [ggml](https://github.com/ggerganov/ggml), working in the same way as [llama.cpp](https://github.com/ggerganov/llama.cpp)
+- Plain C/C++ implementation based on [ggml](https://github.com/ggml-org/ggml), working in the same way as [llama.cpp](https://github.com/ggml-org/llama.cpp)
 - Super lightweight and without external dependencies
 - Supported models
 - Image Models

@@ -165,7 +165,7 @@ Thank you to all the people who have already contributed to stable-diffusion.cpp
 
 ## References
 
-- [ggml](https://github.com/ggerganov/ggml)
+- [ggml](https://github.com/ggml-org/ggml)
 - [diffusers](https://github.com/huggingface/diffusers)
 - [stable-diffusion](https://github.com/CompVis/stable-diffusion)
 - [sd3-ref](https://github.com/Stability-AI/sd3-ref)

docs/build.md (1 addition, 1 deletion)

@@ -157,7 +157,7 @@ ninja
 
 ## Build with SYCL
 
-Using SYCL makes the computation run on the Intel GPU. Please make sure you have installed the related driver and [Intel® oneAPI Base toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit.html) before start. More details and steps can refer to [llama.cpp SYCL backend](https://github.com/ggerganov/llama.cpp/blob/master/docs/backend/SYCL.md#linux).
+Using SYCL makes the computation run on the Intel GPU. Please make sure you have installed the related driver and [Intel® oneAPI Base toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit.html) before start. More details and steps can refer to [llama.cpp SYCL backend](https://github.com/ggml-org/llama.cpp/blob/master/docs/backend/SYCL.md#linux).
 
 ```shell
 # Export relevant ENV variables