Commit e8e5fae

Merge pull request #1736 from ramalama-labs/feat/docs

Adds docs site

2 parents: 2329f84 + 79c085b

53 files changed (+22448 −15 lines)

docs/ramalama-macos.7.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -2,7 +2,7 @@
 
 # Configure Podman Machine on Mac for GPU Acceleration
 
-Leveraging GPU acceleration on a Mac with Podman requires the configurion of
+Leveraging GPU acceleration on a Mac with Podman requires the configuration of
 the `libkrun` machine provider.
 
 This can be done by either setting an environment variable or modifying the
```
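The hunk above mentions selecting `libkrun` either via an environment variable or a config file. A hedged sketch of both routes (variable and file names taken from Podman's containers.conf documentation; verify against your Podman version):

```shell
# Route 1 (assumption: per Podman's containers.conf docs): set the machine
# provider via environment before creating the Podman machine.
export CONTAINERS_MACHINE_PROVIDER=libkrun
# The podman commands are guarded so this sketch is safe to source on a
# host without Podman installed.
if command -v podman >/dev/null 2>&1; then
    podman machine init
    podman machine start
fi

# Route 2 (persistent): set the provider in ~/.config/containers/containers.conf:
#   [machine]
#   provider = "libkrun"
```

Either route must be in effect before `podman machine init`, since the provider is fixed when the machine is created.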

docs/ramalama.1.md

Lines changed: 3 additions & 12 deletions
````diff
@@ -27,18 +27,9 @@ RamaLama pulls AI Models from model registries. Starting a chatbot or a rest API
 
 When both Podman and Docker are installed, RamaLama defaults to Podman, The `RAMALAMA_CONTAINER_ENGINE=docker` environment variable can override this behaviour. When neither are installed RamaLama attempts to run the model with software on the local system.
 
-Note:
+Note: On MacOS systems that use Podman for containers, configure the Podman machine to use the `libkrun` machine provider. The `libkrun` provider enables containers within the Podman Machine access to the Mac's GPU. See **[ramalama-macos(7)](ramalama-macos.7.md)** for further information.
 
-On MacOS systems that use Podman for containers, configure the Podman machine
-to use the `libkrun` machine provider. The `libkrun` provider enables
-containers within the Podman Machine access to the Mac's GPU.
-See **[ramalama-macos(7)](ramalama-macos.7.md)** for further information.
-
-Note:
-
-On systems with NVIDIA GPUs, see **[ramalama-cuda(7)](ramalama-cuda.7.md)** to correctly configure the host system.
-
-Default settings for flags are defined in **[ramalama.conf(5)](ramalama.conf.5.md)**.
+Note: On systems with NVIDIA GPUs, see **[ramalama-cuda(7)](ramalama-cuda.7.md)** to correctly configure the host system. Default settings for flags are defined in **[ramalama.conf(5)](ramalama.conf.5.md)**.
 
 ## SECURITY
 
@@ -93,7 +84,7 @@ the model. The following table specifies the order which RamaLama reads the file
 | Administrators | /etc/ramamala/shortnames.conf |
 | Users | $HOME/.config/ramalama/shortnames.conf |
 
-```code
+```toml
 $ cat /usr/share/ramalama/shortnames.conf
 [shortnames]
 "tiny" = "ollama://tinyllama"
````
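The second hunk's table describes a precedence order for `shortnames.conf` files (distribution, then administrators, then users, with later files overriding earlier ones). A hypothetical shell sketch of that lookup order; `resolve_shortname` and its naive TOML scan are illustrative only, not RamaLama's actual implementation (which is in Python):

```shell
# Hypothetical sketch of the shortname precedence described above:
# each later file overrides a match from an earlier one.
resolve_shortname() {
    name=$1
    result=""
    for f in /usr/share/ramalama/shortnames.conf \
             /etc/ramalama/shortnames.conf \
             "$HOME/.config/ramalama/shortnames.conf"; do
        [ -r "$f" ] || continue
        # naive TOML scan: lines of the form "name" = "value"
        v=$(sed -n "s|^\"$name\" *= *\"\(.*\)\"|\1|p" "$f")
        [ -n "$v" ] && result=$v
    done
    printf '%s\n' "$result"
}
```

With the example config above in place, `resolve_shortname tiny` would print `ollama://tinyllama`; a user-level file defining the same key would win over the distribution copy.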

docs/readme/wsl2-podman-cuda.md

Lines changed: 3 additions & 2 deletions
````diff
@@ -26,6 +26,7 @@ Follow the installation instructions provided in the [NVIDIA Container Toolkit i
 2. **Update the packages list from the repository:**
 ```bash
 sudo apt-get update
+```
 3. **Install the NVIDIA Container Toolkit packages:**
 ```bash
 sudo apt-get install -y nvidia-container-toolkit
@@ -47,9 +48,9 @@ Follow the installation instructions provided in the [NVIDIA Container Toolkit i
 Open and edit the NVIDIA container runtime configuration:
 ```bash
 nvidia-ctk cdi list
-```
-**We Should See Something Like This**
 ```
+**We Should See Something Like This**
+```bash
 INFO[0000] Found 1 CDI devices
 nvidia.com/gpu=all
 ```
````
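The expected `nvidia-ctk cdi list` output shown in that hunk can also be checked mechanically. A hedged sketch; the sample output is hardcoded here as an assumption, since on a real host you would pipe the command itself:

```shell
# Sketch: count registered CDI GPU devices. The sample text below is the
# doc's expected output; on a real system, replace the variable with:
#   cdi_output=$(nvidia-ctk cdi list)
cdi_output='INFO[0000] Found 1 CDI devices
nvidia.com/gpu=all'

# Device lines (not the INFO header) start with the vendor prefix.
gpu_count=$(printf '%s\n' "$cdi_output" | grep -c '^nvidia.com/')
if [ "$gpu_count" -ge 1 ]; then
    echo "CDI GPU devices visible: $gpu_count"
else
    echo "No CDI GPU devices found; check the toolkit configuration" >&2
fi
```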

docsite/.gitignore

Lines changed: 20 additions & 0 deletions
```diff
@@ -0,0 +1,20 @@
+# Dependencies
+/node_modules
+
+# Production
+/build
+
+# Generated files
+.docusaurus
+.cache-loader
+
+# Misc
+.DS_Store
+.env.local
+.env.development.local
+.env.test.local
+.env.production.local
+
+npm-debug.log*
+yarn-debug.log*
+yarn-error.log*
```

docsite/Makefile

Lines changed: 51 additions & 0 deletions
```diff
@@ -0,0 +1,51 @@
+# RamaLama Documentation Site Makefile
+#
+# This Makefile provides commands for building and managing the RamaLama
+# documentation site powered by Docusaurus.
+
+.PHONY: help convert dev build serve clean install
+
+# Default target - show help
+help: ## Show this help message
+	@echo "RamaLama Documentation Site"
+	@echo "=========================="
+	@echo
+	@echo "Available commands:"
+	@echo
+	@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z_-]+:.*?## / {printf "  \033[36m%-12s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)
+	@echo
+
+convert: ## Convert manpages from ../docs to MDX format for docsite
+	@echo "Converting manpages to MDX format..."
+	@python3 convert_manpages.py
+	@echo "✅ Manpage conversion complete!"
+
+install: ## Install dependencies
+	@echo "Installing dependencies..."
+	@npm install
+	@echo "✅ Dependencies installed!"
+
+dev: ## Start the development server
+	@echo "Starting development server..."
+	@npm start
+
+build: ## Build the production site
+	@echo "Building production site..."
+	@npm run build
+	@echo "✅ Build complete! Output in ./build/"
+
+serve: ## Serve the built site locally
+	@echo "Serving built site locally..."
+	@npm run serve
+
+clean: ## Clean build artifacts and node_modules
+	@echo "Cleaning build artifacts..."
+	@rm -rf build .docusaurus node_modules
+	@echo "✅ Clean complete!"
+
+all: install convert build ## Install deps, convert manpages, and build site
+
+# Development workflow targets
+quick-dev: convert dev ## Convert manpages and start dev server
+
+rebuild: clean install convert build ## Full rebuild from scratch
```
