On AI from Serbia by Marko Radojčić

A civil engineer and experienced IT guy reflecting on AI. Personal blog.

Hugging Face Publish

Published on April 16, 2025

How to use published models in Ollama with a custom script (GNU/Linux or WSL2):

setup_yugo_florida_ollama.sh

#!/bin/bash

set -e

MODEL_REPO="MarkoRadojcic/YugoGPT-Florida_Q4_0.GGUF"
MODEL_NAME="YugoGPT-Florida_Q4_0.gguf"
MODEL_DIR="yugo-florida"
OLLAMA_NAME="yugo-florida"
VENV_DIR="temp"

echo "[0/6] Creating and activating Python virtual environment..."
python3 -m venv "$VENV_DIR"
source "$VENV_DIR/bin/activate"

echo "[1/6] Installing huggingface_hub inside virtualenv..."
pip install --upgrade pip
pip install huggingface_hub

echo "[2/6] Downloading model from Hugging Face..."
huggingface-cli download "$MODEL_REPO" --local-dir "$MODEL_DIR" --local-dir-use-symlinks False

echo "[3/6] Creating Modelfile for Ollama (pointing at the downloaded GGUF)..."
cat > "$MODEL_DIR/Modelfile" <<EOF
FROM ./$MODEL_NAME
PARAMETER num_ctx 4096
EOF

echo "[4/6] Building the model with Ollama..."
cd "$MODEL_DIR"
ollama create "$OLLAMA_NAME" -f Modelfile

echo "[5/6] Running the model..."
ollama run "$OLLAMA_NAME"
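The final step drops you into an interactive chat. Once the model is built, it can also be queried non-interactively over Ollama's local HTTP API. A minimal sketch, assuming the default Ollama server on port 11434 (the model name matches OLLAMA_NAME above; the prompt is just an illustration):

```shell
# Build the request body for Ollama's /api/generate endpoint.
# "stream": false returns one JSON object instead of a token stream.
PAYLOAD='{"model": "yugo-florida", "prompt": "Zdravo! Ko si ti?", "stream": false}'

# Sanity-check that the payload is valid JSON before sending it.
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload ok"

# Send it to the local Ollama server (requires ollama to be running):
# curl -s http://localhost:11434/api/generate -d "$PAYLOAD"
```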

How to use:

chmod +x setup_yugo_florida_ollama.sh
./setup_yugo_florida_ollama.sh

The script activates the "temp" virtual environment only inside its own shell, so no deactivate is needed afterwards; the "temp" virtual environment can now be removed.
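A sketch of the cleanup, using the VENV_DIR and MODEL_DIR values from the script above (the mkdir line only recreates the directories so the snippet is self-contained):

```shell
# Recreate the two directories the script leaves behind, then remove them.
mkdir -p temp yugo-florida

rm -rf temp           # the throwaway Python virtual environment
rm -rf yugo-florida   # the downloaded GGUF; ollama create imported it
                      # into Ollama's own model store, so it is safe to delete

# To delete the built model from Ollama's store as well (optional):
# ollama rm yugo-florida
```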