This repository contains a prompting strategy that enables tool/function calling in LLM workflows that do not support it natively. It currently focuses on creating and updating files, with generic function calling planned for the future.
The system prompt instructs the model to render all code in the txtar format. You can then pipe the model's output to a script such as `response2fs.py` to turn the txtar output into filesystem operations.
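For reference, txtar (originally from the Go tools repository) is a trivial text archive format: any prose before the first marker line is treated as a comment, and each `-- name --` line starts a new file whose contents run until the next marker. A model response in this format might look like the following (the file name and surrounding prose are illustrative):

```
Sure, here is the program you asked for.
-- hello.py --
print("Hello, alice!")
```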
You can evaluate different models by installing `llm` and changing the model used in `build_hello_response.sh`.
- Install `llm` using uv:

  ```shell
  $ uv tool install llm
  ```

- Install the MLX llm plugin:

  ```shell
  $ llm install llm-mlx
  ```

- Download Mistral-Small-24B-Instruct-2501-4bit:

  ```shell
  $ llm mlx download-model mlx-community/Mistral-Small-24B-Instruct-2501-4bit
  ```

- Run:

  ```shell
  $ make test
  ```

- You should see `Hello, alice!` if all went well.
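The conversion step can be sketched in a few lines of Python. This is a minimal illustration of the idea behind `response2fs.py`, not its actual implementation; the `apply_txtar` helper and the sample model response below are hypothetical:

```python
import pathlib
import subprocess
import sys
import tempfile

def apply_txtar(text: str, root: pathlib.Path) -> list[pathlib.Path]:
    """Write each `-- name --` section of a txtar document under root."""
    written: list[pathlib.Path] = []
    name, lines = None, []

    def flush() -> None:
        # Write out the file collected so far, if any.
        if name is not None:
            path = root / name
            path.parent.mkdir(parents=True, exist_ok=True)
            path.write_text("".join(lines))
            written.append(path)

    for line in text.splitlines(keepends=True):
        stripped = line.strip()
        if stripped.startswith("-- ") and stripped.endswith(" --"):
            flush()  # finish the previous file before starting a new one
            name, lines = stripped[3:-3].strip(), []
        elif name is not None:
            lines.append(line)  # text before the first marker is a comment

    flush()
    return written

# Hypothetical model response containing one txtar file section.
response = 'Sure, here is the program:\n-- hello.py --\nprint("Hello, alice!")\n'

with tempfile.TemporaryDirectory() as tmp:
    files = apply_txtar(response, pathlib.Path(tmp))
    result = subprocess.run([sys.executable, str(files[0])],
                            capture_output=True, text=True)
    print(result.stdout.strip())  # Hello, alice!
```

A real implementation would also want to validate file names (for example, rejecting absolute paths and `..` components) before writing anything the model asked for.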