unworriedsafari 2024-02-25 21:33:04 +00:00
parent 1a4b720c4f
commit 134c33a151
7 changed files with 276 additions and 285 deletions

README.md

@@ -1,6 +1,6 @@
# README
`mill.py v1.1.1`
`mill.py v1.1.2`
Markdown interface for [llama.cpp](//github.com/ggerganov/llama.cpp).
@@ -10,7 +10,7 @@ Markdown interface for [llama.cpp](//github.com/ggerganov/llama.cpp).
1. [Python 3.x](//python.org) (tested on `3.11`)
2. [llama.cpp](//github.com/ggerganov/llama.cpp) (tested on `b1860`)
Developed and tested on Linux. I believe it should also work on Windows or Mac.
Developed and tested on Linux. I believe it could also work on Windows or Mac.
## Features
@@ -145,48 +145,12 @@ Output:
>
## Adding support for other languages
Markdown support is included. To add another language:
1. Create a new Python module named `mill_lang_language_id` where all
non-alphanumeric characters of `language_id` are replaced by underscores.
2. Implement a `parse` function similar to the one in `mill_lang_markdown.py`.
3. Put your module anywhere on the Python path of `mill.py`.
4. When using the CLI interface, pass the `-l language_id` argument.
5. When using the CGI interface, pass the `language=language_id` query-string
parameter.
## Adding support for other LLMs
`llama.cpp` support is included. Adding support for LLMs is similar to adding
support for other languages:
1. Create a new Python module named `mill_llm_llm_id` where all
non-alphanumeric characters of the `llm_id` part are replaced by
underscores.
2. Implement a `generate` function similar to the one in
`mill_llm_llama_cpp.py`.
3. Put your module anywhere on the Python path of `mill.py`.
4. When using the CLI interface, pass the `-e llm_id` argument.
5. When using the CGI interface, pass the `llm_engine=llm_id` query-string
parameter.
## CLI install + usage
1. Clone the Git repo or download these files:
1. `mill_cli.py`
2. `mill.py`
3. `mill_lang_markdown.py`
4. `mill_llm_llama_cpp.py`
5. `mill_example_markdown_llama_cpp.py`
2. Put files 2-5 on the Python path of `mill_cli.py`. Easy solution: put all
files in the same folder.
3. Set the environment variable `MILL_LLAMACPP_MAIN` to the path of
1. Clone the Git repo or download a release tarball and unpack it.
2. Set the environment variable `MILL_LLAMACPP_MAIN` to the path of
`llama.cpp/main` or your wrapper around it.
4. Pipe your Markdown document to `mill_cli.py`.
3. Pipe your Markdown document to `mill_cli.py`.
```bash
export MILL_LLAMACPP_MAIN=/path/to/llama.cpp/main
@@ -210,22 +174,16 @@ use `-h` for a usage description.
## CGI install + usage
1. Clone the Git repo or download these files:
1. `mill_cgi.py`
2. `mill.py`
3. `mill_lang_markdown.py`
4. `mill_llm_llama_cpp.py`
5. `mill_example_markdown_llama_cpp.py`
2. Put files 2-5 on the Python path of `mill_cgi.py`. Easy solution: put all
files in the same folder.
3. Set the environment variable `MILL_LLAMACPP_MAIN` to the path of
1. Clone the Git repo or download a release tarball and unpack it.
2. Set the environment variable `MILL_LLAMACPP_MAIN` to the path of
`llama.cpp/main` or your wrapper around it.
4. Start your CGI web server.
3. Start your CGI web server.
```bash
mkdir -pv public_html/cgi-bin
cp -v mill_cgi.py public_html/cgi-bin
cp -v mill.py public_html/cgi-bin
cp -v mill_readme.py public_html/cgi-bin
cp -v mill_lang_markdown.py public_html/cgi-bin
cp -v mill_llm_llama_cpp.py public_html/cgi-bin
cp -v mill_example_markdown_llama_cpp.py public_html/cgi-bin
@@ -282,8 +240,8 @@ until it is assigned to again.
`mill.py` parses the text in a single pass from top-to-bottom and then calls
the LLM at the end. Some syntax variables affect input parsing. Assignments to
a variable overwrite any existing value. For LLM variables arguments, the final
value of the variable is the value passed on to the LLM.
a variable overwrite any existing value. For LLM variables, the final value of
a variable is the value passed on to the LLM.
The following two subsections explain variables in more detail. For each
variable, the default value is given as the value.
@@ -443,6 +401,52 @@ For each invocation, a prompt cache is generated. `mill.py` searches for a
matching prompt cache after parsing.
## Adding support for other languages
To add support for another language:
1. Create a new Python module named `mill_lang_<language_id>` where all
non-alphanumeric characters of `language_id` are replaced by underscores.
2. Implement a `parse` function similar to the one in `mill_lang_markdown.py`
(see the sketch after this list).
3. Add a docstring to the module. This docstring serves as the module's README.
4. Put your module anywhere on the Python path of `mill.py`.
5. When using the CLI interface, pass the `-l <language_id>` argument.
6. When using the CGI interface, pass the `language=<language_id>` query-string
parameter.
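
For illustration, a minimal language module might look like the sketch below.
The `plaintext` id and the `parse` signature are assumptions made for this
example; the authoritative contract is the `parse` function in
`mill_lang_markdown.py`.

```python
"""This docstring doubles as the module's README (step 3)."""
# mill_lang_plaintext.py -- hypothetical language module, sketch only.
# Mirror the real `parse` signature from mill_lang_markdown.py rather
# than the assumed one below.


def parse(lines):
    # Assumed interface: take the input document as a list of lines and
    # yield them back unchanged. A real module would recognize the
    # language's block syntax for `mill` and `mill-llm` variables here.
    for line in lines:
        yield line
```

With the module on the Python path, it would be selected with `-l plaintext`
on the CLI or `language=plaintext` over CGI.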
## Adding support for other LLMs
Adding support for another LLM is similar to adding support for another
language:
1. Create a new Python module named `mill_llm_<llm_id>` where all
non-alphanumeric characters of `llm_id` are replaced by underscores.
2. Implement a `generate` function similar to the one in
`mill_llm_llama_cpp.py` (see the sketch after this list).
3. Add a docstring to the module. This docstring serves as the module's README.
4. Put your module anywhere on the Python path of `mill.py`.
5. When using the CLI interface, pass the `-e <llm_id>` argument.
6. When using the CGI interface, pass the `llm_engine=<llm_id>` query-string
parameter.
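
A skeletal engine module could follow the same shape. The `echo` id and the
`generate` signature are likewise hypothetical; copy the real signature from
`mill_llm_llama_cpp.py`.

```python
"""This docstring doubles as the module's README (step 3)."""
# mill_llm_echo.py -- hypothetical LLM engine module, sketch only. Note
# the underscore rule: an id like `llama.cpp` maps to mill_llm_llama_cpp,
# while `echo` needs no rewriting.


def generate(prompt):
    # Assumed interface: receive the assembled prompt and return the
    # generated text. A real engine would invoke the model here and
    # stream tokens as they arrive.
    return prompt  # echoing back is handy for testing document parsing
```

It would then be selected with `-e echo` on the CLI or `llm_engine=echo` over
CGI.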
## Adding example documentation
It's possible to add example documentation for specific combinations of
`language_id` and `llm_id`:
1. Create a new Python module named `mill_example_<language_id>_<llm_id>`.
2. Create a global `example` variable in it and give it a string value. This
value is printed in the README below the 'Features' list.
3. Create a global `runnable_example` variable in it and give it a string
value. This value is printed at the end of the README.
The `example` variable is pure documentation. The `runnable_example` variable,
by contrast, is meant to hold text that `mill.py` can execute, turning the
README into an executable document.
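
For the hypothetical `plaintext`/`echo` pair sketched above, the whole module
could be as small as this; the two string globals are the entire interface.

```python
# mill_example_plaintext_echo.py -- hypothetical example module, sketch
# only. `example` and `runnable_example` are the names mill.py expects.

example = """
## Example
Documentation-only text, printed in the README below the 'Features' list.
"""

runnable_example = """
## Runnable example
Text that `mill.py` should be able to execute, printed at the end of the
README so that the README itself stays an executable document.
"""
```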
## Runnable example
```mill-llm

mill.py

@@ -15,180 +15,50 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>
"""
# README
`mill.py v1.1.1`
Markdown interface for [llama.cpp](//github.com/ggerganov/llama.cpp).
## Requirements
1. [Python 3.x](//python.org) (tested on `3.11`)
2. [llama.cpp](//github.com/ggerganov/llama.cpp) (tested on `b1860`)
Developed and tested on Linux. I believe it should also work on Windows or Mac.
## Features
1. Lets you interact with `llama.cpp` using Markdown
2. Enables you to use almost every `llama.cpp` option
3. Makes no assumptions about what model you want to use
4. Lets you change any option at any point in the document
5. Caches prompts automatically
6. Streams output
7. Runs in a CLI environment as well as a CGI environment
8. Reads input document from `stdin`, writes output document to `stdout`
9. Lets you add support for any other language or LLM through Python modules
## Example
Contents of `hello.md`:
## Variables
```mill-llm
--model
mixtral-8x7b-instruct-v0.1.Q5_0.gguf
```
```mill-llm
--ctx-size
0
```
```mill-llm
--keep
-1
```
```mill
message template
Me:
> [INST] [/INST]
Bot:
>
```
```mill
prompt indent
>
```
## Chat
```mill
prompt start
```
Me:
> [INST] Hello, how are you? [/INST]
Bot:
>
Command:
```bash
export MILL_LLAMACPP_MAIN=path/to/llama.cpp/main
/path/to/mill_cli.py <hello.md
```
Output:
## Variables
```mill-llm
--model
mixtral-8x7b-instruct-v0.1.Q5_0.gguf
```
```mill-llm
--ctx-size
0
```
```mill-llm
--keep
-1
```
```mill
message template
Me:
> [INST] [/INST]
Bot:
>
```
```mill
prompt indent
>
```
## Chat
```mill
prompt start
```
Me:
> [INST] Hello, how are you? [/INST]
Bot:
> Hello! I'm just a computer program, so I don't have feelings, but I'm here to help you with any questions you have to the best of my ability. Is there something specific you would like to know or talk about?</s>
Me:
> [INST] [/INST]
Bot:
>
## Adding support for other languages
Markdown support is included. To add another language:
To add support for another language:
1. Create a new Python module named `mill_lang_language_id` where all
1. Create a new Python module named `mill_lang_<language_id>` where all
non-alphanumeric characters of `language_id` are replaced by underscores.
2. Implement a `parse` function similar to the one in `mill_lang_markdown.py`.
3. Put your module anywhere on the Python path of `mill.py`.
4. When using the CLI interface, pass the `-l language_id` argument.
5. When using the CGI interface, pass the `language=language_id` query-string
3. Add a docstring to the module. This docstring serves as the module's README.
4. Put your module anywhere on the Python path of `mill.py`.
5. When using the CLI interface, pass the `-l <language_id>` argument.
6. When using the CGI interface, pass the `language=<language_id>` query-string
parameter.
## Adding support for other LLMs
`llama.cpp` support is included. Adding support for LLMs is similar to adding
support for other languages:
Adding support for another LLM is similar to adding support for another
language:
1. Create a new Python module named `mill_llm_llm_id` where all
non-alphanumeric characters of the `llm_id` part are replaced by
underscores.
1. Create a new Python module named `mill_llm_<llm_id>` where all
non-alphanumeric characters of `llm_id` are replaced by underscores.
2. Implement a `generate` function similar to the one in
`mill_llm_llama_cpp.py`.
3. Put your module anywhere on the Python path of `mill.py`.
4. When using the CLI interface, pass the `-e llm_id` argument.
5. When using the CGI interface, pass the `llm_engine=llm_id` query-string
3. Add a docstring to the module. This docstring serves as the module's README.
4. Put your module anywhere on the Python path of `mill.py`.
5. When using the CLI interface, pass the `-e <llm_id>` argument.
6. When using the CGI interface, pass the `llm_engine=<llm_id>` query-string
parameter.
## Adding example documentation
It's possible to add example documentation for specific combinations of
`language_id` and `llm_id`:
1. Create a new Python module named `mill_example_<language_id>_<llm_id>`.
2. Create a global `example` variable in it and give it a string value. This
value is printed in the README below the 'Features' list.
3. Create a global `runnable_example` variable in it and give it a string
value. This value is printed at the end of the README.
The `example` variable is pure documentation. The `runnable_example` variable,
by contrast, is meant to hold text that `mill.py` can execute, turning the
README into an executable document.
"""
import importlib, re

mill_cgi.py

@@ -19,22 +19,16 @@
r"""
## CGI install + usage
1. Clone the Git repo or download these files:
1. `mill_cgi.py`
2. `mill.py`
3. `mill_lang_markdown.py`
4. `mill_llm_llama_cpp.py`
5. `mill_example_markdown_llama_cpp.py`
2. Put files 2-5 on the Python path of `mill_cgi.py`. Easy solution: put all
files in the same folder.
3. Set the environment variable `MILL_LLAMACPP_MAIN` to the path of
1. Clone the Git repo or download a release tarball and unpack it.
2. Set the environment variable `MILL_LLAMACPP_MAIN` to the path of
`llama.cpp/main` or your wrapper around it.
4. Start your CGI web server.
3. Start your CGI web server.
```bash
mkdir -pv public_html/cgi-bin
cp -v mill_cgi.py public_html/cgi-bin
cp -v mill.py public_html/cgi-bin
cp -v mill_readme.py public_html/cgi-bin
cp -v mill_lang_markdown.py public_html/cgi-bin
cp -v mill_llm_llama_cpp.py public_html/cgi-bin
cp -v mill_example_markdown_llama_cpp.py public_html/cgi-bin
@@ -60,7 +54,7 @@ Use the `language` and `llm_engine` query-string parameters to select a
different language or LLM.
"""
import contextlib, io, mill, os, sys, urllib.parse
import contextlib, io, mill, mill_readme, os, sys, urllib.parse
if __name__ == '__main__':
@@ -73,33 +67,17 @@ if __name__ == '__main__':
args = urllib.parse.parse_qs(os.environ.get('QUERY_STRING',''))
language = args.get('language', 'markdown')
language_mod = mill.load_module(f'lang_{language}')
llm_engine = args.get('llm_engine', 'llama.cpp')
llm_engine_mod = mill.load_module(f'llm_{llm_engine}')
if os.environ['REQUEST_METHOD'].upper() == 'GET':
print('Content-type: text/markdown')
print()
print(mill.__doc__.strip())
print()
print()
print(__doc__.strip())
print()
print()
print(language_mod.__doc__.strip())
print()
print()
print(llm_engine_mod.__doc__.strip())
try:
example = \
mill.load_module(f'example_{language}_{llm_engine}')
print()
print()
print(example.__doc__.strip())
except ModuleNotFoundError:
pass
mill_readme.print_readme(language, llm_engine)
exit(0)
language_mod = mill.load_module(f'lang_{language}')
llm_engine_mod = mill.load_module(f'llm_{llm_engine}')
input_lines = sys.stdin.buffer.read(
int(os.environ['CONTENT_LENGTH'])).decode()

mill_cli.py

@@ -19,17 +19,10 @@
r"""
## CLI install + usage
1. Clone the Git repo or download these files:
1. `mill_cli.py`
2. `mill.py`
3. `mill_lang_markdown.py`
4. `mill_llm_llama_cpp.py`
5. `mill_example_markdown_llama_cpp.py`
2. Put files 2-5 on the Python path of `mill_cli.py`. Easy solution: put all
files in the same folder.
3. Set the environment variable `MILL_LLAMACPP_MAIN` to the path of
1. Clone the Git repo or download a release tarball and unpack it.
2. Set the environment variable `MILL_LLAMACPP_MAIN` to the path of
`llama.cpp/main` or your wrapper around it.
4. Pipe your Markdown document to `mill_cli.py`.
3. Pipe your Markdown document to `mill_cli.py`.
```bash
export MILL_LLAMACPP_MAIN=/path/to/llama.cpp/main
@@ -51,7 +44,7 @@ Use the command-line arguments to select a different language or LLM. You can
use `-h` for a usage description.
"""
import argparse, mill, sys
import argparse, mill, mill_readme, sys
if __name__ == '__main__':
@@ -72,24 +65,7 @@ if __name__ == '__main__':
input_lines = sys.stdin.readlines()
if args.readme or not ''.join([line.strip() for line in input_lines]):
print(mill.__doc__.strip())
print()
print()
print(__doc__.strip())
print()
print()
print(language.__doc__.strip())
print()
print()
print(llm_engine.__doc__.strip())
try:
example = \
mill.load_module(f'example_{args.language}_{args.llm_engine}')
print()
print()
print(example.__doc__.strip())
except ModuleNotFoundError:
pass
mill_readme.print_readme(args.language, args.llm_engine)
exit(0)
exit_code = mill.main(language, llm_engine, input_lines)

mill_example_markdown_llama_cpp.py

@@ -14,7 +14,128 @@
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>
example = """
## Example
Contents of `hello.md`:
## Variables
```mill-llm
--model
mixtral-8x7b-instruct-v0.1.Q5_0.gguf
```
```mill-llm
--ctx-size
0
```
```mill-llm
--keep
-1
```
```mill
message template
Me:
> [INST] [/INST]
Bot:
>
```
```mill
prompt indent
>
```
## Chat
```mill
prompt start
```
Me:
> [INST] Hello, how are you? [/INST]
Bot:
>
Command:
```bash
export MILL_LLAMACPP_MAIN=path/to/llama.cpp/main
/path/to/mill_cli.py <hello.md
```
Output:
## Variables
```mill-llm
--model
mixtral-8x7b-instruct-v0.1.Q5_0.gguf
```
```mill-llm
--ctx-size
0
```
```mill-llm
--keep
-1
```
```mill
message template
Me:
> [INST] [/INST]
Bot:
>
```
```mill
prompt indent
>
```
## Chat
```mill
prompt start
```
Me:
> [INST] Hello, how are you? [/INST]
Bot:
> Hello! I'm just a computer program, so I don't have feelings, but I'm here to help you with any questions you have to the best of my ability. Is there something specific you would like to know or talk about?</s>
Me:
> [INST] [/INST]
Bot:
>
"""
runnable_example = """
## Runnable example
```mill-llm

mill_lang_markdown.py

@@ -46,8 +46,8 @@ until it is assigned to again.
`mill.py` parses the text in a single pass from top-to-bottom and then calls
the LLM at the end. Some syntax variables affect input parsing. Assignments to
a variable overwrite any existing value. For LLM variables arguments, the final
value of the variable is the value passed on to the LLM.
a variable overwrite any existing value. For LLM variables, the final value of
a variable is the value passed on to the LLM.
The following two subsections explain variables in more detail. For each
variable, the default value is given as the value.

mill_readme.py

@@ -15,29 +15,71 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>
"""
Generate README for all built-in `mill.py` modules.
# README
`mill.py v1.1.2`
Markdown interface for [llama.cpp](//github.com/ggerganov/llama.cpp).
## Requirements
1. [Python 3.x](//python.org) (tested on `3.11`)
2. [llama.cpp](//github.com/ggerganov/llama.cpp) (tested on `b1860`)
Developed and tested on Linux. I believe it could also work on Windows or Mac.
## Features
1. Lets you interact with `llama.cpp` using Markdown
2. Enables you to use almost every `llama.cpp` option
3. Makes no assumptions about what model you want to use
4. Lets you change any option at any point in the document
5. Caches prompts automatically
6. Streams output
7. Runs in a CLI environment as well as a CGI environment
8. Reads input document from `stdin`, writes output document to `stdout`
9. Lets you add support for any other language or LLM through Python modules
"""
import mill
import mill_cli, mill_cgi
import mill_lang_markdown, mill_llm_llama_cpp
import mill_example_markdown_llama_cpp
import mill, mill_cgi, mill_cli
if __name__ == "__main__":
print(mill.__doc__.strip())
def print_readme(language, llm_engine):
language_mod = mill.load_module(f'lang_{language}')
llm_engine_mod = mill.load_module(f'llm_{llm_engine}')
try:
example_mod = mill.load_module(f'example_{language}_{llm_engine}')
except ModuleNotFoundError:
example_mod = None
print(__doc__.strip())
print()
print()
if example_mod:
print(example_mod.example.strip())
print()
print()
print(mill_cli.__doc__.strip())
print()
print()
print(mill_cgi.__doc__.strip())
print()
print()
print(mill_lang_markdown.__doc__.strip())
print(language_mod.__doc__.strip())
print()
print()
print(mill_llm_llama_cpp.__doc__.strip())
print(llm_engine_mod.__doc__.strip())
print()
print()
print(mill_example_markdown_llama_cpp.__doc__.strip())
print(mill.__doc__.strip())
if example_mod:
print()
print()
print(example_mod.runnable_example.strip())
if __name__ == "__main__":
print_readme('markdown', 'llama.cpp')
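
Since `print_readme` writes the assembled README to stdout, regenerating
documentation for another module pair should just be a matter of calling it
with different ids; a sketch, reusing the hypothetical modules from earlier:

```python
# Sketch: print the combined README for a hypothetical plaintext/echo
# pair. Assumes mill_lang_plaintext and mill_llm_echo are importable.
import mill_readme

mill_readme.print_readme('plaintext', 'echo')
```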