Ollama
https://ollama.com/public/ollama.png
https://ollama.com
Installation
% brew install --cask ollama
https://formulae.brew.sh/cask/ollama
Run (the model is downloaded only on the first run)
% ollama run gemma2:2b
% ollama run codellama
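Besides the interactive prompt from ollama run, the local server also exposes an HTTP API on port 11434. A minimal sketch of calling it from Python (assumes the Ollama server is running on the default localhost:11434 and gemma2:2b has already been pulled):
# Minimal sketch: query the local Ollama HTTP API.
# Assumes the server is running and gemma2:2b is already pulled.
import json
import urllib.request

payload = json.dumps({
    "model": "gemma2:2b",
    "prompt": "Explain local LLMs in one sentence.",
    "stream": False,  # return one JSON object instead of a stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])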
Llama
https://github.com/ollama/ollama
Note
You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
Ollama Python Library
https://github.com/ollama/ollama-python
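A minimal usage sketch of the Python client (assumes pip install ollama, a running Ollama server, and a previously pulled gemma2:2b):
# Minimal sketch using the ollama Python library.
# Assumes: pip install ollama, server running, gemma2:2b pulled.
import ollama

response = ollama.chat(
    model="gemma2:2b",
    messages=[{"role": "user", "content": "Say hello in Japanese."}],
)
print(response["message"]["content"])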
Open WebUI (Formerly Ollama WebUI)
https://github.com/open-webui/open-webui
Ollamaで体験する国産LLM入門 (an introduction to Japanese-made LLMs, hands-on with Ollama)
https://zenn.dev/hellorusk/books/e56548029b391f
#ai #localLLM #python #2024