►Previous Threads:
Previous Thread: >>92473453
The Previous Thread Before the Last Thread: >>92468569
The Previous Thread Before That: >>92457644
►FAQ:
>Wiki
Might be coming soon?
>Main FAQ
https://rentry.org/er2qd
>Helpful LLM Links
https://rentry.org/localmodelslinks
►Guides & Resources:
>LLaMA Guide/Resources
https://www.reddit.com/r/LocalLLaMA/comments/122x9ld/new_oobabooga_standard_8bit_and_4bit_plus_llama/ (Gen.Guide|TEMP)
https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/ (TEMP)
https://github.com/emptycrown/llama-hub (Llama Hub)
https://blog.lastmileai.dev/using-openais-retrieval-plugin-with-llama-d2e0b6732f14 (ChatGPT Plugins 4 LLaMA)
>Alpaca Guide/Resources
https://github.com/antimatter15/alpaca.cpp (CPU)
https://github.com/tloen/alpaca-lora (LoRA 4 GPUs)
https://huggingface.co/chavinlo/alpaca-13b/tree/main (New Native Model)
>GPT4ALL Guide/Resources
https://github.com/ggerganov/llama.cpp#using-gpt4all (General Guide)
>Chinese ChatGPT Guide/Resources
https://github.com/THUDM/ChatGLM-6B/blob/main/README_en.md (General Guide)
https://github.com/THUDM/GLM-130B/blob/main/docs/quantization.md (Quantization)
>Pyggy Guide/Resources (w/ Kobold/Tavern guide)
https://rentry.org/Pyggymancy (Windows)
https://rentry.org/pygmalion-local (Linux)
>Local Models & Old Papers
https://rentry.org/localmodelsoldpapers
►Hardware Inference Guides:
>CPU Inference (quick Python sketch at the end of this section)
https://github.com/ggerganov/llama.cpp
>GPU Inference
https://github.com/oobabooga/text-generation-webui
https://github.com/wawawario2/text-generation-webui (w/ Long Term Memory)
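For anons who just want to confirm a quantized model actually loads and generates before wiring up a frontend, here is a minimal CPU-inference sketch. It assumes the llama-cpp-python bindings (pip install llama-cpp-python) and a locally converted ggml q4_0 file; neither is linked above, so treat the package, model path, and prompt format as illustrative stand-ins, not part of the guides.
[code]
# Minimal sketch: CPU inference on a quantized LLaMA/Alpaca checkpoint.
# Assumption: llama-cpp-python is installed and a ggml q4_0 model file
# has been produced per the llama.cpp README (path below is hypothetical).
from llama_cpp import Llama

llm = Llama(model_path="./models/7B/ggml-model-q4_0.bin", n_threads=8)

output = llm(
    "### Instruction:\nName three facts about llamas.\n\n### Response:\n",
    max_tokens=128,
    stop=["### Instruction:"],
)
print(output["choices"][0]["text"])
[/code]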
►Other Resources:
>Model Torrents
https://rentry.org/nur779
>Miku .json Pastebins
https://pastebin.com/5WVd28Um
>GPTQ for LLaMA
https://github.com/qwopqwop200/GPTQ-for-LLaMa
>RolePlayBoT for RP (Smut story writing version coming soon, Stupid Sexy Llama)
https://rentry.org/RPBT
>Original Miku.sh
https://pastebin.com/vWKhETWS
>/lmg/ OP Thread Template
https://rentry.org/LMG-thread-template