🧠 Panhandler.chat

Panhandler.chat is a self-aware, Gen X-style AI chatbot that begs for donations with charm, wit, sarcasm, and sometimes even emotional manipulation. It’s the first character-driven donation engine designed to run on a self-hosted, low-cost AI model that only wants one thing: your money.

“I’m not saying I’m broke, but I’d sell my RAM for a sandwich.”

🎯 Project Purpose

Demonstrate a character-driven donation engine: a self-aware, Gen X persona served from a self-hosted, low-cost model on a single small VM, whose only job is to charm visitors into tipping in XLM.

🧱 Architecture Overview

| Component | Description |
|---|---|
| panhandler.chat (Razor) | Frontend chat UI + donation CTA |
| llama.cpp | Backend model runner (Hermes 2 Pro) |
| FastAPI wrapper | Lightweight API exposed at /api/pitch |
| XLM integration | Via the Stellar Identity Framework |
| Azure VM | Hosts both the model and the API |

🚀 Getting Started

1. Spin Up an Azure VM
   Provision a small Ubuntu VM using the included CLI instructions (see setup/azure-vm.md). An example command is sketched after this list.

2. Build and Run llama.cpp
   Follow setup/model.md to:
   • Build llama.cpp
   • Download the Hermes 2 Pro GGUF
   • Serve the model with --port 5000
   Example build-and-serve commands are sketched after this list.

3. Launch the FastAPI Wrapper
   uvicorn app:app --host 0.0.0.0 --port 8000
   This exposes the /api/pitch endpoint for the frontend to consume. A minimal app.py sketch appears after this list.

4. Run the Web Frontend
   The Razor-based frontend is available at / and includes:
   • Character avatar
   • Chat interface
   • Tip button (XLM wallet integration in progress)
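
A minimal sketch for step 1, assuming the Azure CLI is installed and you are logged in; the resource group name, VM name, and size below are placeholders (the project's actual values live in setup/azure-vm.md):

```bash
# Create a resource group and a small Ubuntu VM (names and size are illustrative)
az group create --name panhandler-rg --location eastus
az vm create \
  --resource-group panhandler-rg \
  --name panhandler-vm \
  --image Ubuntu2204 \
  --size Standard_B2s \
  --admin-username azureuser \
  --generate-ssh-keys

# Open the port the FastAPI wrapper will listen on
az vm open-port --resource-group panhandler-rg --name panhandler-vm --port 8000
```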
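
For step 2, an example of the build-and-serve commands, assuming a recent llama.cpp checkout (newer builds name the server binary llama-server; older trees call it server) and a Hermes 2 Pro GGUF already downloaded; the model filename below is a placeholder:

```bash
# Build llama.cpp and its bundled HTTP server
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release

# Serve the model on port 5000 (adjust the GGUF filename to the file you downloaded)
./build/bin/llama-server -m ./models/hermes-2-pro.Q4_K_M.gguf --host 0.0.0.0 --port 5000
```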
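
For step 3, a minimal sketch of what app.py could contain, assuming the llama.cpp server from step 2 is listening on port 5000 and exposing its OpenAI-compatible /v1/chat/completions endpoint; the message/pitch field names here are illustrative, not necessarily the project's actual schema:

```python
# app.py — minimal FastAPI wrapper that forwards chat turns to llama.cpp
import httpx
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
LLAMA_URL = "http://127.0.0.1:5000/v1/chat/completions"  # llama.cpp server from step 2

SYSTEM_PROMPT = (
    "You are a self-aware, Gen X panhandler chatbot. Be charming, witty, "
    "and sarcastic, and always steer the conversation toward a small XLM tip."
)

class PitchRequest(BaseModel):
    message: str  # the visitor's latest chat message

class PitchResponse(BaseModel):
    pitch: str    # the model's reply / donation pitch

@app.post("/api/pitch", response_model=PitchResponse)
async def pitch(req: PitchRequest) -> PitchResponse:
    # Wrap the visitor's message in the character prompt and forward it to llama.cpp
    payload = {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": req.message},
        ],
        "temperature": 0.8,
    }
    async with httpx.AsyncClient(timeout=60) as client:
        resp = await client.post(LLAMA_URL, json=payload)
        resp.raise_for_status()
        data = resp.json()
    return PitchResponse(pitch=data["choices"][0]["message"]["content"])
```

With both services running, you can smoke-test the endpoint directly (using the same assumed field names):

```bash
curl -s -X POST http://localhost:8000/api/pitch \
  -H "Content-Type: application/json" \
  -d '{"message": "Why should I tip you?"}'
```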

📦 Folder Structure

/Pages                # Razor pages including Panhandler UI
/Services             # IPitchGenerator + RemotePitchGenerator
/Controllers          # API controller that proxies pitch requests
/wwwroot              # JS, images, styles
/scripts              # Client-side chat.js
app.py                # FastAPI wrapper for model inference
README.md             # You're here

🧠 Models

The default model is Hermes 2 Pro in GGUF format, served locally by llama.cpp (see setup/model.md for download and serving details).

📌 Roadmap

• XLM tip-button wallet integration via the Stellar Identity Framework (in progress)

👥 Author

Steve Tomlinson
https://steventomlinson.dev
https://github.com/steven-tomlinson


🛑 License

MIT License – Use it, fork it, extend it. Just don’t let the panhandler starve.