Inspired by Matt Webb's Multiplayer AI posts. Can an LLM, in a conversation with multiple bots and humans, figure out when to respond and when to let the others speak?
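One way to frame the turn-taking question is to ask the model itself before every message. Below is a minimal sketch of that check, assuming a hypothetical `llm(prompt) -> str` completion helper (substitute your client of choice); the prompt wording is an illustration, not the actual experiment's prompt.

```python
# Ask the model whether it is this bot's turn to talk, before generating a reply.
SPEAK_OR_PASS = """You are one participant ("{name}") in a group chat with
humans and other bots. Here is the recent transcript:

{transcript}

Should {name} reply to the last message, or stay silent and let others
speak? Answer with exactly one word: SPEAK or PASS."""

def should_respond(name: str, transcript: list[str], llm) -> bool:
    """Return True if the model says it is this bot's turn to speak."""
    prompt = SPEAK_OR_PASS.format(name=name, transcript="\n".join(transcript))
    return llm(prompt).strip().upper().startswith("SPEAK")

# Stub completion function so the sketch runs without an API key.
def llm(prompt: str) -> str:
    return "PASS"

if __name__ == "__main__":
    history = ["alice: anyone know a good pasta recipe?",
               "bot-chef: carbonara is hard to beat.",
               "bob: thanks bot-chef!"]
    print(should_respond("bot-chef", history, llm))  # False: let the humans talk
```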
Inspired by discussions on how one might inspect what is inside an LLM: could you treat it like the internet? If every bit of information inside the LLM could be referenced through a prompt, then maybe you could browse the LLM by treating every prompt as a URL.
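A toy sketch of the idea, using only the Python standard library: a local HTTP server decodes each URL path into a prompt, and the completion becomes the page. The `llm` helper is a stubbed assumption, standing in for a real model call.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import unquote

# Hypothetical completion helper; replace the stub with a real model call.
def llm(prompt: str) -> str:
    return f"(stub completion for: {prompt})"

class LLMBrowser(BaseHTTPRequestHandler):
    def do_GET(self):
        # The URL path *is* the prompt: /capital-of-france -> "capital of france"
        prompt = unquote(self.path.lstrip("/")).replace("-", " ")
        body = llm(prompt).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Browse to http://localhost:8000/capital-of-france
    HTTPServer(("localhost", 8000), LLMBrowser).serve_forever()
```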
Could you create a fairly complex UX through vibe-coding? After several days of continuous prompting, it seems the current crop of LLMs does not do well with UX patterns. They tend toward MVP-grade design, and some designs are borderline unusable, as if developed by an engineer. A lot of work is needed to make the design usable.
Could you prompt an LLM with poetry to generate code? Could code be represented as poetry? It turns out poetry itself has many styles, so we took a shortcut and found poets to emulate, and the results are quite beautiful.
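A hedged sketch of what that shortcut might look like as a prompt template; the wording and the example poet are assumptions for illustration, not the actual prompts used.

```python
# Wrap a coding task in a style instruction that names a poet to emulate.
CODE_AS_POETRY = """Write a Python function that {task}.
Present it twice: first as working code, then restated as a short poem
in the style of {poet}, keeping each identifier recognizable."""

prompt = CODE_AS_POETRY.format(task="reverses a linked list",
                               poet="Emily Dickinson")
print(prompt)  # paste into your LLM of choice
```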