Harnessing Hallucinations - Guiding LLMs to Success
Large Language Models (LLMs) hallucinate constantly, sometimes producing accurate and profound outputs, but often going awry. For most builders, building on top of existing large-scale LLMs makes the most sense; the craft lies in teasing out high-quality, reliable results from these models through prompt engineering and creative orchestration, as sketched below.
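One minimal sketch of that orchestration pattern: validate the model's output and retry on failure. Here `call_llm` is a hypothetical stand-in for whatever provider client you use, not any particular library's API:

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a provider call (OpenAI, Anthropic, Cohere, ...)."""
    raise NotImplementedError("wire this to your LLM provider of choice")

def reliable_extract(text: str, max_retries: int = 3) -> dict:
    """Ask for structured JSON and retry until the output actually parses."""
    prompt = (
        'Extract the people and places mentioned below as JSON with keys '
        '"people" and "places".\n\n' + text
    )
    for _ in range(max_retries):
        raw = call_llm(prompt)
        try:
            parsed = json.loads(raw)
            # Validate structure before trusting a possibly-hallucinated answer.
            if isinstance(parsed, dict) and {"people", "places"} <= parsed.keys():
                return parsed
        except json.JSONDecodeError:
            pass  # malformed output: tighten the instruction and try again
        prompt += "\n\nReturn ONLY valid JSON, with no commentary."
    raise ValueError(f"no valid output after {max_retries} attempts")
```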
The LLM stack is slowly emerging
Python vs JS
LangChain or LlamaIndex (which wraps LangChain and then some)
Pinecone vs Chroma vs 20 other vector stores
OpenAI vs Cohere vs Anthropic vs Llama vs StableLM vs Dolly2 vs other LLMs
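To make those choices concrete, here is a minimal sketch of one possible wiring (Python + LangChain + Chroma + OpenAI). It assumes the 2023-era langchain package APIs, the chromadb dependency, and an OPENAI_API_KEY in the environment:

```python
# A minimal retrieval-augmented QA pipeline over one possible stack.
# Assumes: pip install langchain chromadb openai, and OPENAI_API_KEY set.
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA

docs = [
    "LLMs hallucinate; retrieval grounds them in real context.",
    "Vector stores index embeddings for fast similarity search.",
]

# Vector store layer: embed the documents and index them in Chroma.
vectorstore = Chroma.from_texts(docs, OpenAIEmbeddings())

# Orchestration layer: retrieve relevant chunks, then ask the LLM.
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    retriever=vectorstore.as_retriever(),
)

print(qa.run("What do vector stores do?"))
```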
Wondering where to start?
If you learn by DOING:
Gregory Kamradt and the good folks at Data Independen...
Alpaca did LLM drops right
✔️ Abstract gives you a simple, direct reason to care (davinci-003 locally @ <$600)
✔️ “Paper” is a webpage that can be read in under 5 mins.
✔️ Web Demo for you to see (full disclosure: it has not been working recently)
✔️ #Github for you to replicate (assuming you have access to LLaMA)
Result: 18k⭐s in 3 weeks
Impromptu - Reid Hoffman - Review
Impromptu, written by Reid Hoffman & GPT-4, makes for an interesting read.
Deceiving with Data
Groening is a hard-to-catch, insidious means of telling misleading stories with data.
Waiting for Her
Explores the feasibility of Samantha from Her (2013) and dives into AI in the field of psychotherapy.
Of Cabbages and Strings
Cabbages and cauliflowers were the result of enterprising individuals experimenting with a wild ancestor. Generative AI today similarly offers accessible, low-cost primitives waiting to be experimented with.
Where are the New Computer Overlords?
Uses a cricket example to examine Google's continued dominance after the advent of IBM Watson, and why LLMs might succeed where IBM Watson failed.