[P] 3Blue1Brown Follow-up: From Hypothetical Examp...

About a year ago, I watched [this](https://www.youtube.com/watch?v=eMlx5fFNoYc&t=367s) 3Blue1Brown LLM tutorial on how a model’s self-attention mechanism is used to predict the next token in a sequence, and I was surprised by how little we know about what actually happens when processing the sentence "A fluffy blue creature roamed the verdant forest." A year later, the field of mechanistic interpretability has seen significant advancements, and we're now able to "decompose" models into interpretable circuits that help explain how LLMs produce predictions. Using the second iteration of an LLM "debugger" I've been working on, I compare the hypothetical representations used in the tutorial to the actual representations I see when extracting a circuit that describes the processing of this specific sentence. If you're into model interpretability, please take a look! [https://peterlai.github.io/gpt-circuits/](https://peterlai.github.io/gpt-circuits/)
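
For anyone who wants the mechanics in front of them: below is a minimal numpy sketch of the single-head, causally masked self-attention step that the tutorial visualizes for this sentence. The weights here are random placeholders rather than any model's actual parameters, so only the shapes and data flow are meaningful, but it shows how each token's output becomes a weighted mix of the value vectors of earlier tokens.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Toy embeddings for the tokens of the example sentence.
rng = np.random.default_rng(0)
tokens = ["A", "fluffy", "blue", "creature", "roamed", "the", "verdant", "forest"]
d_model, d_head = 16, 8
X = rng.normal(size=(len(tokens), d_model))  # (seq_len, d_model)

# Query/key/value projections (random here, purely illustrative).
W_q = rng.normal(size=(d_model, d_head))
W_k = rng.normal(size=(d_model, d_head))
W_v = rng.normal(size=(d_model, d_head))

Q, K, V = X @ W_q, X @ W_k, X @ W_v

# Scaled dot-product scores with a causal mask, so each token
# attends only to itself and earlier positions.
scores = Q @ K.T / np.sqrt(d_head)
mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
scores[mask] = -np.inf

A = softmax(scores, axis=-1)  # attention pattern; each row sums to 1
out = A @ V                   # each row is a weighted mix of value vectors

# How "creature" distributes attention over "A fluffy blue creature".
print(np.round(A[3], 2))
```

A circuit-extraction tool like the one linked above works at the level of these attention patterns and downstream components, identifying which heads and features actually carry the prediction rather than relying on hypothetical examples.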

Posted by ptarlye in r/MachineLearning