Architecture
Transformer
A neural network architecture introduced by Google researchers in the 2017 paper "Attention Is All You Need". It uses self-attention to process all positions of a sequence in parallel rather than step by step, and it is the foundation of modern LLMs such as GPT and Claude.
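The self-attention step at the heart of the architecture can be sketched in a few lines. This is a minimal, illustrative NumPy version of scaled dot-product self-attention for a single head; the array shapes and weight names (`Wq`, `Wk`, `Wv`) are assumptions for the example, not taken from any particular model.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # Project each token embedding into query, key, and value vectors.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # Scaled dot-product scores: every position attends to every other
    # position at once, which is why the sequence is processed in parallel.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the key axis turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted mix of the value vectors.
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                        # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one attended vector per input token
```

Real transformers add multiple heads, masking, and learned projections trained by backpropagation, but the core parallel mixing of the sequence is the computation shown here.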