Positional encodings in LLMs
- Speaker: Valeria Ruscio
- Date & Time: Thursday 04 June 2026, 17:00 - 17:45
- Venue: Lecture Theatre 2, Computer Laboratory, William Gates Building
Abstract
Positional encodings are essential for transformer-based language models to understand sequence order, yet their influence extends far beyond simple position tracking. This talk explores the landscape of positional encoding methods in LLMs and reveals surprising insights about how these architectural choices shape model behavior.
We begin with the fundamental challenge: why attention mechanisms require explicit positional information. We then survey the evolution of encoding strategies, from sinusoidal approaches to modern techniques like RoPE, examining their architectural implications and trade-offs.
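As background for the survey above, the original sinusoidal scheme assigns each position a fixed vector of sines and cosines at geometrically spaced frequencies; the function name below is illustrative, but the formula follows the standard transformer definition.

```python
import numpy as np

def sinusoidal_positions(seq_len: int, d_model: int) -> np.ndarray:
    # Standard sinusoidal encoding:
    #   PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    #   PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    pos = np.arange(seq_len)[:, None]          # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]       # (1, d_model/2)
    angles = pos / (10000 ** (2 * i / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)               # even dims: sine
    pe[:, 1::2] = np.cos(angles)               # odd dims: cosine
    return pe

pe = sinusoidal_positions(seq_len=16, d_model=8)
```

Because each frequency pair is a fixed rotation per position step, the encoding of position `pos + k` is a linear function of the encoding of `pos`, which is what lets attention learn relative offsets from absolute encodings.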
We then examine how these encoding strategies shape model representations in practice, analysing the limitations of each approach and tracing how positional information propagates through transformer layers to influence what the model learns.
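As a concrete illustration of one modern technique mentioned above, RoPE injects position by rotating query/key feature pairs by a position-dependent angle, so that dot products depend only on the relative offset between tokens. This is a minimal sketch (the function name and shapes are assumptions for illustration, not a specific library's API):

```python
import numpy as np

def rope_rotate(x: np.ndarray, pos: int, base: float = 10000.0) -> np.ndarray:
    """Apply rotary position embedding to a single vector x at position pos.

    Consecutive pairs (x[2i], x[2i+1]) are rotated by angle pos * theta_i,
    with theta_i = base^(-2i / d), as in the RoPE formulation.
    """
    d = x.shape[0]
    i = np.arange(d // 2)
    theta = base ** (-2 * i / d)        # per-pair rotation frequencies
    angle = pos * theta
    cos, sin = np.cos(angle), np.sin(angle)
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x, dtype=float)
    out[0::2] = x1 * cos - x2 * sin     # 2-D rotation of each pair
    out[1::2] = x1 * sin + x2 * cos
    return out

# The key property: <rope(q, m), rope(k, n)> depends only on m - n.
q = np.array([1.0, 0.0, 0.5, -0.5])
k = np.array([0.2, 1.0, -0.3, 0.7])
score_a = np.dot(rope_rotate(q, 5), rope_rotate(k, 3))  # offset 2
score_b = np.dot(rope_rotate(q, 7), rope_rotate(k, 5))  # offset 2
```

Here `score_a` and `score_b` agree (up to floating point), which is the relative-position invariance that makes RoPE attractive for length extrapolation.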
Series
This talk is part of the Foundation AI series.
Included in Lists
- All Talks (aka the CURE list)
- Artificial Intelligence Research Group Talks (Computer Laboratory)
- bld31
- Cambridge Centre for Data-Driven Discovery (C2D3)
- Cambridge Forum of Science and Humanities
- Cambridge Language Sciences
- Cambridge talks
- Chris Davis' list
- Department of Computer Science and Technology talks and seminars
- Guy Emerson's list
- Hanchen DaDaDash
- Interested Talks
- Lecture Theatre 2, Computer Laboratory, William Gates Building
- Martin's interesting talks
- ndk22's list
- ob366-ai4er
- PhD related
- rp587
- School of Technology
- Speech Seminars
- Trust & Technology Initiative - interesting events
- yk373's list
- yk449
Note: Ex-directory lists are not shown.