>"In the in-context learning (ICL) paradigm, given a set of examples, the model has to learn the mapping from inputs to outputs. Prior research has demonstrated that LLMs implicitly compress this mapping into a latent activation, called the task vector"
Related:
- In-Context Learning Creates Task Vectors: https://arxiv.org/abs/2310.15916
- Function Vectors in Large Language Models: https://functions.baulab.info/
- https://x.com/graceluo_/status/1852043048043319360
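The core experimental move in these papers is activation patching: cache a hidden state from a run on the in-context demonstrations, then inject it into a separate zero-shot run. A minimal sketch of that mechanism on a toy feed-forward model (not the papers' actual transformer setup; all shapes and weights here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-layer feed-forward "model": each layer computes x -> tanh(W x).
# Stand-in for a transformer's residual stream, purely for illustration.
Ws = [rng.standard_normal((8, 8)) for _ in range(3)]

def forward(x, patch_layer=None, patch_value=None):
    """Run the toy model, optionally overwriting one layer's activation."""
    for i, W in enumerate(Ws):
        x = np.tanh(W @ x)
        if i == patch_layer:
            x = patch_value  # inject the cached "task vector"
    return x

def activations(x):
    """Collect the per-layer activations of a clean run."""
    acts = []
    for W in Ws:
        x = np.tanh(W @ x)
        acts.append(x)
    return acts

# "ICL prompt" run: cache the hidden state after layer 1 as the task vector.
icl_input = rng.standard_normal(8)
task_vector = activations(icl_input)[1]

# "Zero-shot query" run with the task vector patched in at layer 1.
query_input = rng.standard_normal(8)
patched_out = forward(query_input, patch_layer=1, patch_value=task_vector)

# In this toy model everything downstream depends only on the patched
# activation, so the patched run's output matches the ICL run's exactly.
icl_out = forward(icl_input)
print(np.allclose(patched_out, icl_out))  # → True
```

In a real transformer the match is not exact, since later layers also attend to the query tokens; the papers' finding is that the patched vector nonetheless steers the model toward the demonstrated task.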