[Crossword grid: squares numbered 1/5, 8, 7, 6, 2, 3, and 4; fill in the answers using the clues below.]
1. A large set of texts used to train a model to understand and generate language.
2. The process of breaking text into smaller pieces, called tokens, which can be words or subwords (see the first sketch after the clues).
3. A modeling error that occurs when a model learns the training data too well, failing to generalize to new data.
4. The internal variables of a model that are adjusted during training to minimize prediction error.
5. An architecture that uses self-attention mechanisms to process and generate sequences of data (sketched after the clues).
6. A method of further training a pre-trained model on a specific dataset to improve performance on a particular task (sketched after the clues).
7. A representation of text as an unordered collection of words, disregarding grammar and word order (sketched after the clues).
8. The process of interpreting the meaning of words and phrases in context.
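
Clue 2 describes tokenization. As a rough illustration, the sketch below splits text into word and punctuation tokens with a regular expression; production systems typically use learned subword schemes such as BPE, and the `tokenize` helper here is a hypothetical name chosen for this example, not a real library call.

```python
# Minimal word-level tokenization sketch (clue 2). Real tokenizers use
# learned subword vocabularies; this regex split is for illustration only.
import re

def tokenize(text: str) -> list[str]:
    # Words become tokens; each punctuation mark becomes its own token.
    return re.findall(r"\w+|[^\w\s]", text.lower())

print(tokenize("Tokenization breaks text into tokens!"))
# ['tokenization', 'breaks', 'text', 'into', 'tokens', '!']
```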
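Clue 5 points to the transformer. The sketch below computes one scaled dot-product self-attention step in NumPy; it deliberately omits the learned query/key/value projections and the multi-head structure, so treat it as a sketch of the mechanism rather than a full transformer layer.

```python
import numpy as np

def self_attention(X: np.ndarray) -> np.ndarray:
    # Scaled dot-product self-attention (clue 5), with the learned
    # W_q, W_k, W_v projections omitted for brevity.
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                   # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ X                              # each position mixes all others

X = np.random.default_rng(0).normal(size=(4, 8))    # 4 tokens, dimension 8
print(self_attention(X).shape)                      # (4, 8)
```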
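Clue 6 is fine-tuning. Assuming PyTorch, the sketch below freezes a stand-in "pre-trained" body and trains only a new task head; the layer sizes, labels, and data are toy values invented for illustration, not a real checkpoint or dataset.

```python
import torch
import torch.nn as nn

# Fine-tuning sketch (clue 6): freeze pre-trained weights, train a new head.
body = nn.Sequential(nn.Linear(16, 32), nn.ReLU())      # pretend pre-trained
for p in body.parameters():
    p.requires_grad = False                             # keep these fixed

head = nn.Linear(32, 2)                                 # new task-specific layer
model = nn.Sequential(body, head)
opt = torch.optim.Adam(head.parameters(), lr=1e-3)      # optimize the head only
loss_fn = nn.CrossEntropyLoss()

X, y = torch.randn(64, 16), torch.randint(0, 2, (64,))  # toy labeled data
for _ in range(10):                                     # short training loop
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()
```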
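Clue 7 is the bag-of-words representation. The sketch below counts token occurrences with `collections.Counter`, discarding word order exactly as the clue describes; `bag_of_words` is again a hypothetical helper name.

```python
from collections import Counter

# Bag-of-words sketch (clue 7): a text becomes a multiset of word counts,
# so "the cat sat" and "sat the cat" map to the same representation.
def bag_of_words(tokens: list[str]) -> Counter:
    return Counter(tokens)

print(bag_of_words("the cat sat on the mat".split()))
# Counter({'the': 2, 'cat': 1, 'sat': 1, 'on': 1, 'mat': 1})
```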