Pretrained model
Pretraining is the process of producing a general-purpose, flexible model from a massive corpus of broad data. Modern machine learning training (especially in language processing) usually has two phases: pretraining, where the model learns general language, logic, and conceptual features; and finetuning, where the model learns concepts or language specific to a domain, such as finance, construction, or scientific data (see finetuning).
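The two-phase idea can be sketched with a deliberately tiny, hypothetical stand-in for a real language model: a word-bigram counter. "Pretraining" counts bigrams over a broad toy corpus, and "finetuning" continues counting over a small domain corpus, shifting the model's predictions toward the domain. The corpora and function names here are illustrative, not from any real library.

```python
from collections import Counter

def train(corpus, counts=None):
    """Count word bigrams; pass in existing counts to continue training."""
    counts = Counter() if counts is None else counts
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            counts[(a, b)] += 1
    return counts

def next_word(counts, word):
    """Most likely word to follow `word` under the counted bigrams."""
    candidates = {b: c for (a, b), c in counts.items() if a == word}
    return max(candidates, key=candidates.get) if candidates else None

# Phase 1: pretraining on broad, general-purpose text (toy corpus).
general = [
    "the model learns general language",
    "the market is a place to shop",
]
counts = train(general)
print(next_word(counts, "market"))  # "is" after pretraining

# Phase 2: finetuning on domain-specific (finance) text continues from
# the pretrained counts, shifting predictions toward the domain.
finance = [
    "the market closed higher today",
    "the market closed lower today",
]
counts = train(finance, counts)
print(next_word(counts, "market"))  # "closed" after finetuning
```

A real pretrained language model replaces the bigram counts with billions of neural network weights, but the workflow is the same: finetuning starts from the pretrained parameters rather than from scratch, so the domain data only needs to adjust what the model already knows.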