[2509.19371] How to inject knowledge efficiently? Knowledge Infusion Scaling Law for Pre-training Large Language Models

#LargeLanguageModels #ArtificialIntelligence #KnowledgeInfusion

Comments

Kontxt @kontxt The paper examines how to inject domain-specific knowledge into large language models (LLMs) during pre-training to improve performance on specialized benchmarks. It emphasizes the balance required in knowledge infusion: too little yields limited gains, while over-infusion causes 'memory collapse'. The authors propose a knowledge infusion scaling law that predicts the optimal amount of domain knowledge to integrate during pre-training, and their experiments show consistent results across model sizes.
Oct 5th, 2025
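The functional form of the paper's scaling law is not given in the summary above, so as a rough illustration of the idea, here is a minimal sketch that fits a hypothetical power-law relation D_opt = a * N^b between model size N and the optimal domain-knowledge token budget D_opt. The data points, the power-law form, and the function names are all assumptions for demonstration, not values from the paper:

```python
import math

# Hypothetical (model size in parameters, optimal knowledge tokens) pairs.
# These numbers are illustrative only -- NOT results from the paper.
model_sizes = [1e8, 3e8, 1e9, 3e9]
optimal_tokens = [2e9, 5e9, 1.4e10, 3.5e10]

# Fit log(D_opt) = log(a) + b * log(N) by ordinary least squares.
xs = [math.log(n) for n in model_sizes]
ys = [math.log(d) for d in optimal_tokens]
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
log_a = mean_y - b * mean_x

def predict_optimal_tokens(n_params: float) -> float:
    """Extrapolate the fitted power law to a new model size (hypothetical)."""
    return math.exp(log_a) * n_params ** b

print(predict_optimal_tokens(7e9))  # e.g. extrapolate to a 7B-parameter model
```

A fit like this would let one pick a knowledge-token budget for an unseen model size before pre-training, which is the practical use the summary attributes to the scaling law; the actual law in the paper may take a different form.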