⏱️ 09/15 (Fri.) 14:00-14:30 at R0 - International Conference Hall
With the rise of large language models, many companies and organizations have attempted to leverage this powerful tool to manage their proprietary data. At Academia Sinica, we are likewise developing an LLM service system, SinicaWisdom, to facilitate administrative work. Generally, large language models answer based on the information they encountered in their training data. However, when faced with questions they cannot answer, they tend to produce incorrect responses rather than admitting they don't know. This behavior forces users to spend significant effort re-inspecting and verifying every answer.
Furthermore, to ground the model's answers in proprietary data, an information retrieval system is typically placed in front of the LLM to first gather documents relevant to the question. This practice limits the knowledge the LLM can draw on and can exacerbate the hallucination problem. In this presentation, we will discuss how we have enabled SinicaWisdom to "acknowledge what it does not know" and provide additional helpful information instead of simply stating "I cannot answer."
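The retrieval-then-generate pattern described above can be sketched as follows. This is a minimal illustration, not SinicaWisdom's actual implementation: the function names, the keyword-overlap scoring, the sample documents, and the prompt wording are all hypothetical stand-ins, and a real system would use a vector index and an actual LLM call.

```python
def retrieve(query, documents, top_k=2):
    """Rank documents by naive keyword overlap with the query (illustrative only)."""
    q_terms = set(query.lower().split())
    scored = [(len(q_terms & set(d.lower().split())), d) for d in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # Keep only documents sharing at least one term, so an empty result
    # signals "nothing relevant was found" to the next stage.
    return [d for score, d in scored[:top_k] if score > 0]

def build_prompt(query, context):
    """Assemble an LLM prompt that permits an explicit 'I don't know'."""
    if not context:
        # No relevant documents: instruct the model to acknowledge the gap
        # instead of guessing an answer.
        return ("No internal documents match the question below. "
                "Say you cannot answer it, and suggest where to look instead.\n"
                f"Question: {query}")
    joined = "\n".join(f"- {d}" for d in context)
    return ("Answer ONLY from the documents below; if they are insufficient, "
            f"say so explicitly.\nDocuments:\n{joined}\nQuestion: {query}")

# Hypothetical proprietary documents for demonstration.
docs = [
    "Travel reimbursement forms are due within 30 days of the trip.",
    "The annual budget review takes place every October.",
]
hits = retrieve("When are travel reimbursement forms due?", docs)
prompt = build_prompt("When are travel reimbursement forms due?", hits)
```

The key design point is the empty-context branch: by detecting when retrieval finds nothing relevant, the prompt can steer the model toward an honest "I cannot answer" plus a pointer to other resources, rather than leaving it to fabricate a response.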