📚 STUDY/PAPER REVIEW

    [Paper Reading] Multi-Task Learning for Knowledge Graph Completion with Pre-trained Language Models

    Keywords: Completion, Graph, KG, KGC, PLM | Year: 2020 | Authors: Bosung Kim et al. | Venue: COLING 2020
    Memo: LP-RP-RR (link prediction, relation prediction, relevance ranking). Adds multi-task learning on top of KG-BERT.
    Category: Research | Status: DONE | Created: Nov 27, 2023, 4:09 AM | Last edited: Nov 27, 2023, 1:05 PM
    Working: @inproceedings{Kim2020MultiTaskLF, title={Multi-Task Learning for Knowledge Graph Completion with Pre-trained Language Models}, author={..
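
    The memo only names the task combination, so below is a minimal sketch of the idea: a shared BERT-style encoder produces a [CLS] vector that feeds three task heads whose losses are summed. The class name, layer sizes, relation count, and equal loss weighting are illustrative assumptions, not the paper's exact setup.

    ```python
    import torch
    import torch.nn as nn

    class MultiTaskKGHeads(nn.Module):
        """Three task-specific heads over a shared [CLS] vector from a BERT-style
        encoder (hidden size and relation count are illustrative)."""
        def __init__(self, hidden_size=768, num_relations=237):
            super().__init__()
            self.link_cls = nn.Linear(hidden_size, 2)             # LP: is the triple plausible?
            self.rel_cls = nn.Linear(hidden_size, num_relations)  # RP: which relation holds?
            self.rank_score = nn.Linear(hidden_size, 1)           # RR: relevance score

        def forward(self, cls_vec):
            return self.link_cls(cls_vec), self.rel_cls(cls_vec), self.rank_score(cls_vec)

    def multi_task_loss(heads, cls_pos, cls_neg, link_labels, rel_labels, margin=1.0):
        """Sum of the three task losses; equal weighting is an assumption."""
        link_logits, rel_logits, score_pos = heads(cls_pos)
        _, _, score_neg = heads(cls_neg)  # [CLS] vectors of corrupted (negative) triples
        lp = nn.functional.cross_entropy(link_logits, link_labels)
        rp = nn.functional.cross_entropy(rel_logits, rel_labels)
        # relevance ranking: a true triple should outscore its corrupted counterpart
        target = torch.ones_like(score_pos.squeeze(-1))
        rr = nn.functional.margin_ranking_loss(
            score_pos.squeeze(-1), score_neg.squeeze(-1), target, margin=margin)
        return lp + rp + rr
    ```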

    [Paper Reading] Do Pre-trained Models Benefit Knowledge Graph Completion? A Reliable Evaluation and a Reasonable Approach

    Keywords: Completion, Graph, KG, KGC, LLM | Year: 2022 | Authors: Xin Lv et al. | Venue: ACL Findings 2022
    Memo: PKGC. Feeds the PLM a triple prompt plus a support prompt as input.
    Category: Research | Status: DONE | Created: Nov 21, 2023, 3:13 PM | Last edited: Nov 27, 2023, 3:10 AM
    Working: @inproceedings{Lv2022DoPM, title={Do Pre-trained Models Benefit Knowledge Graph Completion? A Reliable..
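
    As a rough illustration of the memo's "triple prompt + support prompt" input, the helper below assembles such a sequence; the template wording, separator, and function name are assumptions for illustration, not PKGC's actual prompts.

    ```python
    def build_pkgc_input(head, relation, tail, support_facts):
        """Concatenate a triple prompt with support prompts (e.g. entity
        definitions or attributes). The wording is illustrative only."""
        triple_prompt = f"{head} {relation} {tail} ."
        support_prompt = " ".join(support_facts)
        return f"{triple_prompt} [SEP] {support_prompt}"

    # The PLM then scores the whole sequence as a plausible / implausible triple.
    text = build_pkgc_input(
        "LeBron James", "plays for", "Los Angeles Lakers",
        ["LeBron James: American professional basketball player.",
         "Los Angeles Lakers: NBA team based in Los Angeles."])
    ```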

    [Paper Reading] KG-BERT: BERT for Knowledge Graph Completion

    Keywords: Completion, Graph, KG, KGC, PLM | Year: 2019 | Authors: Liang Yao et al. | Venue: ArXiv 2019
    Memo: KG-BERT.
    Category: Research | Status: DONE | Created: Nov 21, 2023, 2:30 PM | Last edited: Nov 22, 2023, 12:19 PM
    Working: @article{Yao2019KGBERTBF, title={KG-BERT: BERT for Knowledge Graph Completion}, author={Liang Yao and Chengsheng Mao and Yuan Luo}, journal={ArXiv}, year={2019}, volume={abs/1909.03193}, url={https://api.seman..
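
    For reference, a minimal KG-BERT-style triple scorer: the head, relation, and tail texts are packed into one sequence and BERT classifies the triple as plausible or not. This sketch simplifies the original segment handling, and without fine-tuning on KG triples the score is meaningless.

    ```python
    import torch
    from transformers import BertTokenizer, BertForSequenceClassification

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

    def score_triple(head_text, relation_text, tail_text):
        # Pack the three text segments into one sequence and classify it.
        enc = tokenizer(f"{head_text} [SEP] {relation_text} [SEP] {tail_text}",
                        return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**enc).logits
        return torch.softmax(logits, dim=-1)[0, 1].item()  # probability of "plausible"

    print(score_triple("Steve Jobs", "founded", "Apple Inc."))
    ```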

    [Paper Reading] Direct Preference Optimization: Your Language Model is Secretly a Reward Model

    Keywords: LLM | Year: 2023 | Authors: Rafael Rafailov et al. | Venue: ArXiv
    Memo: DPO.
    Category: Research | Status: DONE | Created: Nov 19, 2023, 5:54 PM | Last edited: Nov 20, 2023, 12:08 PM
    Working: @article{Rafailov2023DirectPO, title={Direct Preference Optimization: Your Language Model is Secretly a Reward Model}, author={Rafael Rafailov and Archit Sharma and Eric Mitchell and Stefano..
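
    Since the memo only says "DPO", here is a minimal sketch of the paper's objective, written over per-sequence log-probabilities (token log-probs summed under the policy and a frozen reference model); how those log-probabilities are computed is left to the caller.

    ```python
    import torch
    import torch.nn.functional as F

    def dpo_loss(policy_chosen_logps, policy_rejected_logps,
                 ref_chosen_logps, ref_rejected_logps, beta=0.1):
        """DPO objective on per-sequence log-probabilities.
        beta controls how far the policy may drift from the frozen reference."""
        chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
        rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
        # maximize the margin between the implicit rewards of chosen and rejected answers
        return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
    ```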