About this project
JsonTuning: Towards Generalizable, Robust, and Controllable Instruction Tuning
Leveraging LLMs for Unsupervised Dense Retriever Ranking
Selecting which Dense Retriever to use for Zero-Shot Search
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels
Fusion in information retrieval
SIGIR 2024
Zerox
ICTIR 2024
Publishing a Crate
Python Recommendation
Embedding Models
Things to keep in mind when communicating
NTCIREVAL
Web performance tuning
Prompt Engineering Guide
Self-Consistency Improves Chain of Thought Reasoning in Language Models (ICLR 23)
Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? (EMNLP 22)
Information extraction from Maisoku (real estate flyer) PDFs using large language models