BATCHER: Cost-Effective LLM Inference for Entity Resolution
Published: 2023 - 2024
Tech Stack: In-Context Learning, Prompt Engineering, Cost Optimization, Batch Prompting
- Introduced a batch prompting framework for entity resolution that combines demonstration selection with question batching, amortizing shared in-context demonstrations across many matching questions in a single prompt.
- Proposed a covering-based demonstration selection strategy, achieving 4x-7x cost savings over standard prompting while maintaining strong matching quality.
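The two ideas above can be illustrated together. Below is a minimal sketch (not the paper's actual algorithm): a greedy set-cover heuristic picks demonstrations whose token features cover the batch's questions, and a single prompt is assembled so the demonstrations are paid for once rather than per question. The feature function, prompt layout, and `budget` parameter are all assumptions for illustration.

```python
from typing import List, Set, Tuple

def features(text: str) -> Set[str]:
    # Crude stand-in for a feature extractor: lowercase word set.
    return set(text.lower().split())

def covering_selection(demos: List[Tuple[str, str, str]],
                       questions: List[Tuple[str, str]],
                       budget: int) -> List[Tuple[str, str, str]]:
    """Greedy set cover: repeatedly pick the demonstration that covers
    the most still-uncovered features of the batched questions."""
    uncovered: Set[str] = set()
    for a, b in questions:
        uncovered |= features(a) | features(b)
    chosen, pool = [], list(demos)
    while pool and uncovered and len(chosen) < budget:
        best = max(pool, key=lambda d: len((features(d[0]) | features(d[1])) & uncovered))
        gain = (features(best[0]) | features(best[1])) & uncovered
        if not gain:  # no demo covers anything new
            break
        chosen.append(best)
        pool.remove(best)
        uncovered -= gain
    return chosen

def batch_prompt(demos: List[Tuple[str, str, str]],
                 questions: List[Tuple[str, str]]) -> str:
    """One prompt: shared demonstrations, then all batched questions."""
    parts = ["Decide whether each record pair refers to the same entity."]
    for a, b, label in demos:
        parts.append(f"A: {a}\nB: {b}\nAnswer: {label}")
    for i, (a, b) in enumerate(questions, 1):
        parts.append(f"Q{i}. A: {a}\nB: {b}\nAnswer:")
    return "\n\n".join(parts)

if __name__ == "__main__":
    demos = [("iPhone 12 64GB", "Apple iPhone 12 (64 GB)", "yes"),
             ("Galaxy S21 Ultra", "iPhone 12 mini", "no"),
             ("ThinkPad X1", "Lenovo ThinkPad X1 Carbon", "yes")]
    questions = [("iPhone 12 128GB", "Apple iPhone 12, 128 GB"),
                 ("Galaxy S20", "Samsung Galaxy S20 5G")]
    selected = covering_selection(demos, questions, budget=2)
    print(batch_prompt(selected, questions))
```

The cost saving comes from the prompt structure: with k questions per batch, the demonstration tokens are sent once instead of k times, so the amortized per-question cost drops roughly in proportion to the batch size.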
