Three Ways of Using Large Language Models to Evaluate Chat
This paper describes the systems submitted by team6 for ChatEval, the DSTC 11 Track 4 competition.
Tags: Paper and LLMs, Chatbot
- Pricing Type: Free
GitHub Link
https://github.com/oplatek/chateval-llm
Introduction
Title: GitHub – oplatek/chateval-llm: Enhancing Chat Evaluation Using Large Language Models
Summary: The GitHub repository “oplatek/chateval-llm” contains the system description of the DSTC11 Track 4 submission, covering three different approaches to leveraging Large Language Models for improved chat evaluation. The repository offers insights into using these methods to assess conversational agents.
Three Ways of Using Large Language Models to Evaluate Chat: a system description of the DSTC11 Track 4 submission.
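The core LLM-as-judge idea behind this kind of chat evaluation — prompting a large language model to rate a chatbot response and parsing a numeric score from its answer — can be sketched as below. This is a minimal illustration, not the authors' exact setup: `call_llm`, the prompt wording, and the 1–5 scale are all illustrative assumptions.

```python
# Hedged sketch of LLM-based chat evaluation: ask a model to rate a
# response, then extract a numeric score from its free-text reply.
# `call_llm` is a hypothetical stand-in for any LLM completion API.
import re


def build_judge_prompt(context: str, response: str) -> str:
    """Assemble an evaluation prompt for a single dialogue turn."""
    return (
        "You are evaluating a chatbot reply.\n"
        f"Dialogue context:\n{context}\n"
        f"Candidate response:\n{response}\n"
        "Rate the response's appropriateness on a scale of 1-5. "
        "Answer with a single number."
    )


def parse_score(llm_output: str) -> float:
    """Pull the first number out of the model's reply."""
    match = re.search(r"\d+(?:\.\d+)?", llm_output)
    if match is None:
        raise ValueError(f"No numeric score in: {llm_output!r}")
    return float(match.group())


def evaluate_turn(context: str, response: str, call_llm) -> float:
    """Score one context/response pair with the given LLM callable."""
    return parse_score(call_llm(build_judge_prompt(context, response)))


# Example with a stubbed model; a real system would call an LLM API here.
fake_llm = lambda prompt: "4"
score = evaluate_turn("User: How are you?", "I'm great, thanks!", fake_llm)
print(score)  # 4.0
```

Swapping in a real completion API for `fake_llm` is the only change needed to run this against an actual model; the prompt and parsing logic stay the same.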
