Description
Evaluates an LLM's effectiveness as a team member in a business environment by assessing its ability to provide accurate and contextually relevant responses. It utilizes diverse queries covering both technical (such as coding) and non-technical areas.
Provider
Prosus
Language
English
Evaluation
Automatic evaluation with GPT-4o as a judge, comparing model responses against ground-truth answers.
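The judge-based evaluation described above can be sketched as follows. This is a minimal illustration, not the benchmark's actual implementation: the prompt wording, the ACCEPT/REJECT verdict format, and the function names are all assumptions.

```python
# Sketch of an LLM-as-judge evaluation loop: a judge model (e.g. GPT-4o)
# compares a candidate answer against the ground-truth answer and returns
# an accept/reject verdict. Prompt and parsing details are illustrative.

def build_judge_prompt(question: str, ground_truth: str, candidate: str) -> str:
    """Construct the grading prompt sent to the judge model (hypothetical wording)."""
    return (
        "You are grading an assistant's answer against a reference.\n"
        f"Question: {question}\n"
        f"Reference answer: {ground_truth}\n"
        f"Candidate answer: {candidate}\n"
        "Reply with exactly ACCEPT or REJECT."
    )

def parse_verdict(judge_reply: str) -> bool:
    """Map the judge's raw reply to an acceptance boolean."""
    return judge_reply.strip().upper().startswith("ACCEPT")

def acceptance_rate(verdicts: list[bool]) -> float:
    """Fraction of accepted samples -- the kind of score a leaderboard reports."""
    return sum(verdicts) / len(verdicts) if verdicts else 0.0
```

In practice, `build_judge_prompt` would be sent to the judge model's chat API for each of the benchmark's samples, and the per-sample verdicts aggregated with `acceptance_rate`.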
Data Statistics
Number of Samples: 275
Collection Period: February 2022 - October 2023
Tags
Tags that describe the type of question asked.
Complexity
The complexity level of the questions.


Have a unique use-case you’d like to test?

We want to evaluate how LLMs perform on your specific, real-world task. You might discover that a small, open-source model delivers the performance you need at a lower cost than proprietary models. We can also add custom filters to enhance your insights into LLM capabilities. Each time a new model is released, we'll provide you with updated performance results.

Please briefly describe your use case and motivation, and we'll get back to you with details on how we can add your benchmark.