Ranking Metrics

These functions help evaluate ranking problems, such as recommendation systems, where recommendations must be ranked by their predicted scores.

Metrics.avg_precision — Function
avg_precision(y_rec, y_rel, k = 10)

Evaluates how strongly the relevant documents are concentrated among the highest-ranked predictions.

Calculated as ∑ (Recall@i − Recall@(i−1)) × Precision@i for i = 1, 2, …, k

Here, y_rec holds the predicted probabilities for the recommendations, and y_rel is 1 if the corresponding result is relevant, else 0. Both y_rec and y_rel are expected to have shape (1, N_elements).
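The formula above can be sketched as follows. This is an illustrative Python re-implementation of the average-precision sum, not the package's actual (Julia) code; the function name and argument order mirror the signature above but are otherwise an assumption.

```python
def avg_precision(y_rec, y_rel, k=10):
    # Illustrative sketch: rank items by predicted score (descending),
    # then accumulate (Recall@i - Recall@(i-1)) * Precision@i over the top k.
    order = sorted(range(len(y_rec)), key=lambda i: y_rec[i], reverse=True)[:k]
    total_relevant = sum(y_rel)
    if total_relevant == 0:
        return 0.0
    ap = 0.0
    hits = 0          # relevant items seen so far in the ranking
    prev_recall = 0.0
    for rank, idx in enumerate(order, start=1):
        hits += y_rel[idx]
        precision_i = hits / rank
        recall_i = hits / total_relevant
        ap += (recall_i - prev_recall) * precision_i
        prev_recall = recall_i
    return ap
```

Note that the recall increment is nonzero only at ranks where a relevant item appears, so the sum reduces to the familiar "average of Precision@i over relevant hits, normalized by the number of relevant items".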

source
Metrics.ranking_stats_k — Function
ranking_stats_k(y_rec, y_rel, k = 10)

Evaluates the relevancy of the top k recommendations using precision@k, recall@k, and f1_score@k, and returns the result as a Dict.

Here, y_rec holds the predicted probabilities for the recommendations, and y_rel is 1 if the corresponding result is relevant, else 0. Both y_rec and y_rel are expected to have shape (1, N_elements).

  • precision_k is evaluated as Recommended_items_that_are_relevant / Total_Recommended_items.
  • recall_k is evaluated as Recommended_items_that_are_relevant / Total_Relevant_items.
  • f1_k is evaluated as 2 * Recommended_items_that_are_relevant / (Total_Recommended_items + Total_Relevant_items).
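These three quantities can be sketched as follows. This is an illustrative Python version of the formulas above, not the package's actual (Julia) implementation; the function name and the Dict keys used here are assumptions.

```python
def ranking_stats_k(y_rec, y_rel, k=10):
    # Illustrative sketch: take the top-k items by predicted score and
    # compute precision@k, recall@k, and f1@k from the counts above.
    order = sorted(range(len(y_rec)), key=lambda i: y_rec[i], reverse=True)[:k]
    recommended_relevant = sum(y_rel[i] for i in order)
    total_recommended = len(order)
    total_relevant = sum(y_rel)
    precision_k = recommended_relevant / total_recommended if total_recommended else 0.0
    recall_k = recommended_relevant / total_relevant if total_relevant else 0.0
    denom = total_recommended + total_relevant
    f1_k = 2 * recommended_relevant / denom if denom else 0.0
    return {"precision_k": precision_k, "recall_k": recall_k, "f1_k": f1_k}
```

For example, with two relevant items in the collection and one of them in the top 2, precision@2, recall@2, and f1@2 all come out to 0.5.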
source