1. Question 1
Suppose a query has a total of 4 relevant documents in the collection. System A and System B have each retrieved 10 documents, and the relevance status…

2. Question 2
Assume the same scenario as in Question 1. What is the recall of both systems?
1 point
- R(A) = 1/40, R(B) = 2/40
- R(A) = 9/10, R(B) =…

3. Question 3
Assume the same scenario as in Question 1. What is the average precision of both systems?
1 point
- AP(A) = 1/20, AP(B) = 1/5
- AP(A) =…

4. Question 4
Assume you have two retrieval systems X and Y. For a specific query, system X has a higher precision at 10 documents compared to Y. Can system…

5. Question 5
Can a retrieval system have an F1 score of 0.75 and a precision of 0.5?
1 point
- No
- Yes

6. Question 6
For any ranked list of search results, precision at 10 documents is always higher than precision at 20 documents.
1 point
- False
- True

7. Question 7
What can you say about the precision-recall (PR) curve?
1 point
- It is always monotonically increasing.
- The ideal system should have the PR curve as…

8. Question 8
Which is correct about average precision?
1 point
- It combines precision and recall.
- It does not show the difference between ranks of relevant documents.

9. Question 9
Which of the following is NOT true about Cranfield evaluation methodology?
1 point
- It simulates real document collections.
- It simulates user queries.
- It does…

10. Question 10
Which of the following is wrong about nDCG@k?
1 point
- It can be used to compare across queries.
- It discounts only top ranked documents.
- It…
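The concrete relevance lists for systems A and B are truncated in the quiz text, so the scenario in Question 1 cannot be reproduced exactly. Still, the metrics the questions cover can be sketched in a few lines of Python. The ranked list below is purely hypothetical (1 = relevant, 0 = not relevant), chosen only to illustrate how precision@k, recall, average precision, and nDCG@k are computed.

```python
import math

def precision_at_k(rels, k):
    """Fraction of the top-k retrieved documents that are relevant."""
    return sum(rels[:k]) / k

def recall(rels, total_relevant):
    """Fraction of all relevant documents in the collection that were retrieved."""
    return sum(rels) / total_relevant

def average_precision(rels, total_relevant):
    """Average of precision@k over the ranks k where a relevant document appears,
    divided by the total number of relevant documents."""
    hits, score = 0, 0.0
    for rank, rel in enumerate(rels, start=1):
        if rel:
            hits += 1
            score += hits / rank
    return score / total_relevant

def dcg_at_k(gains, k):
    """Discounted cumulative gain: each gain is discounted by log2(rank + 1)."""
    return sum(g / math.log2(rank + 1) for rank, g in enumerate(gains[:k], start=1))

def ndcg_at_k(gains, k):
    """DCG normalized by the ideal DCG (gains sorted in descending order)."""
    ideal = dcg_at_k(sorted(gains, reverse=True), k)
    return dcg_at_k(gains, k) / ideal if ideal > 0 else 0.0

# Hypothetical system: 10 retrieved documents, 4 relevant in the collection,
# relevant documents found at ranks 1 and 4.
system = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]
print(precision_at_k(system, 10))    # 2/10 = 0.2
print(recall(system, 4))             # 2/4  = 0.5
print(average_precision(system, 4))  # (1/1 + 2/4) / 4 = 0.375
print(ndcg_at_k(system, 10))
```

Note how average precision rewards relevant documents at early ranks: the same two relevant documents placed at ranks 9 and 10 would give a much lower AP, even though precision@10 and recall are unchanged.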