Concept: confusion matrix (category: machine learning)

This is an excerpt from Manning's book Machine Learning with R, the tidyverse, and mlr.
To get a better idea of which groups are being correctly classified and which are being misclassified, we can construct a confusion matrix. A confusion matrix is simply a tabular representation of the true and predicted class of each case in the test set.
With mlr, we can calculate the confusion matrix using the calculateConfusionMatrix() function. The first argument is the $pred component of our holdoutCV object, which contains the true and predicted classes of the test set. The optional argument relative asks the function to also show the proportion of each class among the true and predicted class labels.
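mlr is an R package, so the call described above belongs in an R session. As a language-neutral illustration of what such a function tabulates, here is a minimal Python sketch (not mlr's API; the spam/ham labels are invented) that cross-tabulates true and predicted classes, including row-wise proportions loosely analogous to what relative = TRUE reports:

```python
# Python sketch of the same idea (mlr itself is an R package; the
# spam/ham labels here are invented for illustration).
import pandas as pd

true_class = pd.Series(["spam", "spam", "ham", "ham", "spam", "ham"], name="true")
pred_class = pd.Series(["spam", "ham", "ham", "ham", "spam", "spam"], name="predicted")

# Absolute counts: rows are true classes, columns are predicted classes.
print(pd.crosstab(true_class, pred_class))

# Row-normalised proportions, loosely analogous to relative = TRUE:
# each row shows how the cases of one true class were spread
# across the predicted classes.
print(pd.crosstab(true_class, pred_class, normalize="index"))
```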

This is an excerpt from Manning's book Real-World Machine Learning.
In many classification problems, it's useful to go beyond simple counting accuracy and look at class-wise accuracy, or class confusion. The four resulting counts (true positives, false positives, false negatives, and true negatives) are conveniently displayed in a two-by-two diagram called a confusion matrix, shown in figure 4.13.
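As a rough sketch of where those four numbers come from, the snippet below tallies them for a small binary problem; the 0/1 label vectors are invented for the example:

```python
# Counting the four cells of a binary confusion matrix.
# The 0/1 labels are invented (1 = positive class).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

pairs = list(zip(y_true, y_pred))
tp = pairs.count((1, 1))  # true positives: predicted 1, actually 1
fp = pairs.count((0, 1))  # false positives: predicted 1, actually 0
fn = pairs.count((1, 0))  # false negatives: predicted 0, actually 1
tn = pairs.count((0, 0))  # true negatives: predicted 0, actually 0

# Arrange the four counts as a two-by-two confusion matrix.
print("            predicted 1   predicted 0")
print(f"actual 1    {tp:11d}   {fn:11d}")
print(f"actual 0    {fp:11d}   {tn:11d}")
```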

This is an excerpt from Manning's book Introducing Data Science: Big data, machine learning, and more, using Python tools.
Now we can compare the prediction to the actual outcomes using a confusion matrix.
A confusion matrix shows how wrongly (or correctly) a model predicted, that is, how much it got “confused.” In its simplest form, it is a 2x2 table for models that classify observations as being either A or B. Let’s say we have a classification model that predicts whether somebody will buy our newest product: deep-fried cherry pudding. We can predict either “Yes, this person will buy” or “No, this person won’t buy.” Once we make our prediction for 100 people, we can compare it to their actual behavior, which shows us how many times we got it right. An example is shown in table 3.1.
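For concreteness, here is a small sketch using scikit-learn's confusion_matrix(); the buy/no-buy outcomes below are invented and do not reproduce the actual counts in the book's table 3.1:

```python
from sklearn.metrics import confusion_matrix

# Invented outcomes for 10 shoppers (the book's table 3.1 holds the
# real counts): what each person actually did, and what we predicted.
actual    = ["buy", "no-buy", "buy", "no-buy", "buy",
             "buy", "no-buy", "no-buy", "buy", "no-buy"]
predicted = ["buy", "no-buy", "no-buy", "no-buy", "buy",
             "buy", "buy", "no-buy", "buy", "no-buy"]

# `labels` fixes the row/column order: first row = actual "buy",
# second row = actual "no-buy"; columns follow the same order
# for the predicted labels.
matrix = confusion_matrix(actual, predicted, labels=["buy", "no-buy"])
print(matrix)
# [[4 1]
#  [1 4]]
```

Pinning the order with labels= is worth doing whenever it matters which class counts as the positive one, since otherwise the rows and columns follow sorted label order.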