
This is an excerpt from Manning's book Machine Learning with R, the tidyverse, and mlr.
Different variables might be measured using different types of scales, meaning we need to handle them differently. Throughout the book, I mention continuous variables, categorical variables, and sometimes logical variables.
Continuous variables represent some measurement on a numeric continuum. For example, the length of a hippo’s tusk would be represented as a continuous variable. We can apply mathematical transformations to continuous variables. In R, continuous variables are most commonly represented as integers or as doubles. An integer variable can hold only whole numbers, whereas a double can also hold fractional values (digits after the decimal point). In the data shown in table A.1, the TuskLength variable is a continuous (numeric) variable.
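To make the distinction concrete, here is a small base-R sketch (the tusk values themselves are made up for illustration) showing the two numeric types:

```r
# The L suffix creates an integer; a plain numeric literal is a double.
tusk_count  <- 3L    # whole numbers only
tusk_length <- 47.6  # can hold fractional values

typeof(tusk_count)   # "integer"
typeof(tusk_length)  # "double"

# Both support mathematical transformations, as continuous variables should:
tusk_length * 2.54   # e.g., convert inches to centimeters
```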
You’re going to find this chapter a breeze. This is because you’ve done everything in it before (sort of). In chapter 3, I introduced you to the k-nearest neighbors (kNN) algorithm as a tool for classification. In chapter 7, I introduced you to decision trees and then expanded on this in chapter 8 to cover random forest and XGBoost for classification. Well, conveniently, these algorithms can also be used to predict continuous variables. So in this chapter, I’ll help you extend these skills to solve regression problems.
By the end of this chapter, I hope you’ll understand how kNN and tree-based algorithms can be extended to predict continuous variables. As you learned in chapter 7, decision trees suffer from a tendency to overfit their training data and so are often vastly improved by using ensemble techniques. Therefore, in this chapter, you’ll train a random forest model and an XGBoost model, and benchmark their performance against the kNN algorithm.
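As a preview of that benchmarking workflow, here is a hedged sketch using the mlr package (which this book uses throughout). The data frame, its columns, and the target name `JourneyLength` are hypothetical stand-ins, and the sketch assumes the kknn, randomForest, and xgboost backend packages are installed; it is not the chapter's actual code.

```r
library(mlr)

# Hypothetical journey data, simulated purely for illustration
set.seed(1)
journeyData <- data.frame(
  Distance = runif(100, 1, 50),
  Stops    = sample(1:10, 100, replace = TRUE)
)
journeyData$JourneyLength <- 2 * journeyData$Distance +
  3 * journeyData$Stops + rnorm(100)

# A regression task, rather than the classification tasks of earlier chapters
regrTask <- makeRegrTask(data = journeyData, target = "JourneyLength")

# The three algorithms to compare
learners <- list(
  makeLearner("regr.kknn"),          # k-nearest neighbors
  makeLearner("regr.randomForest"),  # random forest
  makeLearner("regr.xgboost")        # XGBoost
)

# 10-fold cross-validation; mlr reports mean squared error for
# regression tasks by default
kFold <- makeResampleDesc("CV", iters = 10)
bench <- benchmark(learners, regrTask, kFold)
```

The `benchmark()` call resamples every learner on the same folds, so the comparison between the three algorithms is fair.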
Figure 12.3. How the kNN algorithm predicts continuous variables. The crosses represent new data points for which we wish to predict the journey length. For the one-, three-, and five-nearest neighbor models, the nearest neighbors to each new data point are highlighted in a lighter shade. In each case, the predicted value is the mean journey length of the nearest neighbors.
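The prediction rule the figure illustrates can be sketched in a few lines of base R. This is a minimal illustrative function, not the book's implementation: it finds the k training points closest to a new point and returns the mean of their outcome values.

```r
# kNN regression sketch: predict the outcome for one new observation
# as the mean outcome of its k nearest (Euclidean) neighbors.
knn_regress <- function(train_x, train_y, new_x, k) {
  # Euclidean distance from the new point to every training point
  diffs <- train_x - matrix(new_x, nrow(train_x), ncol(train_x), byrow = TRUE)
  dists <- sqrt(rowSums(diffs^2))
  nearest <- order(dists)[1:k]  # indices of the k nearest neighbors
  mean(train_y[nearest])        # prediction = mean outcome of those neighbors
}

# Toy data: four training points with known journey lengths
train_x <- cbind(c(1, 2, 3, 10), c(1, 2, 3, 10))
train_y <- c(10, 12, 14, 40)

knn_regress(train_x, train_y, c(2, 2), k = 3)  # mean(10, 12, 14) = 12
```

Varying `k` trades off flexibility against smoothness, exactly as it did for classification in chapter 3: with `k = 1` the prediction is a single neighbor's value, while larger `k` averages over more neighbors.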