This chapter covers
- Working with linear regression
- Performance metrics for regression tasks
- Using machine learning algorithms to impute missing values
- Performing feature selection algorithmically
- Combining preprocessing wrappers in mlr
Our first stop in part 3, “Regression,” brings us to linear regression. A classical and commonly used statistical method, linear regression builds predictive models by estimating the strength of the relationship between our predictor variables and our outcome variable. Linear regression is so named because it assumes the relationships between the predictor variables and the outcome variable are linear. Linear regression can handle both continuous and categorical predictor variables, and I’ll show you how in this chapter.
By the end of this chapter, I hope you’ll understand a general approach to regression problems with mlr, and how this differs from classification. In particular, you’ll understand the different performance metrics we use for regression tasks, because mean misclassification error (MMCE) is no longer meaningful. I’ll also show you, as I promised in chapter 4, more sophisticated approaches to missing value imputation and feature selection. Finally, I’ll cover how to combine as many preprocessing steps as we like using sequential wrappers, so we can include them in our cross-validation.
In this section, you’ll learn what linear regression is and how it uses the equation of a straight line to make predictions. Imagine that you want to predict the pH of batches of cider, based on the amount of apple content in each batch (in kilograms). An example of what this relationship might look like is shown in figure 9.1.
Note
Recall from high school chemistry that the lower the pH, the more acidic a substance is.
The relationship between apple weight and cider pH appears linear, and we could model this relationship using a straight line. Recall from chapter 1 that the only parameters needed to describe a straight line are the slope and intercept:
- y = intercept + slope × x
y is the outcome variable, x is the predictor variable, the intercept is the value of y when x is zero (where the line crosses the y-axis), and the slope is how much y changes when x increases by one unit.
Note
Interpreting the slope is useful because it tells us about how the outcome variable changes with the predictor(s), but interpreting the intercept is usually not so straightforward (or useful). For example, a model that predicts a spring’s tension from its length might have a positive intercept, suggesting that a spring of zero length has tension! If all the variables are centered to have a mean of zero, then the intercept can be interpreted as the value of y at the mean of x (which is often more useful information). Centering your variables like this doesn’t affect the slopes because the relationships between variables remain the same. Therefore, predictions made by linear regression models are unaffected by centering and scaling your data.
If you were to read this out loud in plain English, you would say: “For any particular case, the value of the outcome variable, y, is the model intercept, plus the value of the predictor variable, x, times its slope.”
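If you like, you can convince yourself of this with a couple of lines of R. This is a toy illustration only: the intercept and slope values here are made up for the cider example, not estimated from any data.

# A toy numerical illustration of the straight-line equation, using made-up
# values: an intercept of 4.5 and a slope of -0.065 pH units per kg of apples.
intercept <- 4.5
slope     <- -0.065
apples    <- 15                      # kg of apple content in one batch

predictedPH <- intercept + slope * apples
predictedPH                          # 3.525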
Statisticians write this equation as
- y = β0 + β1x1 + ϵ
where β0 is the intercept, β1 is the slope for variable x1, and ϵ is the unobserved error unaccounted for by the model.
Note
The parameters (also called coefficients) of a linear regression model are only estimates of the true values. This is because we are typically only working with a finite sample from the wider population. The only way to derive the true parameter values would be to measure the entire population, something that is usually impossible.
So to learn a model that can predict pH from apple weight, we need a way to estimate the intercept and slope of a straight line that best represents this relationship.
Linear regression isn’t technically an algorithm. Rather, it’s the approach to modeling relationships using the straight-line equation. We could use a few different algorithms to estimate the intercept and slope of a straight line. For simple situations like our cider pH problem, the most common algorithm is ordinary least squares (OLS).
The job of OLS is to learn the combination of values for the intercept and slope that minimizes the residual sum of squares. We came across the concept of a residual in chapter 7 as the amount of information left unexplained by a model. In linear regression, we can visualize this as the vertical distance (along the y-axis) between a case and the straight line. But OLS doesn’t just consider the raw distances between each case and the line: it squares them first and then adds them all up (hence, sum of squares). This is illustrated for our cider example in figure 9.2.
Figure 9.2. Finding the least squares line through the data. Residuals are the vertical distances between the cases and the line. The area of the boxes represents the squared residuals for three of the cases. The intercept (β0) is where the line hits the y-axis when x = 0. The slope is the change in y (Δy) divided by the change in x (Δx).

Why does OLS square the distances? You may hear that this is because it makes any negative residuals (for cases that lie below the line) positive, so they contribute to the sum of squares rather than subtract from it. This is certainly a handy by-product of squaring, but if that were the only reason, we would simply use |residual| to denote the absolute value (removing the negative sign). We use the squared residuals so that we disproportionately penalize cases that are far away from their predicted value.
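To make the idea concrete, here is a minimal sketch of the quantity OLS minimizes for one candidate intercept and slope. The apple content and pH vectors are made up for illustration; only the arithmetic matters.

# Illustrative only: the residual sum of squares for one candidate line,
# using made-up apple content (kg) and pH measurements.
apples <- c(10, 12, 15, 18, 20)
pH     <- c(3.8, 3.7, 3.5, 3.3, 3.2)

intercept <- 4.5
slope     <- -0.065

predicted <- intercept + slope * apples
residuals <- pH - predicted

sum(residuals^2)   # the quantity OLS tries to minimize over intercept and slope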
OLS finds the combination of slope and intercept that minimizes the sum of squares, and the line learned in this way will be the one that best fits the data. But regression problems are rarely as simplistic as trying to predict an outcome with a single predictor; what about when we have multiple predictor variables? Let’s add another variable to our cider pH problem: fermentation time (see figure 9.3).
Figure 9.3. Adding an additional variable: the size of each dot corresponds to the fermentation time of each cider batch.

When we have multiple predictors, a slope is estimated for each (using OLS), and the contributions of each variable are added together linearly, along with the model intercept (which is now the value of y when every predictor equals zero). The slopes in linear regression tell us how the outcome variable changes for a one-unit increase in each predictor while holding all other predictors constant. In other words, the slopes tell us how the outcome changes when we change the predictor variables, one at a time. For example, our two-predictor cider model would look like this:
- y = β0 + βapples × apples + βfermentation × fermentation + ϵ
Note
You will sometimes see linear regression with a single predictor and regression with multiple predictors described as simple linear regression and multiple regression, respectively. I find this distinction a little unnecessary, however, because we rarely work with only a single predictor.
When we have two predictors, our line becomes a plane (or surface). You can see this illustrated for our cider example in figure 9.4. When we have more than two predictors, our plane becomes a hyperplane. Indeed, our straight-line equation can be generalized to any number of predictors
- y = β0 + β1x1 + β2x2 + ... + βkxk + ϵ
Figure 9.4. Representing a linear model with two predictors. Combining apple content and fermentation time in our linear model can be represented as a surface. The solid lines show the residual error for each case (its vertical distance from the surface).

where there are k predictors in the model. This is called the general linear model, and it is the central equation of all linear models. If you’re coming from a traditional statistical modeling background, you may be familiar with t-tests and analysis of variance. These approaches all use the general linear model to represent the relationship between the predictor variables and the outcome.
Note
The general linear model is not quite the same as the generalized linear model, which refers to a class of models that allow different distributions for the outcome variable. I’ll talk about the generalized linear model soon.
Do you recognize the general linear model? You saw something similar to it when we covered logistic regression in chapter 4. In fact, everything on the right side of the equation is identical. The only difference is what was on the left side of the equals sign. Recall that in logistic regression, we predict the log odds of a case belonging to a particular class. In linear regression, we simply predict the case’s value of the outcome variable.
When interpretability is as or more important than performance
While another regression algorithm may perform better for a particular task, models formulated using the general linear model are often favored for how interpretable they are. The slopes tell you how much the outcome variable changes with a one-unit increase of each predictor variable, holding all other variables constant.
There are other algorithms that may learn models that perform better on a particular task but aren’t as interpretable. Such models are often described as being black boxes, where the model takes input and gives output, but it’s not easy to see and/or interpret the rules inside the model that led to that particular output. Random forest, XGBoost, and SVMs are examples of black-box models.
So when would we prefer an interpretable model (such as a linear regression model) over a black-box model that performs better? Well, one example is if our model has the potential to discriminate. Imagine if a model incorporated bias against women during training. It might be difficult to detect this immediately using a black-box model, whereas if we can interpret the rules, we can check for such biases. A similar consideration is safety, where it’s imperative to ensure that our model doesn’t give potentially dangerous outcomes (such as an unnecessary medical intervention).
Another example is when we are using machine learning to better understand a system or nature. Getting predictions from a model might be useful, but understanding those rules to deepen our understanding and stimulate further research may be of more importance. Black boxes can make this difficult.
Finally, understanding the rules of our model allows us to make changes in the way we do things. Imagine that a business uses a linear regression model to predict demand for a particular product, based on things like its price and how much the company spends on advertising. Not only can the company predict future demand, but it also can control it, by interpreting the rules of how the predictor variables impact the outcome.
When modeling our data with the general linear model, we make the assumption that our residuals are normally distributed and homoscedastic. Homoscedastic is a ridiculous-sounding word (impress your friends with it) that simply means the variance of the outcome variable doesn’t increase as the predicted value of the outcome increases.
Tip
The opposite of homoscedastic is heteroscedastic.
We also make the assumptions that there is a linear relationship between each predictor variable and the outcome, and that the effects of the predictor variables on the response variable are additive (rather than multiplicative).
When these assumptions are valid, our model will make more accurate and unbiased predictions. However, the general linear model can be extended to handle situations in which the assumption of normally distributed residuals is violated (logistic regression is one such example).
Note
In situations such as this, we turn to the generalized linear model. The generalized linear model is the same as the general linear model (in fact, the latter is a special case of the former), except that it uses various transformations called link functions to map the outcome variable to the linear predictions made by the right-hand side of the equals sign. For example, count data is rarely normally distributed, but by building a generalized model with an appropriate link function, we can transform linear predictions made by the model back into counts. I don’t intend to talk any further about generalized linear models here, but a good resource on this topic (if a little heavy) is Generalized Linear Models With Examples in R by Peter K. Dunn and Gordon K. Smyth (Springer, 2018).
Tip
If the residuals are heteroscedastic, it sometimes helps to build a model that predicts some transformation of the outcome variable instead. For example, predicting the log10 of the response variable is a common choice. Predictions made by such a model can then be transformed back onto the original scale for interpretation. When the effect of multiple predictors on the outcome is not additive, we can add interaction terms to our model that test the effect one predictor variable has on the outcome when the other predictor variable changes.
So far, we’ve only considered the situation where our predictors are continuous. Because the general linear model is essentially the equation of a straight line, and we use it to find the slopes between variables, how can we find the slope of a categorical variable? Does this even make sense? Well, it turns out we can cheat by recoding categorical variables into dummy variables. Dummy variables are new representations of categorical variables that map the categories to 0 and 1.
Imagine that we want to predict the acidity of cider batches based on the type of apple: Gala or Braeburn. We want to find the intercept and slope that describe the relationship between these two apple types and acidity, but how do we do that? Remember earlier that the slope is how much y increases when x increases by one unit. If we recode our apple type variable such that Gala = 0 and Braeburn = 1, we can treat apple type as a continuous variable and find how much acidity changes as we go from 0 to 1. Take a look at figure 9.5: the intercept is the value of y when x is 0, which is the mean acidity when apple type = Gala. Gala is therefore said to be our reference level. The slope is the change in y with a one-unit increase in x, which is the difference between the mean acidity for Gala and the mean acidity for Braeburn. This may feel like cheating, but it works, and the slope with the least squares will be the one that connects the means of the categories.
Note
Which category you choose as the reference level makes no difference to the predictions made by a model. By default, the reference level is the first level of the factor (the first alphabetically).
Figure 9.5. Finding the slope between two levels of a categorical variable using a dummy variable. The apple types are recoded as 0 and 1 and treated as a continuous variable. The slope now represents the difference in means between the two apple types, and the intercept represents the mean of the reference category (Gala).

Recoding dichotomous (two-level) factors into a single dummy variable with values of 0 and 1 makes sense, but what if we have a polytomous factor (a factor with more than two levels)? Do we code them as 1, 2, 3, 4, and so on, and treat this as a single continuous predictor? Well, this wouldn’t work because it’s unlikely that a single straight line would connect the means of the categories. Instead, we create k – 1 dummy variables, where k is the number of levels of the factor.
Take a look at the example in figure 9.6. We have four types of apples (Granny Smith is my favorite) and would like to predict pH based on the apple type used to make a particular batch of cider. To convert our four-level factor into dummy variables, we do the following:
- Create a table of three columns, where each column represents a dummy variable.
- Choose a reference level (Gala, in this case).
- Set the value of each dummy variable to 0 for the reference level.
- Set the value of each dummy variable to 1 for a particular factor level.
Figure 9.6. Recoding a polytomous categorical variable into k – 1 dummy variables. A four-level factor can be represented using three (k – 1) dummy variables. The reference level (Gala) has a value of 0 for each dummy variable. The other levels have a value of 1 for a particular dummy variable. A slope is estimated for each dummy.

Mk’xx wnv nretdu tgx ngiles lbviaare lk dtkl lelvse kjnr hteer sttcdnii dummy variables qrsr ocab erxs z uelav xl 1 tx 0. Tyr wpx ecxb cruj foyd bz? Mkff, gzoc ymmud leaabirv acra cc s zlfq jn qrx mode f ofaumlr rx oendet hhciw lelve z utariralcp ckza negobls rk. Aku ffhl mode f zc hnswo jn figure 9.6 cj
- y = β0 + βd1 d1 + βd2 d2 + βd3 d3 + ϵ
Now, because the intercept (β0) represents acidity when all predictors are equal to 0, this is now the mean of the reference level, Gala. The slopes in the model (βd1, βd2, and so on) represent the difference between the mean of the reference level and the means of each of the other levels. If a batch of cider was made with a particular type of apple, its dummy variables will “switch on” the slope between that type of apple and the reference class, and “switch off” the others. For example, let’s say a particular batch was made with Braeburn apples. The model would look like this:
- y = β0 + βd1 × 1 + βd2 × 0 + βd3 × 0 + ϵ
The slopes of the other apple types are still in the model, but because their dummy variables are set to 0, they make no contribution to the predicted value!
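If you want to see dummy coding in action, base R’s model.matrix() function does exactly this expansion. A minimal sketch follows; the apple types are made up for illustration, and we relevel the factor so that Gala is the reference level, as in the text.

# model.matrix() expands a factor into k - 1 dummy columns (plus an intercept
# column of 1s). Each row flags which apple type that batch used.
appleType <- factor(c("Gala", "Braeburn", "Fuji", "Granny_Smith"))
appleType <- relevel(appleType, ref = "Gala")

model.matrix(~ appleType)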
Models we build using the general linear model can mix both continuous and categorical predictors together. When we use our model to make predictions on new data, we simply do the following:
- Take the values of each of the predictor variables for that data.
- Multiply these values by the relevant slopes learned by the model.
- Add these values together.
- Add the intercept.
The result is our predicted value for that data.
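Here is a made-up walk-through of those four steps, mixing one continuous predictor (apple content) with one dummy-coded predictor (Braeburn vs. the Gala reference level). None of these parameter values come from a fitted model; they are placeholders to show the arithmetic.

# Manual prediction: multiply each predictor value by its slope, add them up,
# then add the intercept.
intercept     <- 4.5
slopeApples   <- -0.065
slopeBraeburn <- -0.2

apples   <- 15    # kg of apple content for this batch
braeburn <- 1     # dummy variable: this batch used Braeburn apples

predictedPH <- intercept + slopeApples * apples + slopeBraeburn * braeburn
predictedPH       # 3.325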
I hope by now you have a basic understanding of linear regression, so let’s turn this knowledge into skills by building your first linear regression model!
In this section, I’ll teach you how to build, evaluate, and interpret a linear regression model to predict daily air pollution. I’ll also show other ways of imputing missing data and selecting relevant features, and how to bundle as many preprocessing steps into your cross-validation as you like.
Imagine that you’re an environmental scientist interested in predicting daily levels of atmospheric ozone pollution in Los Angeles. Recall from high school chemistry that ozone is an allotrope (a fancy way of saying “another form”) of oxygen molecule that has three oxygen atoms instead of two (as in the dioxygen that you’re breathing right now). While ozone in the stratosphere protects us from the sun’s UV rays, products from burning fossil fuels can be converted into ozone at ground level, where it is toxic. Your job is to build a regression model that can predict ozone pollution levels based on the time of year and meteorological readings, such as humidity and temperature. Let’s start by loading the mlr and tidyverse packages:
library(mlr)

library(tidyverse)
Now let’s load the data, which is built into the mlbench package (I like the data examples in this package), convert it into a tibble (with as_tibble()), and explore it. We’re also going to give more readable names to the variables. We have a tibble containing 366 cases and 13 variables of daily meteorological and ozone readings.
Listing 9.1. Loading and exploring the Ozone dataset
data(Ozone, package = "mlbench")

ozoneTib <- as_tibble(Ozone)

names(ozoneTib) <- c("Month", "Date", "Day", "Ozone", "Press_height",
                     "Wind", "Humid", "Temp_Sand", "Temp_Monte",
                     "Inv_height", "Press_grad", "Inv_temp", "Visib")

ozoneTib

# A tibble: 366 x 13
   Month Date  Day   Ozone Press_height  Wind Humid Temp_Sand Temp_Monte
   <fct> <fct> <fct> <dbl>        <dbl> <dbl> <dbl>     <dbl>      <dbl>
 1 1     1     4         3         5480     8    20        NA       NA
 2 1     2     5         3         5660     6    NA        38       NA
 3 1     3     6         3         5710     4    28        40       NA
 4 1     4     7         5         5700     3    37        45       NA
 5 1     5     1         5         5760     3    51        54       45.3
 6 1     6     2         6         5720     4    69        35       49.6
 7 1     7     3         4         5790     6    19        45       46.4
 8 1     8     4         4         5790     3    25        55       52.7
 9 1     9     5         6         5700     3    73        41       48.0
10 1     10    6         7         5700     3    59        44       NA
# ... with 356 more rows, and 4 more variables: Inv_height <dbl>,
#   Press_grad <dbl>, Inv_temp <dbl>, Visib <dbl>
At present, the Month, Day, and Date variables are factors. Arguably this may make sense, but we’re going to treat them as numerics for this exercise. To do this, we use the handy mutate_all() function, which takes the data as the first argument and a function/transformation as the second argument. Here, we use as.numeric to convert all the variables into the numeric class.
Note
The mutate_all() function doesn’t alter the names of the variables; it just transforms them in place.
Next, we have some missing data in this dataset (use map_dbl(ozoneTib, ~sum(is.na(.))) to see how many). Missing data is okay in our predictor variables (we’ll deal with this later using imputation), but missing data for the variable we’re trying to predict is not okay. Therefore, we remove the cases without an ozone measurement by piping the result of the mutate_all() call into the filter() function, where we remove cases with an NA value for Ozone.
Listing 9.2. Cleaning the data
ozoneClean <- mutate_all(ozoneTib, as.numeric) %>%
  filter(is.na(Ozone) == FALSE)

ozoneClean

# A tibble: 361 x 13
   Month  Date   Day Ozone Press_height  Wind Humid Temp_Sand Temp_Monte
   <dbl> <dbl> <dbl> <dbl>        <dbl> <dbl> <dbl>     <dbl>      <dbl>
 1     1     1     4     3         5480     8    20        NA       NA
 2     1     2     5     3         5660     6    NA        38       NA
 3     1     3     6     3         5710     4    28        40       NA
 4     1     4     7     5         5700     3    37        45       NA
 5     1     5     1     5         5760     3    51        54       45.3
# ... with 356 more rows, and 4 more variables: Inv_height <dbl>,
#   Press_grad <dbl>, Inv_temp <dbl>, Visib <dbl>
Let’s plot each of our predictor variables against Ozone to get an idea of the relationships in the data. We start with our usual trick of gathering the variables with the gather() function so we can plot them on separate facets.
Listing 9.3. Plotting the data
ozoneUntidy <- gather(ozoneClean, key = "Variable",
                      value = "Value", -Ozone)

ggplot(ozoneUntidy, aes(Value, Ozone)) +
  facet_wrap(~ Variable, scale = "free_x") +
  geom_point() +
  geom_smooth() +
  geom_smooth(method = "lm", col = "red") +
  theme_bw()
Note
Remember we have to use -Ozone to prevent the Ozone variable from being gathered with the others.
In our ggplot() call, we facet by Variable and allow the x-axes of the facets to vary by setting the scale argument equal to "free_x". Then, along with a geom_point layer, we add two geom_smooth layers. The first geom_smooth is given no arguments and so uses the default settings. By default, geom_smooth will draw a LOESS curve through the data (a curvy, local regression line) if there are fewer than 1,000 cases, or a GAM curve if there are 1,000 or more cases. Either will give us an idea of the shape of the relationships. The second geom_smooth layer specifically asks for the lm method (linear model), which draws a linear regression line that best fits the data. Drawing both of these will help us identify if there are relationships in the data that are nonlinear.
The resulting plot is shown in figure 9.7. Hmm, some of the predictors have a linear relationship with ozone levels, some have a nonlinear relationship, and some seem to have no relationship at all!
Figure 9.7. Plotting each predictor variable in the Ozone dataset against the Ozone variable. The straight lines represent linear regression lines, and the curved lines represent GAM lines.

Linear regression can’t handle missing values. Therefore, to avoid having to throw away a large portion of our dataset, we’re going to use imputation to fill in the gaps. In chapter 4, we used mean imputation to replace missing values (NAs) with the mean of the variable. While this may work, it only uses the information within that single variable to predict missing values, and all missing values within a single variable will take the same value, potentially biasing the model. Instead, we can actually use machine learning algorithms to predict the value of a missing observation, using all of the other variables in the dataset! In this section, I’m going to show you how we can do this with mlr.
If you run ?imputations, you’ll be able to see the imputation methods that come packaged with mlr. These include methods such as imputeMean(), imputeMedian(), and imputeMode() (for replacing missing values with the mean, median, and mode of each variable, respectively). But the most important method is the last one on the list: imputeLearner(). The imputeLearner() function lets us specify a supervised machine learning algorithm to predict what the missing values would have been, based on the information held in all the other variables. For example, if we want to impute missing values of a continuous variable, the process proceeds as follows:
- Split the dataset into cases with and without missing values for this particular variable.
- Decide on a regression algorithm to predict what the missing values would have been.
- Considering only the cases without missing values, use the algorithm to learn to predict the values of the variable with missing data, using the other variables in the dataset (including the dependent variable you’re trying to predict in your final model).
- Considering only the cases with missing values, use the model learned in step 3 to predict the missing values based on the values of the other predictors.
We employ the same strategy when imputing categorical variables, except that we choose a classification algorithm instead of a regression one. So we end up using a supervised learning algorithm to fill in the blanks so that we can use another algorithm to train our final model!
So how do we choose an imputation algorithm? There are a few practical considerations, but as always it depends somewhat, and it may pay off to try different methods and see which one gives you the best performance. We can at least initially whittle it down to either a classification or regression algorithm, depending on whether the variable with missing values is continuous or categorical. Next, whether we have missing values in one or multiple variables makes a difference, because if it’s the latter, we will need to choose an algorithm that can itself handle missing values. For example, let’s say we try to use logistic regression to impute missing values of a categorical variable. We’ll get to step 3 in the previous procedure and stop, because the other variables in the data (that the algorithm is trying to use to predict the categorical variable) also contain missing values. Logistic regression can’t handle that and will throw an error. If the only variable with missing values was the one we were trying to impute, this wouldn’t have been a problem. Finally, the only other consideration is computational budget. If the algorithm you’re using to learn your final model is already computationally expensive, using a computationally expensive algorithm to impute your missing values is added expense. Within these constraints, it’s often best to experiment with different imputation learners and see which one works best for the task at hand.
When doing any form of missing-value imputation, it’s extremely important to ensure that the data is either missing at random (MAR) or missing completely at random (MCAR), and not missing not at random (MNAR). If data is MCAR, it means the likelihood of a missing value is not related to any variable in the dataset. If data is MAR, it means the likelihood of a missing value is related only to the value of the other variables in the dataset. For example, someone might be less likely to fill in their salary on a form because of their age. In either of these situations, we can still build models that are unbiased despite the presence of missing data. But consider the situation where someone is less likely to fill in their salary on a form because their salary is low. This is an example of data missing not at random (MNAR), where the likelihood of a missing value depends on the value of the variable itself. In such a situation, you would likely build a model that is biased to overestimate the salaries of the people in your survey.
How do we tell if our data is MCAR, MAR, or MNAR? Not easily. There are methods for distinguishing MCAR and MAR. For example, you could build a classification model that predicts whether a case has a missing value for a particular variable. If the model does better at predicting missing values than a random guess, then the data is MAR. If the model can’t do much better than a random guess, then the data is probably MCAR. Is there a way to tell whether data is MNAR? Unfortunately not. Being confident that your data is not MNAR depends on good experiment design and thoughtful examination of your predictor variables.
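Here is a rough sketch of that MAR check, assuming the ozoneClean tibble from listing 9.2 is in your workspace. We flag whether Humid is missing and ask whether an rpart classifier can predict that flag better than always guessing the majority class; the choice of Humid is arbitrary, and this is only one informal diagnostic, not a definitive test.

# Create an indicator for missingness of Humid, then cross-validate a
# classifier that tries to predict it from the remaining variables
# (rpart can handle the NAs still present in the other predictors).
humidMissing <- mutate(ozoneClean,
                       HumidMissing = factor(is.na(Humid))) %>%
  select(-Humid)

missTask <- makeClassifTask(data = as.data.frame(humidMissing),
                            target = "HumidMissing")

missCV <- resample("classif.rpart", missTask,
                   resampling = makeResampleDesc("CV", iters = 5),
                   measures = acc)

# Compare the cross-validated accuracy to the majority-class baseline:
mean(!is.na(ozoneClean$Humid))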
Tip
There is a more powerful imputation technique called multiple imputation. The premise of multiple imputation is that you create many new datasets, replacing missing data with sensible values in each one. You then train a model on each of these imputed datasets and return the average model. While this is probably the most widely used imputation technique, sadly, it isn’t implemented yet in mlr, so we won’t use it here. However, I strongly suggest you read the documentation for the mice package in R.
For our ozone data, we have missing values across several variables, and they’re all continuous variables. Therefore, I’m going to choose a regression algorithm that can handle missing data: rpart. Yes, you heard me right: we’re going to impute the missing values with the rpart decision tree algorithm. When we covered tree-based learners in chapter 7, we only considered them for classification problems; but decision trees can be used to predict continuous variables, too. I’ll show you how this works in detail in chapter 12; but for now, we’ll let rpart do its thing and impute our missing values for us.
Listing 9.4. Using rpart to impute missing values
imputeMethod <- imputeLearner("regr.rpart")

ozoneImp <- impute(as.data.frame(ozoneClean),
                   classes = list(numeric = imputeMethod))
We first use the imputeLearner() function to define what algorithm we’re going to use to impute the missing values. The only argument we supply to this function is the name of the learner, which in this case is "regr.rpart".
Tip
There is an additional, optional argument, features, that lets us specify which variables in the dataset to use in the prediction of missing values. The default is to use all the other variables, but you can use this to specify variables without any missing values, allowing you to use algorithms that can’t themselves handle missing data. See ?imputeLearner for more detail.
Next, we use the impute() function to create the imputed dataset, to which the first argument is the data. We’ve wrapped our tibble inside the as.data.frame() function just to prevent repeated warnings about the data being a tibble and not a data frame (these can be safely ignored). We can specify different imputation techniques for different columns by supplying a named list to the cols argument. For example, we could say cols = list(var1 = imputeMean(), var2 = imputeLearner("regr.lm")). We can also specify different imputation techniques for different classes of variable (one technique for numeric variables, another for factors) using the classes argument in the same way. In listing 9.4, we use the classes argument to impute all the variables (they are all numeric) using the imputeMethod we defined.
This results in a dataset we can access using ozoneImp$data, whose missing values have been replaced with predictions from a model learned by the rpart algorithm. Now we can define our task and learner using the imputed dataset. By supplying "regr.lm" as an argument to the makeLearner() function, we’re telling mlr that we want to use linear regression.
Listing 9.5. Defining our task and learner
ozoneTask <- makeRegrTask(data = ozoneImp$data, target = "Ozone")

lin <- makeLearner("regr.lm")
Note
In part 2 of this book, we were used to defining learners as classif.[ALGORITHM]. In this part of the book, instead of classif., the prefix will be regr.. This is important because the same algorithm can sometimes be used for classification and regression, so the prefix tells mlr which task we want to use the algorithm for.
Sometimes it may be obvious which variables have no predictive value and can be removed from the analysis. Domain knowledge is also very important here, where we include variables in the model that we, as experts, believe to have some predictive value for the outcome we’re studying. But it’s often better to take a less subjective approach to feature selection, and allow an algorithm to choose the relevant features for us. In this section, I’ll show you how we can implement this in mlr.
There are two methods for automating feature selection:
- Filter methods— Filter methods compare each of the predictors against the outcome variable, and calculate a metric of how much the outcome varies with the predictor. This metric could be a correlation, for example, if both variables are continuous. The predictor variables are ranked in order of this metric (which, in theory, ranks them in order of how much information they can contribute to the model), and we can choose to drop a certain number or proportion of the worst-performing variables from our model. The number or proportion of variables we drop can be tuned as a hyperparameter during model building.
- Wrapper methods— With wrapper methods, rather than using a single, out-of-model statistic to estimate feature importance, we iteratively train our model with different predictor variables. Eventually, the combination of predictors that gives us the best-performing model is chosen. There are different ways of doing this, but one such example is sequential forward selection. In sequential forward selection, we start with no predictors and then add predictors one by one. At each step of the algorithm, the feature that results in the best model performance is chosen. Finally, when the addition of any more predictors doesn’t result in an improvement in performance, feature addition stops, and the final model is trained on the selected predictors.
Which method should we choose? It boils down to this: wrapper methods may result in models that perform better, because we are actually using the model we’re training to estimate predictor importance. However, because we’re training a fresh model at each iteration of the selection process (and each step may include other preprocessing steps such as imputation), wrapper methods tend to be computationally expensive. Filter methods, on the other hand, may or may not select the best-performing set of predictors but are much less computationally expensive.
I’m going to show you both methods for our ozone example, starting with the filter method. There are a number of metrics we can use to estimate predictor importance. To see the list of the available filter methods built into mlr, run listFilterMethods(). There are too many to describe in full, but common choices include these:
- Linear correlation— When both predictor and outcome are continuous
- ANOVA— When the predictor is categorical and the outcome is continuous
- Chi-squared— When both the predictor and outcome are categorical
- Random forest importance— Can be used whether the predictors and outcome are categorical or continuous (the default)
Tip
Feel free to experiment with the methods implemented in mlr. Many of them require you to first install the FSelector package: install.packages("FSelector").
The default method used by mlr (because it doesn’t depend on whether the variables are categorical or continuous) is to build a random forest to predict the outcome, and return the variables that contributed most to model predictions (using the out-of-bag error we discussed in chapter 8). In this example, because both the predictors and outcome variable are continuous, we’ll use linear correlation to estimate variable importance (it’s a little more interpretable than random forest importance).
First, we use the generateFilterValuesData() function (longest function name ever!) to generate an importance metric for each predictor. The first argument is the task, which contains our dataset and lets the function know that Ozone is our target variable. The second, optional argument is method, to which we can supply one of the methods listed by listFilterMethods(). In this example, I’ve used "linear.correlation". By extracting the $data component of this object, we get the table of predictors with their Pearson correlation coefficients.
Listing 9.6. Using a filter method for feature selection
filterVals <- generateFilterValuesData(ozoneTask,
                                       method = "linear.correlation")

filterVals$data
           name    type linear.correlation
1         Month numeric           0.053714
2          Date numeric           0.082051
3           Day numeric           0.041514
4  Press_height numeric           0.587524
5          Wind numeric           0.004681
6         Humid numeric           0.451481
7     Temp_Sand numeric           0.769777
8    Temp_Monte numeric           0.741590
9    Inv_height numeric           0.575634
10   Press_grad numeric           0.233318
11     Inv_temp numeric           0.727127
12        Visib numeric           0.414715

plotFilterValues(filterVals) + theme_bw()
Jr’z sraeei rk itnerprte jdzr inotfinraom cc c rkyf, ihhwc wo sna ntagreee wgjr rky plotFilterValues() tuncifon, ivngig yvr jbetco ow sveda xrb ileftr sevalu rv cz jar ntmrgaue. Yxy ierntsugl furk zj nswoh nj figure 9.8.
Exercise 1
Generate and plot filter values for ozoneTask, but using the default method randomForestSRC_importance (don’t overwrite the filterVals object). Are the variables ranked in the same order of importance between the two methods?
Now that we have a way of ranking our predictors in order of their estimated importance, we can decide how to “skim off” the least informative ones. We do this using the filterFeatures() function, which takes the task as the first argument, our filterVals object as the fval argument, and either the abs, per, or threshold argument. The abs argument allows us to specify the absolute number of best predictors to retain. The per argument allows us to specify a top percentage of best predictors to retain. The threshold argument allows us to specify a value of our filtering metric (in this case, correlation coefficient) that a predictor must exceed in order to be retained. We could manually filter our predictors using one of these three methods. This is shown in the following listing, but I’ve commented the lines out because we’re not going to do this. Instead, we can wrap together our learner (linear regression) and the filter method so that we can treat any of abs, per, and threshold as hyperparameters and tune them.
Figure 9.8. Plotting the correlation of each predictor against the ozone level using plotFilterValues()

Listing 9.7. Manually selecting which features to drop
#ozoneFiltTask <- filterFeatures(ozoneTask,
#                                fval = filterVals, abs = 6)

#ozoneFiltTask <- filterFeatures(ozoneTask,
#                                fval = filterVals, per = 0.25)

#ozoneFiltTask <- filterFeatures(ozoneTask,
#                                fval = filterVals, threshold = 0.2)
To wrap together our learner and filter method, we use the makeFilterWrapper() function, supplying the linear regression learner we defined as the learner argument and our filter metric as the fw.method argument.
Listing 9.8. Creating a filter wrapper
filterWrapper = makeFilterWrapper(learner = lin,
                                  fw.method = "linear.correlation")
Warning
When we wrap together a learner and a preprocessing step, the hyperparameters for both become available for tuning as part of our wrapped learner. In this situation, it means we can tune the abs, per, or threshold hyperparameter using cross-validation, to select the best-performing features. In this example, we’re going to tune the absolute number of features to retain.
Listing 9.9. Tuning the number of predictors to retain
lmParamSpace <- makeParamSet(
  makeIntegerParam("fw.abs", lower = 1, upper = 12)
)

gridSearch <- makeTuneControlGrid()

kFold <- makeResampleDesc("CV", iters = 10)

tunedFeats <- tuneParams(filterWrapper, task = ozoneTask,
                         resampling = kFold,
                         par.set = lmParamSpace,
                         control = gridSearch)

tunedFeats

Tune result:
Op. pars: fw.abs=10
mse.test.mean=20.8834
Tip
If you run getParamSet(filterWrapper), you’ll see that the hyperparameter names for abs, per, and threshold have become fw.abs, fw.per, and fw.threshold, now that we’ve wrapped the filter method. Another useful hyperparameter, fw.mandatory.feat, allows you to force certain variables to be included regardless of their scores.
First, we define the hyperparameter space, as usual, with makeParamSet(), and define fw.abs as an integer between 1 and 12 (the minimum and maximum number of features we’re going to retain). Next, we define our old friend, the grid search, using makeTuneControlGrid(). This will try every value of our hyperparameter. We define an ordinary 10-fold cross-validation strategy using makeResampleDesc() and then perform the tuning with tuneParams(). The first argument is our wrapped learner, and then we supply our task, cross-validation method, hyperparameter space, and search procedure.
Our tuning procedure picks the 10 predictors with the highest correlation with ozone as the best-performing combination. But what’s mse.test.mean? You haven’t seen this performance metric before. Well, the performance metrics we used for classification, such as mean misclassification error, don’t make sense when we’re predicting continuous variables. For regression problems, there are three commonly used performance metrics:
- Mean absolute error (MAE)— Finds the absolute residuals between each case and the model, adds them all up, and divides by the number of cases. We can interpret this as the mean absolute distance of the cases from the model.
- Mean square error (MSE)— Similar to MAE but squares the residuals before finding their mean. This means MSE is more sensitive to outliers than MAE, because the size of the squared residual grows quadratically, the further from the model prediction it is. MSE is the default performance metric for regression learners in mlr. The choice of MSE or MAE depends on how you want to treat outliers in your data: if you want your model to be able to predict such cases, use MSE; otherwise, if you want your model to be less sensitive to outliers, use MAE.
- Root mean square error (RMSE)— Because MSE squares the residuals, its value isn’t on the same scale as the outcome variable. Instead, if we take the square root of the MSE, we get the RMSE. When tuning hyperparameters and comparing models, MSE and RMSE will always select the same models (because RMSE is simply a transformation of MSE), but RMSE has the benefit of being on the same scale as our outcome variable and so is more interpretable. A short sketch computing all three metrics follows this list.
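The sketch below shows how the three metrics relate to one another, using short made-up vectors of true and predicted values rather than output from the ozone task.

# MAE, MSE, and RMSE computed by hand from true and predicted values.
trueVals <- c(3, 5, 4, 6, 8)
predVals <- c(2.5, 5.5, 4.0, 7.0, 6.5)

mae  <- mean(abs(trueVals - predVals))
mse  <- mean((trueVals - predVals)^2)
rmse <- sqrt(mse)                      # back on the scale of the outcome

c(MAE = mae, MSE = mse, RMSE = rmse)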
Tip
Other regression performance metrics are available to us, such as the percentage versions of MAE and MSE. If you’re interested in reading about more of the performance metrics available in mlr (and there are a lot of them), run ?measures.
Exercise 2
Repeat the feature-filtering process in listings 9.8 and 9.9, but use the default fw.method argument (randomForestSRC_importance, or just don’t supply it). Does this select the same number of predictors as when we used linear correlation? Which method was faster?
Using the MSE performance metric, our tuned filter method has concluded that retaining the 10 features with the highest correlation with the ozone level results in the best-performing model. We can now train a final model that includes only these top 10 features in the task.
Listing 9.10. Training the model with filtered features
filteredTask <- filterFeatures(ozoneTask, fval = filterVals,
                               abs = unlist(tunedFeats$x))

filteredModel <- train(lin, filteredTask)
First, we create a new task that includes only the filtered features, using the filterFeatures() function. To this function, we supply the name of the existing task, the filterVals object we defined in listing 9.6, and the number of features to retain as the argument to abs. This value can be accessed as the $x component of tunedFeats and needs to be wrapped in unlist(); otherwise, the function will throw an error. This creates a new task that contains only the filtered predictors and retains Ozone as the target variable. Finally, we train the linear model using this task.
With the filter method, we generate univariate statistics describing how each predictor relates to the outcome variable. This may result in selecting the most informative predictors, but it isn’t guaranteed to. Instead, we can use the actual model we’re trying to train to determine which features help it make the best predictions. This has the potential to select a better-performing combination of predictors, but it is computationally more expensive, as we’re training a fresh model for every permutation of predictor variables.
Let’s start by defining how we’re going to search for the best combination of predictors. We have four options:
- Exhaustive search— This is basically a grid search. It will try every possible combination of predictor variables in your dataset and select the one that performs the best. This is guaranteed to find the best combination but can be prohibitively slow. For example, in our 12-predictor dataset, an exhaustive search would need to try more than 1.3 × 10^9 different variable combinations!
- Random search— This is just like random search in hyperparameter tuning. We define a number of iterations and randomly select feature combinations. The best combination after the final iteration wins. This is usually less intensive (depending on how many iterations you choose), but it isn’t guaranteed to find the best combination of features.
- Sequential search— From a particular starting point, we either add or remove features at each step that improve performance. This can be one of the following:
- Forward search— We start with an empty model and sequentially add the feature that improves the model most, until additional features no longer improve the performance.
- Backward search— We start with all the features and remove the feature whose removal improves the model the most, until additional removals no longer improve the performance.
- Floating forward search— Starting from an empty model, we either add one variable or remove one variable at each step, whichever improves the model the most, until neither an addition nor a removal improves model performance.
- Floating backward search— The same as floating forward, except we start with a full model.
- Genetic algorithm— This method, inspired by Darwinian evolution, finds pairs of feature combinations that act as “parents” to “offspring” variable combinations, which inherit the best-performing features. This method is very cool but can be computationally expensive as the feature space grows.
Wow! With so many options to choose from, where do we start? Well, I find the exhaustive and genetic searches prohibitively slow for a large feature space. While the random search can alleviate this problem, I find a sequential search to be a good compromise between computational cost and probability of finding the best-performing feature combination. Of its different variants, you may want to experiment with the various options to see which results in the best-performing model. I like the floating versions because they consider both addition and removal at each step, so for this example we’re going to use floating backward selection.
First, we define the search method using the makeFeatSelControlSequential() function (wow, the mlr authors really do love their long function names). We use "sfbs" as the method argument to use a sequential floating backward selection. Then, we use the selectFeatures() function to perform the feature selection. To this function we supply the learner, task, cross-validation strategy defined in listing 9.9, and search method. It’s as easy as that. When we run the function, each combination of predictor variables considered by the search is cross-validated using our kFold strategy to get an estimate of its performance. By printing the result of this process, we can see the algorithm selects six predictors that give a slightly lower MSE value than the predictors selected by our filter method in listing 9.9.
Tip
To see all of the available wrapper methods and how to use them, run ?FeatSelControl.
Now I need to warn you about a frustrating bug with regard to the sequential floating forward search. As of this writing, using "sffs" as the feature-selection method will throw this error in some circumstances: Error in sum(x) : invalid 'type' (list) of argument. If you try to use "sffs" as the search method in this example, you may get such an error. Therefore, while this is very frustrating, I’ve opted to use sequential floating backward search ("sfbs") instead.
Listing 9.11. Using a wrapper method for feature selection
featSelControl <- makeFeatSelControlSequential(method = "sfbs")

selFeats <- selectFeatures(learner = lin, task = ozoneTask,
                           resampling = kFold,
                           control = featSelControl)

selFeats

FeatSel result:
Features (6): Month, Press_height, Humid, Temp_Sand, Temp_Monte, Inv_height
mse.test.mean=20.4038
Now, just as we did for the filter method, we can create a new task using the imputed data that contains only those selected predictors, and train a model on it.
Listing 9.12. Training a model with the selected features
ozoneSelFeat <- ozoneImp$data[, c("Ozone", selFeats$x)]

ozoneSelFeatTask <- makeRegrTask(data = ozoneSelFeat, target = "Ozone")

wrapperModel <- train(lin, ozoneSelFeatTask)
J’ko jzbc rj qmzn temis ofrebe, rhq J’m gigon vr ccd rj iaang: udelinc ffs data-nedetepdn eeinposcprrgs estsp jn tdpk cross-validation! Xrp hp kr gjcr onitp, ow’oo nukf eeeddn xr ndicsoer s esngil soperegrnsipc hcro. Hwk ge xw iebmnco ektm snrq oxn? Mffo, tmf eksam cjgr rcsopes eeexrmtyl ipmles. Mqon wo tuwz oeteghrt s relaenr snp c opncseeipsgrr rcou, kw ocyx ltielsyanse redtace s won rranele algorithm zrrd cdeusiln crdr epocsgrpnsrie. Sx vr iednulc ns indlotaaid pripesrecgsno ckur, wk smypli dtwz pkr werdppa enrreal! J’eo tltueraisdl ucjr txl tge exaplem jn figure 9.9. Rbzj ulretss jn s rcxt le Waryotshak xfqf vl wrappers, rhewe vnx jc utdceanepsal qd eoathrn, ihwch jz eauptnaseldc qq aethnro, ncg ck en.
Figure 9.9. Combining multiple preprocessing wrappers. Once a learner and preprocessing step (such as imputation) have been combined in a wrapper, this wrapper can be used as the learner in another wrapper.

Using this strategy, we can combine as many preprocessing steps as we like to create a pipeline. The innermost wrapper will always be used first, then the next innermost, and so on.
Note
Because the innermost wrapper is used first, through to the outermost, it’s important to think carefully about the order you wish the preprocessing steps to take.
For’z rercofine jayr jn yutk nmjp gq utlyalac odgin jr. Mo’tx ingog rk vesm sn tupmei rapewpr hnz onrq bzzc jr cc krg nalreer vr s ateeruf-etnsicoel prwearp.
Listing 9.13. Combining imputation and feature selection wrappers
imputeMethod <- imputeLearner("regr.rpart")

imputeWrapper <- makeImputeWrapper(lin,
                                   classes = list(numeric = imputeMethod))

featSelWrapper <- makeFeatSelWrapper(learner = imputeWrapper,
                                     resampling = kFold,
                                     control = featSelControl)
First, we redefine our imputation method using the imputeLearner() function (first defined in listing 9.4). Then, we create an imputation wrapper using the makeImputeWrapper() function, which takes the learner as the first argument. We use list(numeric = imputeMethod) as the classes argument to apply this imputation strategy to all of our numeric predictors (all of them, in this case).
Now here comes the clever part: we create a feature-selection wrapper using makeFeatSelWrapper(), and supply the imputation wrapper we created as the learner. This is the crucial step because we’re creating a wrapper with another wrapper! We set the cross-validation method as kFold (defined in listing 9.9) and the method of searching feature combinations as featSelControl (defined in listing 9.11).
Now, let’s cross-validate our entire model-building process like good data scientists.
Listing 9.14. Cross-validating the model-building process
library(parallel)
library(parallelMap)

ozoneTaskWithNAs <- makeRegrTask(data = ozoneClean, target = "Ozone")

kFold3 <- makeResampleDesc("CV", iters = 3)

parallelStartSocket(cpus = detectCores())

lmCV <- resample(featSelWrapper, ozoneTaskWithNAs, resampling = kFold3)

parallelStop()

lmCV

Resample Result
Task: ozoneClean
Learner: regr.lm.imputed.featsel
Aggr perf: mse.test.mean=20.5394
Runtime: 86.7071
After loading our friends the parallel and parallelMap packages, we define a task using the ozoneClean tibble, which still contains missing data. Next, we define an ordinary 3-fold cross-validation strategy for our cross-validation procedure. Finally, we start parallelization with parallelStartSocket() and start the cross-validation procedure by supplying the learner (the wrapped wrapper), task, and cross-validation strategy to the resample() function. This took nearly 90 seconds on my four-core machine, so I suggest you start the process and then read on for a summary of what the code is doing.
The cross-validation process proceeds like this:
- Split the data into three folds.
- For each fold:
- Use the rpart algorithm to impute the missing values.
- Perform feature selection:
- Use a selection method (such as backward search) to select combinations of features to train models on.
- Use 10-fold cross-validation to evaluate the performance of each model.
- Return the best-performing model for each of the three outer folds.
- Return the mean MSE to give us our estimate of performance.
We can see that our model-building process gives us a mean MSE of 20.54, suggesting a mean residual error of 4.53 on the original ozone scale (taking the square root of 20.54).
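You can do that conversion directly from the resampling result, assuming the lmCV object from listing 9.14 is still in your workspace.

# lmCV$aggr holds the aggregated performance (here the mean MSE) returned
# by resample(); its square root is the RMSE on the ozone scale.
sqrt(lmCV$aggr)   # ~4.53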
Due to their simple structure, linear models are usually quite simple to interpret, because we can look at the slopes for each predictor to infer how much the outcome variable is affected by each. However, whether these interpretations are justified or not depends on whether some model assumptions have been met, so in this section I’ll show you how to interpret the model output and generate some diagnostic plots.
First, we need to extract the model information from our model object using the getLearnerModel() function. By calling summary() on the model data, we get an output with lots of information about our model. Take a look at the following listing.
Listing 9.15. Interpreting the model
wrapperModelData <- getLearnerModel(wrapperModel)

summary(wrapperModelData)

Call:
stats::lm(formula = f, data = d)

Residuals:
    Min      1Q  Median      3Q     Max
-13.934  -2.950  -0.284   2.722  13.829

Coefficients:
               Estimate Std. Error t value Pr(>|t|)
(Intercept)   41.796670  27.800562    1.50  0.13362
Month         -0.296659   0.078272   -3.79  0.00018
Press_height  -0.010353   0.005161   -2.01  0.04562
Wind          -0.122521   0.128593   -0.95  0.34136
Humid          0.076434   0.014982    5.10  5.5e-07
Temp_Sand      0.227055   0.043397    5.23  2.9e-07
Temp_Monte     0.266534   0.063619    4.19  3.5e-05
Inv_height    -0.000474   0.000185   -2.56  0.01099
Visib         -0.005226   0.003558   -1.47  0.14275

Residual standard error: 4.46 on 352 degrees of freedom
Multiple R-squared:  0.689,  Adjusted R-squared:  0.682
F-statistic: 97.7 on 8 and 352 DF,  p-value: <2e-16
The Call component would normally tell us the formula we used to create the model (which variables, and whether we added more complex relationships between them). Because we built this model using mlr, we unfortunately don’t get that information here; but the model formula is all of the selected predictors combined linearly together.
The Residuals component gives us some summary statistics about the model residuals. Here we’re looking to see if the median is approximately 0 and that the first and third quartiles are approximately the same. If they aren’t, this might suggest the residuals are either not normally distributed, or heteroscedastic. In both situations, not only could this negatively impact model performance, but it could make our interpretation of the slopes incorrect.
The Coefficients component shows us a table of model parameters and their standard errors. The intercept is 41.8, which is the estimate of the ozone level when all other variables are 0. In this particular case it doesn’t really make sense for some of our variables to be 0 (month, for example), so we won’t draw too much interpretation from this. The estimates for the predictors are their slopes. For example, our model estimates that for a one-unit increase in the Temp_Sand variable, Ozone increases by 0.227 (holding all other variables constant). The Pr(>|t|) column contains the p-values that, in theory, represent the probability of seeing a slope this large if the population slope was actually 0. Use the p-values to guide your model-building process, by all means; but there are some problems associated with p-values, so don’t put too much faith in them.
Finally, Residual standard error is the same as RMSE, Multiple R-squared is an estimate of the proportion of variance in the data accounted for by our model (68.9%), and F-statistic is the ratio of the variance explained by our model to the variance not explained by the model. The p value here is an estimate of the probability that our model is better than just using the mean of Ozone to make predictions.
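These quantities can also be pulled out of the summary object directly, if you want to reuse them in code rather than read them from the printout.

modelSummary <- summary(wrapperModelData)

modelSummary$sigma          # residual standard error (4.46)
modelSummary$r.squared      # multiple R-squared (0.689)
modelSummary$adj.r.squared  # adjusted R-squared (0.682)
modelSummary$fstatistic     # F value with its degrees of freedom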
Note
Notice that the residual standard error value is close to, but not the same as, the RMSE estimated for the model-building process by cross-validation. This difference arises because we cross-validated the model-building procedure, not this particular model itself.
We can very quickly and easily print diagnostic plots for linear models in R by supplying the model data as the argument to plot(). Ordinarily, this will prompt you to press Enter to cycle through the plots. I find this irritating and so prefer to split the plotting device into four parts using the mfrow argument of the par() function. This means that when we create our diagnostic plots (there will be four of them), they will all be tiled in the same plotting window. These plots may help us identify flaws in our model that impact predictive performance.
Tip
I change this back again with the par() function afterward.
Listing 9.16. Creating diagnostic plots of the model
par(mfrow = c(2, 2))

plot(wrapperModelData)

par(mfrow = c(1, 1))
The resulting plot is shown in figure 9.10. The Residuals vs. Fitted plot shows the predicted ozone level on the x-axis and the residual on the y-axis for each case. We hope that there are no patterns in this plot; in other words, the amount of error shouldn't depend on the predicted value. In this situation, we have a curved relationship, which indicates that we have nonlinear relationships between the predictors and ozone, and/or heteroscedasticity.
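If you want to look at a single panel rather than all four at once, the which argument of plot() (for lm objects) selects individual diagnostic plots.

plot(wrapperModelData, which = 1)  # Residuals vs. Fitted only
plot(wrapperModelData, which = 2)  # Normal Q-Q only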
Figure 9.10. Plotting diagnostic plots for our linear model. The Residuals vs. Fitted and Scale-Location plots help identify patterns that suggest nonlinearity and heteroscedasticity. The Normal Q-Q plot helps identify non-normality of residuals, and the Residuals vs. Leverage plot helps identify influential outliers.

The Normal Q-Q (quantile-quantile) plot shows the quantiles of the model residuals plotted against the quantiles they would have if they were drawn from a theoretical normal distribution. If the data deviates considerably from a 1:1 diagonal line, this suggests the residuals are not normally distributed. This doesn't seem to be a problem for this model: the residuals line up nicely on the diagonal.
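If the visual check leaves you unsure, you could also run a formal normality test, such as the Shapiro-Wilk test, on the residuals. Treat its p value as a rough guide rather than a verdict, because with large samples it flags even trivial departures from normality.

residVals <- residuals(wrapperModelData)

shapiro.test(residVals)  # null hypothesis: the residuals are normally distributed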
The Scale-Location plot helps us identify heteroscedasticity of the residuals. There should be no patterns here, but it looks like the residuals become increasingly varied with larger predicted values, suggesting heteroscedasticity.
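A numerical complement to this plot is the Breusch-Pagan test, available from the lmtest package (an optional extra that isn't used elsewhere in this book); a small p value is consistent with the heteroscedasticity we see here.

# install.packages("lmtest")  # optional extra package, not used elsewhere in this book
library(lmtest)

bptest(wrapperModelData)  # Breusch-Pagan test; a small p value suggests heteroscedasticity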
Finally, the Residuals vs. Leverage plot helps us identify cases that have excessive influence on the model parameters (potential outliers). Cases that fall inside a dotted region of the plot called Cook's distance may be outliers whose inclusion or exclusion makes a large difference to the model. Because we can't even see Cook's distance here (it is beyond the axis limits), we have no worries about outliers.
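You can also compute the Cook's distances directly and flag potentially influential cases numerically; a common (though arbitrary) rule of thumb flags cases whose distance exceeds 4 divided by the number of cases.

cooksDist <- cooks.distance(wrapperModelData)

which(cooksDist > 4 / length(cooksDist))  # cases exceeding the 4/n rule of thumb

plot(wrapperModelData, which = 4)         # Cook's distance plotted case by case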
These diagnostic plots (particularly the Residuals vs. Fitted plot) indicate the presence of nonlinear relationships between the predictor variables and the outcome variable. We may, therefore, be able to get better predictive performance from a model that doesn't assume linearity. In the next chapter, I'll show you how generalized additive models work, and we'll train one to improve our model performance. I suggest you save your .R file, because we're going to continue using the same dataset and task in the next chapter. This way, I can highlight to you how much nonlinearity can impact the performance of linear regression.
While it often isn't easy to tell which algorithms will perform well for a given task, here are some strengths and weaknesses that will help you decide whether linear regression will perform well for you.
The strengths of linear regression are as follows:
- It produces models that are very interpretable.
- It can handle both continuous and categorical predictors.
- It is very computationally inexpensive.
The weaknesses of linear regression are these:
- It makes strong assumptions about the data, such as homoscedasticity, linearity, and the distribution of residuals (performance may suffer if these are violated).
- It can only learn linear relationships in the data.
- It cannot handle missing data.
Exercise 3
Instead of using a wrapper method, cross-validate the process of building our model using a filter method. Are the estimated MSE values similar? Which method is faster? Tips:
- First, create a filter wrapper using our imputeWrapper as the learner.
- Define a hyperparameter space to tune "fw.abs" using makeParamSet().
- Define a tuning wrapper that takes the filter wrapper as the learner and performs a grid search.
- Use resample() to perform cross-validation, using the tuning wrapper as the learner.
- Linear regression can handle continuous and categorical predictors.
- Linear regression uses the equation of a straight line to model relationships in the data as straight lines.
- Missing values can be imputed using supervised learning algorithms that use the information from all the other variables.
- Automated feature selection takes two forms: filter methods and wrapper methods.
- Filter methods of feature selection calculate univariate statistics outside of a model, to estimate how related predictors are to the outcome.
- Wrapper methods actively train models on different permutations of the predictors to select the best-performing combination.
- Preprocessing steps can be combined together in mlr by sequential wrapping of wrapper functions.
- Generate filter values using the default randomForestSRC_importance method:
filterValsForest <- generateFilterValuesData(ozoneTask,
                        method = "randomForestSRC_importance")

filterValsForest$data

plotFilterValues(filterValsForest) + theme_bw()

# The randomForestSRC_importance method ranks variables
# in a different order of importance.
- Repeat feature filtering using the default filter statistic:
filterWrapperDefault <- makeFilterWrapper(learner = lin)

tunedFeats <- tuneParams(filterWrapperDefault, task = ozoneTask,
                         resampling = kFold,
                         par.set = lmParamSpace,
                         control = gridSearch)

tunedFeats

# The default filter statistic (randomForestSRC) tends to select fewer
# predictors in this case, but the linear.correlation statistic was faster.
- Cross-validate building a linear regression model, but using a filter method:
filterWrapperImp <- makeFilterWrapper(learner = imputeWrapper,
                                      fw.method = "linear.correlation")

filterParam <- makeParamSet(
  makeIntegerParam("fw.abs", lower = 1, upper = 12)
)

tuneWrapper <- makeTuneWrapper(learner = filterWrapperImp,
                               resampling = kFold,
                               par.set = filterParam,
                               control = gridSearch)

filterCV <- resample(tuneWrapper, ozoneTask, resampling = kFold)

filterCV

# We have a similar MSE estimate for the filter method,
# but it is considerably faster than the wrapper method. No free lunch!