5 The mechanics of learning

This chapter covers

  • Understanding how algorithms can learn from data
  • Reframing learning as parameter estimation, using differentiation and gradient descent
  • Walking through a simple learning algorithm
  • How PyTorch supports learning with autograd

With the blooming of machine learning that has occurred over the last decade, the notion of machines that learn from experience has become a mainstream theme in both technical and journalistic circles. Now, how is it exactly that a machine learns? What are the mechanics of this process--or, in other words, what is the algorithm behind it? From the point of view of an observer, a learning algorithm is presented with input data that is paired with desired outputs. Once learning has occurred, that algorithm will be capable of producing correct outputs when it is fed new data that is similar enough to the input data it was trained on. With deep learning, this process works even when the input data and the desired output are far from each other: when they come from different domains, like an image and a sentence describing it, as we saw in chapter 2.


5.1 A timeless lesson in modeling

Building models that allow us to explain input/output relationships dates back centuries at least. When Johannes Kepler, a German mathematical astronomer (1571-1630), figured out his three laws of planetary motion in the early 1600s, he based them on data collected by his mentor Tycho Brahe during naked-eye observations (yep, seen with the naked eye and written on a piece of paper). Not having Newton's law of gravitation at his disposal (actually, Newton used Kepler's work to figure things out), Kepler extrapolated the simplest possible geometric model that could fit the data. And, by the way, it took him six years of staring at data that didn't make sense to him, together with incremental realizations, to finally formulate these laws.1 We can see this process in figure 5.1.

1.As recounted by physicist Michael Fowler: http://mng.bz/K2Ej.

Figure 5.1 Johannes Kepler considers multiple candidate models that might fit the data at hand, settling on an ellipse.

Kepler's first law reads: "The orbit of every planet is an ellipse with the Sun at one of the two foci." He didn't know what caused orbits to be ellipses, but given a set of observations for a planet (or a moon of a large planet, like Jupiter), he could estimate the shape (the eccentricity) and size (the semi-latus rectum) of the ellipse. With those two parameters computed from the data, he could tell where the planet might be during its journey in the sky. Once he figured out the second law--"A line joining a planet and the Sun sweeps out equal areas during equal intervals of time"--he could also tell when a planet would be at a particular point in space, given observations in time.2

2.Understanding the details of Kepler's laws is not needed to understand this chapter, but you can find more information at https://en.wikipedia.org/wiki/Kepler%27s_laws_of_planetary_motion.

So, how did Kepler estimate the eccentricity and size of the ellipse without computers, pocket calculators, or even calculus, none of which had been invented yet? We can learn how from Kepler's own recollection, in his book New Astronomy, or from how J. V. Field put it in his series of articles, "The origins of proof" (http://mng.bz/9007):

Essentially, Kepler had to try different shapes, using a certain number of observations to find the curve, then use the curve to find some more positions, for times when he had observations available, and then check whether these calculated positions agreed with the observed ones.

--J. V. Field

So let’s sum things up. Over six years, Kepler

  1. Got lots of good data from his friend Brahe (not without some struggle)
  2. Tried to visualize the heck out of it, because he felt there was something fishy going on
  3. Chose the simplest possible model that had a chance to fit the data (an ellipse)
  4. Split the data so that he could work on part of it and keep an independent set for validation
  5. Started with a tentative eccentricity and size for the ellipse and iterated until the model fit the observations
  6. Validated his model on the independent observations
  7. Looked back in disbelief

There's a data science handbook for you, all the way from 1609. The history of science is literally constructed on these seven steps. And we have learned over the centuries that deviating from them is a recipe for disaster.3

3.Unless you’re a theoretical physicist ;).

This is exactly what we will set out to do in order to learn something from data. In fact, in this book there is virtually no difference between saying that we'll fit the data or that we'll make an algorithm learn from data. The process always involves a function with a number of unknown parameters whose values are estimated from data: in short, a model.

We can argue that learning from data presumes the underlying model is not engineered to solve a specific problem (as was the ellipse in Kepler's work) and is instead capable of approximating a much wider family of functions. A neural network would have predicted Tycho Brahe's trajectories really well without requiring Kepler's flash of insight to try fitting the data to an ellipse. However, Sir Isaac Newton would have had a much harder time deriving his laws of gravitation from a generic model.

In this book, we're interested in models that are not engineered for solving a specific narrow task, but that can be automatically adapted to specialize themselves for any one of many similar tasks using input and output pairs--in other words, general models trained on data relevant to the specific task at hand. In particular, PyTorch is designed to make it easy to create models for which the derivatives of the fitting error, with respect to the parameters, can be expressed analytically. No worries if this last sentence didn't make any sense at all; coming next, we have a full section that hopefully clears it up for you.

This chapter is about how to automate generic function-fitting. After all, this is what we do with deep learning--deep neural networks being the generic functions we're talking about--and PyTorch makes this process as simple and transparent as possible. In order to make sure we get the key concepts right, we'll start with a model that is a lot simpler than a deep neural network. This will allow us to understand the mechanics of learning algorithms from first principles in this chapter, so we can move to more complicated models in chapter 6.


5.2 Learning is just parameter estimation

In this section, we'll learn how we can take data, choose a model, and estimate the parameters of the model so that it will give good predictions on new data. To do so, we'll leave the intricacies of planetary motion and divert our attention to the second-hardest problem in physics: calibrating instruments.

Figure 5.2 shows the high-level overview of what we'll implement by the end of the chapter. Given input data and the corresponding desired outputs (ground truth), as well as initial values for the weights, the model is fed input data (forward pass), and a measure of the error is evaluated by comparing the resulting outputs to the ground truth. In order to optimize the parameters of the model--its weights--the change in the error following a unit change in weights (that is, the gradient of the error with respect to the parameters) is computed using the chain rule for the derivative of a composite function (backward pass). The value of the weights is then updated in the direction that leads to a decrease in the error. The procedure is repeated until the error, evaluated on unseen data, falls below an acceptable level. If what we just said sounds obscure, we've got a whole chapter to clear things up. By the time we're done, all the pieces will fall into place, and this paragraph will make perfect sense.

We're now going to take a problem with a noisy dataset, build a model, and implement a learning algorithm for it. When we start, we'll be doing everything by hand, but by the end of the chapter we'll be letting PyTorch do all the heavy lifting for us. When we finish the chapter, we will have covered many of the essential concepts that underlie training deep neural networks, even if our motivating example is very simple and our model isn't actually a neural network (yet!).

Figure 5.2 Our mental model of the learning process

5.2.1 A hot problem

We just got back from a trip to some obscure location, and we brought back a fancy, wall-mounted analog thermometer. It looks great, and it's a perfect fit for our living room. Its only flaw is that it doesn't show units. Not to worry, we've got a plan: we'll build a dataset of readings and corresponding temperature values in our favorite units, choose a model, adjust its weights iteratively until a measure of the error is low enough, and finally be able to interpret the new readings in units we understand.4

4.This task--fitting model outputs to continuous values in terms of the types discussed in chapter 4--is called a regression problem. In chapter 7 and part 2, we will be concerned with classification problems.

Let's try following the same process Kepler used. Along the way, we'll use a tool he never had available: PyTorch!

5.2.2 Gathering some data

We'll start by making a note of temperature data in good old Celsius5 and measurements from our new thermometer, and figure things out. After a couple of weeks, here's the data (code/p1ch5/1_parameter_estimation.ipynb):

5.The author of this chapter is Italian, so please forgive him for using sensible units.

# In[2]:
import torch   # shown here so the snippet is self-contained

t_c = [0.5,  14.0, 15.0, 28.0, 11.0,  8.0,  3.0, -4.0,  6.0, 13.0, 21.0]
t_u = [35.7, 55.9, 58.2, 81.9, 56.3, 48.9, 33.9, 21.8, 48.4, 60.4, 68.4]
t_c = torch.tensor(t_c)
t_u = torch.tensor(t_u)

Here, the t_c values are temperatures in Celsius, and the t_u values are our unknown units. We can expect noise in both measurements, coming from the devices themselves and from our approximate readings. For convenience, we've already put the data into tensors; we'll use it in a minute.

5.2.3 Visualizing the data

A quick plot of our data in figure 5.3 tells us that it's noisy, but we think there's a pattern here.

Figure 5.3 Our unknown data just might follow a linear model.
Note

Spoiler alert: we know a linear model is correct because the problem and data have been fabricated, but please bear with us. It's a useful motivating example to build our understanding of what PyTorch is doing under the hood.
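The figure itself is not reproduced here, but a minimal plotting sketch (not a cell from the book's notebook; it simply scatters t_c against t_u with matplotlib) would look like this:

# Minimal sketch (not from the book's notebook): scatter the measurements.
from matplotlib import pyplot as plt

plt.plot(t_u.numpy(), t_c.numpy(), 'o')
plt.xlabel("Measurement (unknown units)")
plt.ylabel("Temperature (°Celsius)")
plt.show()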

5.2.4 Choosing a linear model as a first try

In the absence of further knowledge, we assume the simplest possible model for converting between the two sets of measurements, just like Kepler might have done. The two may be linearly related--that is, multiplying t_u by a factor and adding a constant, we may get the temperature in Celsius (up to an error that we omit):

t_c = w * t_u + b

Is this a reasonable assumption? Probably; we'll see how well the final model performs. We chose to name w and b after weight and bias, two very common terms for linear scaling and the additive constant--we'll bump into those all the time.6

6.The weight tells us how much a given input influences the output. The bias is what the output would be if all inputs were zero.

OK, now we need to estimate w and b, the parameters in our model, based on the data we have. We must do it so that the temperatures we obtain from running the unknown readings t_u through the model are close to the temperatures we actually measured in Celsius. If that sounds like fitting a line through a set of measurements, well, yes, because that's exactly what we're doing. We'll go through this simple example using PyTorch and realize that training a neural network will essentially involve changing the model for a slightly more elaborate one, with a few (or a metric ton) more parameters.

Let's flesh it out again: we have a model with some unknown parameters, and we need to estimate those parameters so that the error between predicted outputs and measured values is as low as possible. We notice that we still need to exactly define a measure of the error. Such a measure, which we refer to as the loss function, should be high if the error is high and should ideally be as low as possible for a perfect match. Our optimization process should therefore aim at finding w and b so that the loss function is at a minimum.


5.3 Less loss is what we want

A loss function (or cost function) is a function that computes a single numerical value that the learning process will attempt to minimize. The calculation of loss typically involves taking the difference between the desired outputs for some training samples and the outputs actually produced by the model when fed those samples. In our case, that would be the difference between the predicted temperatures t_p output by our model and the actual measurements: t_p - t_c.

We need to make sure the loss function makes the loss positive both when t_p is greater than and when it is less than the true t_c, since the goal is for t_p to match t_c. We have a few choices, the most straightforward being |t_p - t_c| and (t_p - t_c)^2. Based on the mathematical expression we choose, we can emphasize or discount certain errors. Conceptually, a loss function is a way of prioritizing which errors to fix from our training samples, so that our parameter updates result in adjustments to the outputs for the highly weighted samples instead of changes to some other samples' outputs that had a smaller loss.

Both of the example loss functions have a clear minimum at zero and grow monotonically as the predicted value moves further from the true value in either direction. Because the steepness of the growth also monotonically increases away from the minimum, both of them are said to be convex. Since our model is linear, the loss as a function of w and b is also convex.7 Cases where the loss is a convex function of the model parameters are usually great to deal with, because we can find a minimum very efficiently through specialized algorithms. However, we will instead use less powerful but more generally applicable methods in this chapter. We do so because, for the deep neural networks we are ultimately interested in, the loss is not a convex function of the model parameters.

7.Contrast that with the function shown in figure 5.6, which is not convex.

For our two loss functions |t_p - t_c| and (t_p - t_c)^2, as shown in figure 5.4, we notice that the square of the differences behaves more nicely around the minimum: the derivative of the error-squared loss with respect to t_p is zero when t_p equals t_c. The absolute value, on the other hand, has an undefined derivative right where we'd like to converge. This is less of an issue in practice than it looks like, but we'll stick to the square of differences for the time being.

Figure 5.4 Absolute difference versus difference squared

It's worth noting that the square difference also penalizes wildly wrong results more than the absolute difference does. Often, having more slightly wrong results is better than having a few wildly wrong ones, and the squared difference helps prioritize those as desired.
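To make the last point concrete, here is a quick check (not from the book) with two small errors and one large one; the large error dominates the squared loss far more than the absolute one:

# Quick check (not from the book): a single wildly wrong prediction
# dominates the mean squared error far more than the mean absolute error.
err = torch.tensor([0.5, -0.5, 10.0])
print(err.abs().mean())     # tensor(3.6667)
print((err ** 2).mean())    # tensor(33.5000)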

5.3.1 From problem back to PyTorch

We've figured out the model and the loss function--we've already got a good part of the high-level picture in figure 5.2 figured out. Now we need to set the learning process in motion and feed it actual data. Also, enough with math notation; let's switch to PyTorch--after all, we came here for the fun.

We've already created our data tensors, so now let's write out the model as a Python function:

# In[3]:
def model(t_u, w, b):
   return w * t_u + b

We're expecting t_u, w, and b to be the input tensor, weight parameter, and bias parameter, respectively. In our model, the parameters will be PyTorch scalars (aka zero-dimensional tensors), and the product operation will use broadcasting to yield the returned tensors. Anyway, time to define our loss:

# In[4]:
def loss_fn(t_p, t_c):
   squared_diffs = (t_p - t_c)**2
   return squared_diffs.mean()

Note that we are building a tensor of differences, taking their square element-wise, and finally producing a scalar loss function by averaging all of the elements in the resulting tensor. It is a mean square loss.
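As a side note, PyTorch ships the same loss as a library function; a small sanity check (not a cell from the book's notebook; it compares our loss_fn with torch.nn.functional.mse_loss, whose default reduction is also the mean) could look like this:

# Sanity check (not from the book's notebook): our hand-written loss matches
# PyTorch's built-in mean squared error with the default 'mean' reduction.
import torch.nn.functional as F
print(torch.allclose(loss_fn(t_u, t_c), F.mse_loss(t_u, t_c)))   # True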

We can now initialize the parameters, invoke the model,

# In[5]:
w = torch.ones(())
b = torch.zeros(())

t_p = model(t_u, w, b)
t_p

# Out[5]:
tensor([35.7000, 55.9000, 58.2000, 81.9000, 56.3000, 48.9000, 33.9000,
       21.8000, 48.4000, 60.4000, 68.4000])

and check the value of the loss:

# In[6]:
loss = loss_fn(t_p, t_c)
loss

# Out[6]:
tensor(1763.8846)

We implemented the model and the loss in this section. We've finally reached the meat of the example: how do we estimate w and b such that the loss reaches a minimum? We'll first work things out by hand and then learn how to use PyTorch's superpowers to solve the same problem in a more general, off-the-shelf way.

As an aside, the model above relies on broadcasting when it combines the zero-dimensional parameters w and b with the one-dimensional input t_u. Briefly, the rules are:

  • For each index dimension, counted from the back, if one of the operands is size 1 in that dimension, PyTorch will use the single entry along this dimension with each of the entries in the other tensor along this dimension.
  • If both sizes are greater than 1, they must be the same, and natural matching is used.
  • If one of the tensors has more index dimensions than the other, the entirety of the other tensor will be used for each entry along these dimensions.

This sounds complicated (and it can be error-prone if we don't pay close attention, which is why we name tensor dimensions as shown in section 3.4), but usually we can either write down the tensor dimensions to see what happens, or picture what happens by using spatial dimensions to show the broadcasting, as in the following figure.

Of course, this would all be theory if we didn't have some code examples:

# In[7]:
x = torch.ones(())
y = torch.ones(3,1)
z = torch.ones(1,3)
a = torch.ones(2, 1, 1)
print(f"shapes: x: {x.shape}, y: {y.shape}")
print(f"        z: {z.shape}, a: {a.shape}")
print("x * y:", (x * y).shape)
print("y * z:", (y * z).shape)
print("y * z * a:", (y * z * a).shape)

# Out[7]:

shapes: x: torch.Size([]), y: torch.Size([3, 1])
       z: torch.Size([1, 3]), a: torch.Size([2, 1, 1])
x * y: torch.Size([3, 1])

y * z: torch.Size([3, 3])
y * z * a: torch.Size([2, 3, 3])

5.4 Down along the gradient

We'll optimize the loss function with respect to the parameters using the gradient descent algorithm. In this section, we'll build our intuition for how gradient descent works from first principles, which will help us a lot in the future. As we mentioned, there are ways to solve our example problem more efficiently, but those approaches aren't applicable to most deep learning tasks. Gradient descent is actually a very simple idea, and it scales up surprisingly well to large neural network models with millions of parameters.

Figure 5.5 A cartoon depiction of the optimization process, where a person with knobs for w and b searches for the direction to turn the knobs that makes the loss decrease

Let's start with a mental image, which we conveniently sketched out in figure 5.5. Suppose we are in front of a machine sporting two knobs, labeled w and b. We are allowed to see the value of the loss on a screen, and we are told to minimize that value. Not knowing the effect of the knobs on the loss, we start fiddling with them and decide for each knob which direction makes the loss decrease. We decide to rotate both knobs in their direction of decreasing loss. Suppose we're far from the optimal value: we'd likely see the loss decrease quickly and then slow down as it gets closer to the minimum. We notice that at some point, the loss climbs back up again, so we invert the direction of rotation for one or both knobs. We also learn that when the loss changes slowly, it's a good idea to adjust the knobs more finely, to avoid reaching the point where the loss goes back up. After a while, eventually, we converge to a minimum.

5.4.1 Decreasing loss

Gradient descent is not that different from the scenario we just described. The idea is to compute the rate of change of the loss with respect to each parameter, and modify each parameter in the direction of decreasing loss. Just like when we were fiddling with the knobs, we can estimate the rate of change by adding a small number to w and b and seeing how much the loss changes in that neighborhood:

# In[8]:
delta = 0.1

loss_rate_of_change_w = \
   (loss_fn(model(t_u, w + delta, b), t_c) -
    loss_fn(model(t_u, w - delta, b), t_c)) / (2.0 * delta)

This is saying that in the neighborhood of the current values of w and b, a unit increase in w leads to some change in the loss. If the change is negative, then we need to increase w to minimize the loss, whereas if the change is positive, we need to decrease w. By how much? Applying a change to w that is proportional to the rate of change of the loss is a good idea, especially when the loss has several parameters: we apply a change to those that exert a significant change on the loss. It is also wise to change the parameters slowly in general, because the rate of change could be dramatically different at a distance from the neighborhood of the current w value. Therefore, we typically should scale the rate of change by a small factor. This scaling factor has many names; the one we use in machine learning is learning_rate:

# In[9]:
learning_rate = 1e-2

w = w - learning_rate * loss_rate_of_change_w

We can do the same with b:

# In[10]:
loss_rate_of_change_b = \
   (loss_fn(model(t_u, w, b + delta), t_c) -
    loss_fn(model(t_u, w, b - delta), t_c)) / (2.0 * delta)

b = b - learning_rate * loss_rate_of_change_b

This represents the basic parameter-update step for gradient descent. By reiterating these evaluations (and provided we choose a small enough learning rate), we will converge to an optimal value of the parameters for which the loss computed on the given data is minimal. We'll show the complete iterative process soon, but the way we just computed our rates of change is rather crude and needs an upgrade before we move on. Let's see why and how.
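Strung together, the crude finite-difference version of the update would look roughly like the following sketch (not code from the book; it reuses delta, learning_rate, and the tensors defined above, and with these values on the raw input it will in fact blow up, as section 5.4.3 shows):

# Sketch (not from the book): iterating the crude finite-difference updates.
for _ in range(10):
    rate_w = (loss_fn(model(t_u, w + delta, b), t_c) -
              loss_fn(model(t_u, w - delta, b), t_c)) / (2.0 * delta)
    rate_b = (loss_fn(model(t_u, w, b + delta), t_c) -
              loss_fn(model(t_u, w, b - delta), t_c)) / (2.0 * delta)
    w = w - learning_rate * rate_w
    b = b - learning_rate * rate_b
    print(float(loss_fn(model(t_u, w, b), t_c)))   # diverges with these settings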

5.4.2 Getting analytical

Computing the rate of change by using repeated evaluations of the model and loss in order to probe the behavior of the loss function in the neighborhood of w and b doesn't scale well to models with many parameters. Also, it is not always clear how large the neighborhood should be. We chose delta equal to 0.1 in the previous section, but it all depends on the shape of the loss as a function of w and b. If the loss changes too quickly compared to delta, we won't have a very good idea of in which direction the loss is decreasing the most.

What if we could make the neighborhood infinitesimally small, as in figure 5.6? That's exactly what happens when we analytically take the derivative of the loss with respect to a parameter. In a model with two or more parameters like the one we're dealing with, we compute the individual derivatives of the loss with respect to each parameter and put them in a vector of derivatives: the gradient.

Figure 5.6 Differences in the estimated directions for descent when evaluating them at discrete locations versus analytically

Computing the derivatives

In order to compute the derivative of the loss with respect to a parameter, we can apply the chain rule and compute the derivative of the loss with respect to its input (which is the output of the model), times the derivative of the model with respect to the parameter:

d loss_fn / d w = (d loss_fn / d t_p) * (d t_p / d w)

Recall that our model is a linear function, and our loss is a sum of squares. Let's figure out the expressions for the derivatives. Recalling the expression for the loss:

# In[4]:
def loss_fn(t_p, t_c):
   squared_diffs = (t_p - t_c)**2
   return squared_diffs.mean()

Remembering that d x^2 / d x = 2 x, we get

# In[11]:
def dloss_fn(t_p, t_c):
    dsq_diffs = 2 * (t_p - t_c) / t_p.size(0)    #1 The division comes from taking the derivative of the mean.
    return dsq_diffs

Applying the derivatives to the model

For the model, recalling that our model is

# In[3]:
def model(t_u, w, b):
   return w * t_u + b

we get these derivatives:

# In[12]:
def dmodel_dw(t_u, w, b):
   return t_u

# In[13]:
def dmodel_db(t_u, w, b):
   return 1.0

Defining the gradient function

Putting all of this together, the function returning the gradient of the loss with respect to w and b is

# In[14]:
def grad_fn(t_u, t_c, t_p, w, b):
    dloss_dtp = dloss_fn(t_p, t_c)
    dloss_dw = dloss_dtp * dmodel_dw(t_u, w, b)
    dloss_db = dloss_dtp * dmodel_db(t_u, w, b)
    return torch.stack([dloss_dw.sum(), dloss_db.sum()])     #1 Summing collapses the per-sample gradients into one value per parameter.

The same idea expressed in mathematical notation is shown in figure 5.7. Again, we're averaging (that is, summing and dividing by a constant) over all the data points to get a single scalar quantity for each partial derivative of the loss.

Figure 5.7 The derivative of the loss function with respect to the weights
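The figure is not reproduced here; written out in the same informal notation we used above (and assuming the mean square loss defined earlier, with N samples), the two partial derivatives it depicts are

d loss_fn / d w = 2/N * sum((w * t_u + b - t_c) * t_u)
d loss_fn / d b = 2/N * sum((w * t_u + b - t_c) * 1.0)

which is exactly what grad_fn computes.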

5.4.3 Iterating to fit the model

We now have everything in place to optimize our parameters. Starting from a tentative value for a parameter, we can iteratively apply updates to it for a fixed number of iterations, or until w and b stop changing. There are several stopping criteria; for now, we'll stick to a fixed number of iterations.

The training loop

Since we're at it, let's introduce another piece of terminology. We call a training iteration during which we update the parameters for all of our training samples an epoch.

The complete training loop looks like this (code/p1ch5/1_parameter_estimation.ipynb):

# In[15]:
def training_loop(n_epochs, learning_rate, params, t_u, t_c):
    for epoch in range(1, n_epochs + 1):
        w, b = params
 
        t_p = model(t_u, w, b)                             #1 Forward pass
        loss = loss_fn(t_p, t_c)
        grad = grad_fn(t_u, t_c, t_p, w, b)                #2 Backward pass (our hand-written gradient)
 
        params = params - learning_rate * grad
 
        print('Epoch %d, Loss %f' % (epoch, float(loss)))  #3 Logging (simplified here)
 
    return params

The actual logging logic used to produce the output shown in this text is more complicated (see cell 15 in the same notebook: http://mng.bz/pBB8), but the differences are unimportant for understanding the core concepts in this chapter.

Now, let’s invoke our training loop:

# In[17]:
training_loop(
   n_epochs = 100,
   learning_rate = 1e-2,
   params = torch.tensor([1.0, 0.0]),
   t_u = t_u,
   t_c = t_c)

# Out[17]:
Epoch 1, Loss 1763.884644
   Params: tensor([-44.1730,  -0.8260])
   Grad:   tensor([4517.2969,   82.6000])
Epoch 2, Loss 5802485.500000
   Params: tensor([2568.4014,   45.1637])
   Grad:   tensor([-261257.4219,   -4598.9712])
Epoch 3, Loss 19408035840.000000
   Params: tensor([-148527.7344,   -2616.3933])
   Grad:   tensor([15109614.0000,   266155.7188])
...
Epoch 10, Loss 90901154706620645225508955521810432.000000
   Params: tensor([3.2144e+17, 5.6621e+15])
   Grad:   tensor([-3.2700e+19, -5.7600e+17])
Epoch 11, Loss inf
   Params: tensor([-1.8590e+19, -3.2746e+17])
   Grad:   tensor([1.8912e+21, 3.3313e+19])

tensor([-1.8590e+19, -3.2746e+17])

Overtraining

Wait, what happened? Our training process literally blew up, leading to losses becoming inf. This is a clear sign that params is receiving updates that are too large, and their values start oscillating back and forth as each update overshoots and the next overcorrects even more. The optimization process is unstable: it diverges instead of converging to a minimum. We want to see smaller and smaller updates to params, not larger, as shown in figure 5.8.

Figure 5.8 Top: Diverging optimization on a convex function (parabola-like) due to large steps. Bottom: Converging optimization with small steps.

How can we limit the magnitude of learning_rate * grad? Well, that looks easy. We could simply choose a smaller learning_rate, and indeed, the learning rate is one of the things we typically change when training does not go as well as we would like.8 We usually change learning rates by orders of magnitude, so we might try with 1e-3 or 1e-4, which would decrease the magnitude of the updates by orders of magnitude. Let's go with 1e-4 and see how it works out:

8.The fancy name for this is hyperparameter tuning. Hyperparameter refers to the fact that we are training the model's parameters, but the hyperparameters control how this training goes. Typically these are more or less set manually. In particular, they cannot be part of the same optimization.

# In[18]:
training_loop(
   n_epochs = 100,
   learning_rate = 1e-4,
   params = torch.tensor([1.0, 0.0]),
   t_u = t_u,
   t_c = t_c)

# Out[18]:
Epoch 1, Loss 1763.884644
   Params: tensor([ 0.5483, -0.0083])
   Grad:   tensor([4517.2969,   82.6000])
Epoch 2, Loss 323.090546
   Params: tensor([ 0.3623, -0.0118])
   Grad:   tensor([1859.5493,   35.7843])
Epoch 3, Loss 78.929634
   Params: tensor([ 0.2858, -0.0135])
   Grad:   tensor([765.4667,  16.5122])
...
Epoch 10, Loss 29.105242
   Params: tensor([ 0.2324, -0.0166])
   Grad:   tensor([1.4803, 3.0544])
Epoch 11, Loss 29.104168
   Params: tensor([ 0.2323, -0.0169])
   Grad:   tensor([0.5781, 3.0384])
...
Epoch 99, Loss 29.023582
   Params: tensor([ 0.2327, -0.0435])
   Grad:   tensor([-0.0533,  3.0226])
Epoch 100, Loss 29.022669
   Params: tensor([ 0.2327, -0.0438])
   Grad:   tensor([-0.0532,  3.0226])

tensor([ 0.2327, -0.0438])

Nice--the behavior is now stable. But there's another problem: the updates to the parameters are very small, so the loss decreases very slowly and eventually stalls. We could obviate this issue by making learning_rate adaptive: that is, change it according to the magnitude of the updates. There are optimization schemes that do that, and we'll see one toward the end of this chapter, in section 5.5.2.

However, there's another potential troublemaker in the update term: the gradient itself. Let's go back and look at grad at epoch 1 during optimization.

5.4.4 Normalizing inputs

We can see that the first-epoch gradient for the weight is about 50 times larger than the gradient for the bias. This means the weight and bias live in differently scaled spaces. If this is the case, a learning rate that's large enough to meaningfully update one will be so large as to be unstable for the other; and a rate that's appropriate for the other won't be large enough to meaningfully change the first. That means we're not going to be able to update our parameters unless we change something about our formulation of the problem. We could have individual learning rates for each parameter, but for models with many parameters, this would be too much to bother with; it's babysitting of the kind we don't like.

There's a simpler way to keep things in check: changing the inputs so that the gradients aren't quite so different. We can make sure the range of the input doesn't get too far from the range of -1.0 to 1.0, roughly speaking. In our case, we can achieve something close enough to that by simply multiplying t_u by 0.1:

# In[19]:
t_un = 0.1 * t_u

Here, we denote the normalized version of t_u by appending an n to the variable name. At this point, we can run the training loop on our normalized input:

# In[20]:
training_loop(
    n_epochs = 100,
    learning_rate = 1e-2,
    params = torch.tensor([1.0, 0.0]),
    t_u = t_un,                  #1 We pass the rescaled t_un in place of t_u.
    t_c = t_c)
 
# Out[20]:
Epoch 1, Loss 80.364342
    Params: tensor([1.7761, 0.1064])
    Grad:   tensor([-77.6140, -10.6400])
Epoch 2, Loss 37.574917
    Params: tensor([2.0848, 0.1303])
    Grad:   tensor([-30.8623,  -2.3864])
Epoch 3, Loss 30.871077
    Params: tensor([2.2094, 0.1217])
    Grad:   tensor([-12.4631,   0.8587])
...
Epoch 10, Loss 29.030487
    Params: tensor([ 2.3232, -0.0710])
    Grad:   tensor([-0.5355,  2.9295])
Epoch 11, Loss 28.941875
    Params: tensor([ 2.3284, -0.1003])
    Grad:   tensor([-0.5240,  2.9264])
...
Epoch 99, Loss 22.214186
    Params: tensor([ 2.7508, -2.4910])
    Grad:   tensor([-0.4453,  2.5208])
Epoch 100, Loss 22.148710
    Params: tensor([ 2.7553, -2.5162])
    Grad:   tensor([-0.4446,  2.5165])
 
tensor([ 2.7553, -2.5162])

Even though we set our learning rate back to 1e-2, the parameters don't blow up during iterative updates. Let's take a look at the gradients: they're of similar magnitude, so using a single learning_rate for both parameters works just fine. We could probably do a better job of normalization than a simple rescaling by a factor of 10, but since doing so is good enough for our needs, we're going to stick with that for now.

Note

The normalization here absolutely helps get the network trained, but you could make an argument that it's not strictly needed to optimize the parameters for this particular problem. That's absolutely true! This problem is small enough that there are numerous ways to beat the parameters into submission. However, for larger, more sophisticated problems, normalization is an easy and effective (if not crucial!) tool to use to improve model convergence.
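If we wanted something more principled than multiplying by 0.1, a common alternative (a sketch only; it is not used in the rest of this chapter) is to standardize the input to zero mean and unit standard deviation:

# Sketch (not used in the rest of the chapter): standardize the input
# instead of rescaling it by a hand-picked factor.
t_un_std = (t_u - t_u.mean()) / t_u.std()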

Let's run the loop for enough iterations to see the changes in params get small. We'll change n_epochs to 5,000:

# In[21]:
params = training_loop(
   n_epochs = 5000,
   learning_rate = 1e-2,
   params = torch.tensor([1.0, 0.0]),
   t_u = t_un,
   t_c = t_c,
   print_params = False)

params

# Out[21]:
Epoch 1, Loss 80.364342
Epoch 2, Loss 37.574917
Epoch 3, Loss 30.871077
...
Epoch 10, Loss 29.030487
Epoch 11, Loss 28.941875
...
Epoch 99, Loss 22.214186
Epoch 100, Loss 22.148710
...
Epoch 4000, Loss 2.927680
Epoch 5000, Loss 2.927648

tensor([  5.3671, -17.3012])

Good: our loss decreases while we change parameters along the direction of gradient descent. It doesn't go exactly to zero; this could mean there aren't enough iterations to converge to zero, or that the data points don't sit exactly on a line. As we anticipated, our measurements were not perfectly accurate, or there was noise involved in the reading.

But look: the values for w and b look an awful lot like the numbers we need to use to convert Celsius to Fahrenheit (after accounting for our earlier normalization, when we multiplied our inputs by 0.1). The exact values would be w=5.5556 and b=-17.7778. Our fancy thermometer was showing temperatures in Fahrenheit the whole time. No big discovery, except that our gradient descent optimization process works!

5.4.5 Visualizing (again)

Let's revisit something we did right at the start: plotting our data. Seriously, this is the first thing anyone doing data science should do. Always plot the heck out of the data:

# In[22]:
%matplotlib inline
from matplotlib import pyplot as plt
 
t_p = model(t_un, *params)                    #1
 
fig = plt.figure(dpi=600)
plt.xlabel("Temperature (°Fahrenheit)")
plt.ylabel("Temperature (°Celsius)")
plt.plot(t_u.numpy(), t_p.detach().numpy())   #2
plt.plot(t_u.numpy(), t_c.numpy(), 'o')

We are using a Python trick called argument unpacking here: *params means to pass the elements of params as individual arguments. In Python, this is usually done with lists or tuples, but we can also use argument unpacking with PyTorch tensors, which are split along the leading dimension. So here, model(t_un, *params) is equivalent to model(t_un, params[0], params[1]).
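A quick check (not from the book's notebook) makes the equivalence explicit:

# Quick check (not from the book's notebook): both calls give the same result.
print(model(t_un, *params))
print(model(t_un, params[0], params[1]))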

This code produces figure 5.9. Our linear model is a good model for the data, it seems. It also seems our measurements are somewhat erratic. We should either call our optometrist for a new pair of glasses or think about returning our fancy thermometer.

Figure 5.9 The plot of our linear-fit model (solid line) versus our input data (circles)

5.5 PyTorch’s autograd: Backpropagating all things

In our little adventure, we just saw a simple example of backpropagation: we computed the gradient of a composition of functions--the model and the loss--with respect to their innermost parameters (w and b) by propagating derivatives backward using the chain rule. The basic requirement here is that all functions we're dealing with can be differentiated analytically. If this is the case, we can compute the gradient--what we earlier called "the rate of change of the loss"--with respect to the parameters in one sweep.

Even if we have a complicated model with millions of parameters, as long as our model is differentiable, computing the gradient of the loss with respect to the parameters amounts to writing the analytical expression for the derivatives and evaluating them once. Granted, writing the analytical expressions for the derivatives of a very deep composition of linear and nonlinear functions is not a lot of fun.9 It isn't particularly quick, either.

9.Or maybe it is; we won't judge how you spend your weekend!

5.5.1 Computing the gradient automatically

This is when PyTorch tensors come to the rescue, with a PyTorch component called autograd. Chapter 3 presented a comprehensive overview of what tensors are and what functions we can call on them. We left out one very interesting aspect, however: PyTorch tensors can remember where they come from, in terms of the operations and parent tensors that originated them, and they can automatically provide the chain of derivatives of such operations with respect to their inputs. This means we won't need to derive our model by hand;10 given a forward expression, no matter how nested, PyTorch will automatically provide the gradient of that expression with respect to its input parameters.

10.Bummer! What are we going to do on Saturdays, now?

Applying autograd

At this point, the best way to proceed is to rewrite our thermometer calibration code, this time using autograd, and see what happens. First, we recall our model and loss function.

code/p1ch5/2_autograd.ipynb
# In[3]:
def model(t_u, w, b):
   return w * t_u + b

# In[4]:
def loss_fn(t_p, t_c):
   squared_diffs = (t_p - t_c)**2
   return squared_diffs.mean()

Let’s again initialize a parameters tensor:

# In[5]:
params = torch.tensor([1.0, 0.0], requires_grad=True)

Using the grad attribute

Notice the requires_grad=True argument to the tensor constructor? That argument is telling PyTorch to track the entire family tree of tensors resulting from operations on params. In other words, any tensor that has params as an ancestor will have access to the chain of functions that were called to get from params to that tensor. In case these functions are differentiable (and most PyTorch tensor operations will be), the value of the derivative will be automatically populated as a grad attribute of the params tensor.

In general, all PyTorch tensors have an attribute named grad. Normally, it's None:

# In[6]:
params.grad is None

# Out[6]:
True

All we have to do to populate it is to start with a tensor with requires_grad set to True, then call the model and compute the loss, and then call backward on the loss tensor:

# In[7]:
loss = loss_fn(model(t_u, *params), t_c)
loss.backward()

params.grad

# Out[7]:
tensor([4517.2969,   82.6000])

At this point, the grad attribute of params contains the derivatives of the loss with respect to each element of params.

When we compute our loss while the parameters w and b require gradients, in addition to performing the actual computation, PyTorch creates the autograd graph with the operations (in black circles) as nodes, as shown in the top row of figure 5.10. When we call loss.backward(), PyTorch traverses this graph in the reverse direction to compute the gradients, as shown by the arrows in the bottom row of the figure.

Figure 5.10 The forward graph and backward graph of the model as computed with autograd

Accumulating grad functions

We could have any number of tensors with requires_grad set to True and any composition of functions. In this case, PyTorch would compute the derivatives of the loss throughout the chain of functions (the computation graph) and accumulate their values in the grad attribute of those tensors (the leaf nodes of the graph).

Alert! Big gotcha ahead. This is something PyTorch newcomers--and a lot of more experienced folks, too--trip up on regularly. We just wrote accumulate, not store.

Warning

Calling backward will lead derivatives to accumulate at leaf nodes. We need to zero the gradient explicitly after using it for parameter updates.

Let's repeat together: calling backward will lead derivatives to accumulate at leaf nodes. So if backward was called earlier, the loss is evaluated again, backward is called again (as in any training loop), and the gradient at each leaf is accumulated (that is, summed) on top of the one computed at the previous iteration, which leads to an incorrect value for the gradient.

In order to prevent this from occurring, we need to zero the gradient explicitly at each iteration. We can do this easily using the in-place zero_ method:

# In[8]:
if params.grad is not None:
   params.grad.zero_()
Note

You might be curious why zeroing the gradient is a required step instead of zeroing happening automatically whenever we call backward. Doing it this way provides more flexibility and control when working with gradients in complicated models.
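To see the accumulation in action, a small experiment (not from the book's notebook) could look like this:

# Small experiment (not from the book's notebook): calling backward twice
# without zeroing accumulates (sums) the gradients at the leaf.
params = torch.tensor([1.0, 0.0], requires_grad=True)
loss_fn(model(t_u, *params), t_c).backward()
print(params.grad)        # the gradient of the loss
loss_fn(model(t_u, *params), t_c).backward()
print(params.grad)        # twice the previous value: it accumulated
params.grad.zero_()       # back to a clean slate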

Having this reminder drilled into our heads, let's see what our autograd-enabled training code looks like, start to finish:

# In[9]:
def training_loop(n_epochs, learning_rate, params, t_u, t_c):
    for epoch in range(1, n_epochs + 1):
        if params.grad is not None:                #1 This could be done at any point in the loop prior to calling loss.backward().
            params.grad.zero_()
 
        t_p = model(t_u, *params)
        loss = loss_fn(t_p, t_c)
        loss.backward()
 
        with torch.no_grad():                      #2 Update the parameters without autograd tracking the update; the optimizers in section 5.5.2 hide this detail.
            params -= learning_rate * params.grad
 
        if epoch % 500 == 0:
            print('Epoch %d, Loss %f' % (epoch, float(loss)))
 
    return params

Note that our code updating params is not quite as straightforward as we might have expected. There are two particularities. First, we are encapsulating the update in a no_grad context using the Python with statement. This means within the with block, the PyTorch autograd mechanism should look away:11 that is, not add edges to the forward graph. In fact, when we are executing this bit of code, the forward graph that PyTorch records is consumed when we call backward, leaving us with the params leaf node. But now we want to change this leaf node before we start building a fresh forward graph on top of it. While this use case is usually wrapped inside the optimizers we discuss in section 5.5.2, we will take a closer look at another common use of no_grad in section 5.5.4.

11.In reality, it will track that something changed params using an in-place operation.

Second, we update params in place. This means we keep the same params tensor around but subtract our update from it. When using autograd, we usually avoid in-place updates because PyTorch's autograd engine might need the values we would be modifying for the backward pass. Here, however, we are operating without autograd, and it is beneficial to keep the params tensor. Not replacing the parameters by assigning new tensors to their variable name will become crucial when we register our parameters with the optimizer in section 5.5.2.

Let’s see if it works:

# In[10]:
training_loop(
    n_epochs = 5000,
    learning_rate = 1e-2,
    params = torch.tensor([1.0, 0.0], requires_grad=True),  #1
    t_u = t_un,                                             #2
    t_c = t_c)
 
# Out[10]:
Epoch 500, Loss 7.860116
Epoch 1000, Loss 3.828538
Epoch 1500, Loss 3.092191
Epoch 2000, Loss 2.957697
Epoch 2500, Loss 2.933134
Epoch 3000, Loss 2.928648
Epoch 3500, Loss 2.927830
Epoch 4000, Loss 2.927679
Epoch 4500, Loss 2.927652
Epoch 5000, Loss 2.927647
 
tensor([  5.3671, -17.3012], requires_grad=True)

The result is the same as we got previously. Good for us! It means that while we are capable of computing derivatives by hand, we no longer need to.

5.5.2 Optimizers a la carte

In the example code, we used vanilla gradient descent for optimization, which worked fine for our simple case. Needless to say, there are several optimization strategies and tricks that can assist convergence, especially when models get complicated.

We'll dive deeper into this topic in later chapters, but now is the right time to introduce the way PyTorch abstracts the optimization strategy away from user code: that is, away from the training loop we've examined. This saves us from the boilerplate busywork of having to update each and every parameter of our model ourselves. The torch module has an optim submodule where we can find classes implementing different optimization algorithms. Here's an abridged list (code/p1ch5/3_optimizers.ipynb):

# In[5]:
import torch.optim as optim

dir(optim)

# Out[5]:
['ASGD',
'Adadelta',
'Adagrad',
'Adam',
'Adamax',
'LBFGS',
'Optimizer',
'RMSprop',
'Rprop',
'SGD',
'SparseAdam',
...
]

Every optimizer constructor takes a list of parameters (aka PyTorch tensors, typically with requires_grad set to True) as the first input. All parameters passed to the optimizer are retained inside the optimizer object so the optimizer can update their values and access their grad attribute, as represented in figure 5.11.

Figure 5.11 (A) Conceptual representation of how an optimizer holds a reference to parameters. (B) After a loss is computed from inputs, (C) a call to .backward leads to .grad being populated on parameters. (D) At that point, the optimizer can access .grad and compute the parameter updates.

Each optimizer exposes two methods: zero_grad and step. zero_grad zeroes the grad attribute of all the parameters passed to the optimizer upon construction. step updates the value of those parameters according to the optimization strategy implemented by the specific optimizer.

Using a gradient descent optimizer

Let’s create params and instantiate a gradient descent optimizer:

# In[6]:
params = torch.tensor([1.0, 0.0], requires_grad=True)
learning_rate = 1e-5
optimizer = optim.SGD([params], lr=learning_rate)

Here SGD stands for stochastic gradient descent. Actually, the optimizer itself is exactly a vanilla gradient descent (as long as the momentum argument is set to 0.0, which is the default). The term stochastic comes from the fact that the gradient is typically obtained by averaging over a random subset of all input samples, called a minibatch. However, the optimizer does not know if the loss was evaluated on all the samples (vanilla) or a random subset of them (stochastic), so the algorithm is literally the same in the two cases.
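For instance, a momentum variant (shown here only for illustration; it is not used in this chapter, and the momentum value is an arbitrary choice) would be constructed like this:

# Illustration only (not used in this chapter): SGD with a momentum term.
momentum_optimizer = optim.SGD([params], lr=learning_rate, momentum=0.9)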

Anyway, let’s take our fancy new optimizer for a spin:

# In[7]:
t_p = model(t_u, *params)
loss = loss_fn(t_p, t_c)
loss.backward()

optimizer.step()

params

# Out[7]:
tensor([ 9.5483e-01, -8.2600e-04], requires_grad=True)

The value of params is updated upon calling step without us having to touch it ourselves! What happens is that the optimizer looks into params.grad and updates params, subtracting learning_rate times grad from it, exactly as in our former hand-rolled code.

Ready to stick this code in a training loop? Nope! The big gotcha almost got us--we forgot to zero out the gradients. Had we called the previous code in a loop, gradients would have accumulated in the leaves at every call to backward, and our gradient descent would have been all over the place! Here's the loop-ready code, with the extra zero_grad at the correct spot (right before the call to backward):

# In[8]:
params = torch.tensor([1.0, 0.0], requires_grad=True)
learning_rate = 1e-2
optimizer = optim.SGD([params], lr=learning_rate)
 
t_p = model(t_un, *params)
loss = loss_fn(t_p, t_c)
 
optimizer.zero_grad()      #1 The exact placement of this call is somewhat arbitrary; it just needs to come before loss.backward().
loss.backward()
optimizer.step()
 
params
 
# Out[8]:
tensor([1.7761, 0.1064], requires_grad=True)

Perfect! See how the optim module helps us abstract away the specific optimization scheme? All we have to do is provide a list of params to it (that list can be extremely long, as is needed for very deep neural network models), and we can forget about the details.

Let’s update our training loop accordingly:

# In[9]:
def training_loop(n_epochs, optimizer, params, t_u, t_c):
    for epoch in range(1, n_epochs + 1):
        t_p = model(t_u, *params)
        loss = loss_fn(t_p, t_c)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if epoch % 500 == 0:
            print('Epoch %d, Loss %f' % (epoch, float(loss)))

    return params

# In[10]:
params = torch.tensor([1.0, 0.0], requires_grad=True)
learning_rate = 1e-2
optimizer = optim.SGD([params], lr=learning_rate)   #1

training_loop(
    n_epochs = 5000,
    optimizer = optimizer,
    params = params,                                #1
    t_u = t_un,
    t_c = t_c)

# Out[10]:
Epoch 500, Loss 7.860118
Epoch 1000, Loss 3.828538
Epoch 1500, Loss 3.092191
Epoch 2000, Loss 2.957697
Epoch 2500, Loss 2.933134
Epoch 3000, Loss 2.928648
Epoch 3500, Loss 2.927830
Epoch 4000, Loss 2.927680
Epoch 4500, Loss 2.927651
Epoch 5000, Loss 2.927648

tensor([  5.3671, -17.3012], requires_grad=True)

Again, we get the same result as before. Great: this is further confirmation that we know how to descend a gradient by hand!

Testing other optimizers

In order to test more optimizers, all we have to do is instantiate a different optimizer, say Adam, instead of SGD. The rest of the code stays as it is. Pretty handy stuff.

We won't go into much detail about Adam; suffice it to say that it is a more sophisticated optimizer in which the learning rate is set adaptively. In addition, it is a lot less sensitive to the scaling of the parameters--so insensitive that we can go back to using the original (non-normalized) input t_u, and even increase the learning rate to 1e-1, and Adam won't even blink:

# In[11]:
params = torch.tensor([1.0, 0.0], requires_grad=True)
learning_rate = 1e-1
optimizer = optim.Adam([params], lr=learning_rate)  #1

training_loop(
    n_epochs = 2000,
    optimizer = optimizer,
    params = params,
    t_u = t_u,                                      #2
    t_c = t_c)

# Out[11]:
Epoch 500, Loss 7.612903
Epoch 1000, Loss 3.086700
Epoch 1500, Loss 2.928578
Epoch 2000, Loss 2.927646

tensor([  0.5367, -17.3021], requires_grad=True)

The optimizer is not the only flexible part of our training loop. Let's turn our attention to the model. In order to train a neural network on the same data and the same loss, all we would need to change is the model function. It wouldn't make particular sense in this case, since we know that converting Celsius to Fahrenheit amounts to a linear transformation, but we'll do it anyway in chapter 6. We'll see quite soon that neural networks allow us to remove our arbitrary assumptions about the shape of the function we should be approximating. Even so, we'll see how neural networks manage to be trained even when the underlying processes are highly nonlinear (such as in the case of describing an image with a sentence, as we saw in chapter 2).

We have touched on a lot of the essential concepts that will enable us to train complicated deep learning models while knowing what's going on under the hood: backpropagation to estimate gradients, autograd, and optimizing weights of models using gradient descent or other optimizers. Really, there isn't a lot more. The rest is mostly filling in the blanks, however extensive they are.

Next up, we're going to offer an aside on how to split our samples, because that sets up a perfect use case for learning how to better control autograd.

5.5.3 Training, validation, and overfitting

Johannes Kepler taught us one last thing that we didn't discuss so far, remember? He kept part of the data on the side so that he could validate his models on independent observations. This is a vital thing to do, especially when the model we adopt could potentially approximate functions of any shape, as in the case of neural networks. In other words, a highly adaptable model will tend to use its many parameters to make sure the loss is minimal at the data points, but we'll have no guarantee that the model behaves well away from or in between the data points. After all, that's what we're asking the optimizer to do: minimize the loss at the data points. Sure enough, if we had independent data points that we didn't use to evaluate our loss or descend along its negative gradient, we would soon find out that evaluating the loss at those independent data points would yield higher-than-expected loss. We have already mentioned this phenomenon, called overfitting.

The first action we can take to combat overfitting is recognizing that it might happen. In order to do so, as Kepler figured out in 1600, we must take a few data points out of our dataset (the validation set) and only fit our model on the remaining data points (the training set), as shown in figure 5.12. Then, while we're fitting the model, we can evaluate the loss once on the training set and once on the validation set. When we're trying to decide if we've done a good job of fitting our model to the data, we must look at both!

Figure 5.12 Conceptual representation of a data-producing process and the collection and use of training data and independent validation data

Evaluating the training loss

The training loss will tell us if our model can fit the training set at all--in other words, if our model has enough capacity to process the relevant information in the data. If our mysterious thermometer somehow managed to measure temperatures using a logarithmic scale, our poor linear model would not have had a chance to fit those measurements and provide us with a sensible conversion to Celsius. In that case, our training loss (the loss we were printing in the training loop) would stop decreasing well before approaching zero.

A deep neural network can potentially approximate complicated functions, provided that the number of neurons, and therefore parameters, is high enough. The fewer the number of parameters, the simpler the shape of the function our network will be able to approximate. So, rule 1: if the training loss is not decreasing, chances are the model is too simple for the data. The other possibility is that our data just doesn't contain meaningful information that lets it explain the output: if the nice folks at the shop sell us a barometer instead of a thermometer, we will have little chance of predicting temperature in Celsius from just pressure, even if we use the latest neural network architecture from Quebec (www.umontreal.ca/en/artificialintelligence).

Generalizing to the validation set

What about the validation set? Well, if the loss evaluated on the validation set doesn't decrease along with the training set, it means our model is improving its fit of the samples it is seeing during training, but it is not generalizing to samples outside this precise set. As soon as we evaluate the model at new, previously unseen points, the values of the loss function are poor. So, rule 2: if the training loss and the validation loss diverge, we're overfitting.

Let's delve into this phenomenon a little, going back to our thermometer example. We could have decided to fit the data with a more complicated function, like a piecewise polynomial or a really large neural network. It could generate a model meandering its way through the data points, as in figure 5.13, just because it pushes the loss very close to zero. Since the behavior of the function away from the data points does not increase the loss, there's nothing to keep the model in check for inputs away from the training data points.

Figure 5.13 Rather extreme example of overfitting

What's the cure, though? Good question. From what we just said, overfitting really looks like a problem of making sure the behavior of the model in between data points is sensible for the process we're trying to approximate. First of all, we should make sure we get enough data for the process. If we collected data from a sinusoidal process by sampling it regularly at a low frequency, we would have a hard time fitting a model to it.

Assuming we have enough data points, we should make sure the model that is capable of fitting the training data is as regular as possible in between them. There are several ways to achieve this. One is adding penalization terms to the loss function, to make it cheaper for the model to behave more smoothly and change more slowly (up to a point); a small sketch of this follows below. Another is to add noise to the input samples, to artificially create new data points in between training data samples and force the model to try to fit those, too. There are several other ways, all of them somewhat related to these. But the best favor we can do ourselves, at least as a first move, is to make our model simpler. From an intuitive standpoint, a simpler model may not fit the training data as perfectly as a more complicated model would, but it will likely behave more regularly in between data points.
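As a concrete sketch of the first option (a penalization term; we don't actually do this in this chapter, and l2_lambda is just a hypothetical penalty strength), an L2 penalty on the parameters can be added to the data loss before calling backward:

# Sketch (not used in this chapter): add an L2 penalty on the parameters.
l2_lambda = 0.001
loss = loss_fn(model(t_un, *params), t_c) + l2_lambda * (params ** 2).sum()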

We've got some nice trade-offs here. On the one hand, we need the model to have enough capacity for it to fit the training set. On the other, we need the model to avoid overfitting. Therefore, in order to choose the right size for a neural network model in terms of parameters, the process is based on two steps: increase the size until it fits, and then scale it down until it stops overfitting.

We'll see more about this in chapter 12--we'll discover that our life will be a balancing act between fitting and overfitting. For now, let's get back to our example and see how we can split the data into a training set and a validation set. We'll do it by shuffling t_u and t_c the same way and then splitting the resulting shuffled tensors into two parts.

Splitting a dataset

Shuffling the elements of a tensor amounts to finding a permutation of its indices. The randperm function does exactly this:

# In[12]:
n_samples = t_u.shape[0]
n_val = int(0.2 * n_samples)
 
shuffled_indices = torch.randperm(n_samples)
 
train_indices = shuffled_indices[:-n_val]
val_indices = shuffled_indices[-n_val:]
 
train_indices, val_indices           # these come from a random permutation, so your values may differ
 
# Out[12]:
(tensor([9, 6, 5, 8, 4, 7, 0, 1, 3]), tensor([ 2, 10]))

We just got index tensors that we can use to build training and validation sets, starting from the data tensors:

# In[13]:
train_t_u = t_u[train_indices]
train_t_c = t_c[train_indices]

val_t_u = t_u[val_indices]
val_t_c = t_c[val_indices]

train_t_un = 0.1 * train_t_u     # rescaled inputs, as in the earlier sections
val_t_un = 0.1 * val_t_u
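
As a quick sanity check (not part of the original listing), we can verify that the shuffled split is well formed: the two index tensors are disjoint and together cover every sample. This assumes n_samples, train_indices, and val_indices from the previous listing are still in scope:

assert set(train_indices.tolist()).isdisjoint(val_indices.tolist())    # no sample appears in both sets
assert len(train_indices) + len(val_indices) == n_samples              # together they cover all samples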

Our training loop doesn't really change. We just want to additionally evaluate the validation loss at every epoch, to have a chance of recognizing whether we're overfitting:

# In[14]:
def training_loop(n_epochs, optimizer, params, train_t_u, val_t_u,
                  train_t_c, val_t_c):
    for epoch in range(1, n_epochs + 1):
        train_t_p = model(train_t_u, *params)        # forward pass and loss on the training set
        train_loss = loss_fn(train_t_p, train_t_c)
 
        val_t_p = model(val_t_u, *params)            # same forward pass and loss, on the validation set
        val_loss = loss_fn(val_t_p, val_t_c)
 
        optimizer.zero_grad()
        train_loss.backward()                        # backward only on the training loss; no val_loss.backward()
        optimizer.step()
 
        if epoch <= 3 or epoch % 500 == 0:
            print(f"Epoch {epoch}, Training loss {train_loss.item():.4f},"
                  f" Validation loss {val_loss.item():.4f}")
 
    return params

# In[15]:
params = torch.tensor([1.0, 0.0], requires_grad=True)
learning_rate = 1e-2
optimizer = optim.SGD([params], lr=learning_rate)
 
training_loop(
    n_epochs = 3000,
    optimizer = optimizer,
    params = params,
    train_t_u = train_t_un,                          # as before with SGD, we pass the rescaled inputs
    val_t_u = val_t_un,
    train_t_c = train_t_c,
    val_t_c = val_t_c)
 
# Out[15]:
Epoch 1, Training loss 66.5811, Validation loss 142.3890
Epoch 2, Training loss 38.8626, Validation loss 64.0434
Epoch 3, Training loss 33.3475, Validation loss 39.4590
Epoch 500, Training loss 7.1454, Validation loss 9.1252
Epoch 1000, Training loss 3.5940, Validation loss 5.3110
Epoch 1500, Training loss 3.0942, Validation loss 4.1611
Epoch 2000, Training loss 3.0238, Validation loss 3.7693
Epoch 2500, Training loss 3.0139, Validation loss 3.6279
Epoch 3000, Training loss 3.0125, Validation loss 3.5756

tensor([  5.1964, -16.7512], requires_grad=True)

Here we are not being entirely fair to our model. The validation set is really small, so the validation loss will only be meaningful up to a point. In any case, we note that the validation loss is higher than our training loss, although not by an order of magnitude. We expect a model to perform better on the training set, since the model parameters are being shaped by the training set. Our main goal is to also see both the training loss and the validation loss decreasing. While ideally both losses would be roughly the same value, as long as the validation loss stays reasonably close to the training loss, we know that our model is continuing to learn generalized things about our data. In figure 5.14, case C is ideal, while D is acceptable. In case A, the model isn't learning at all; and in case B, we see overfitting. We'll see more meaningful examples of overfitting in chapter 12.

Figure 5.14 Overfitting scenarios when looking at the training (solid line) and validation (dotted line) losses. (A) Training and validation losses do not decrease; the model is not learning due to no information in the data or insufficient capacity of the model. (B) Training loss decreases while validation loss increases: overfitting. (C) Training and validation losses decrease exactly in tandem. Performance may be improved further as the model is not at the limit of overfitting. (D) Training and validation losses have different absolute values but similar trends: overfitting is under control.

5.5.4 Autograd nits and switching it off

From the previous training loop, we can appreciate that we only ever call backward on train_loss. Therefore, errors will only ever backpropagate based on the training set--the validation set is used to provide an independent evaluation of the accuracy of the model's output on data that wasn't used for training.

The curious reader will have an embryo of a question at this point. The model is evaluated twice--once on train_t_u and once on val_t_u--and then backward is called. Won't this confuse autograd? Won't backward be influenced by the values generated during the pass on the validation set?

Luckily for us, this isn't the case. The first line in the training loop evaluates model on train_t_u to produce train_t_p. Then train_loss is evaluated from train_t_p. This creates a computation graph that links train_t_u to train_t_p to train_loss. When model is evaluated again on val_t_u, it produces val_t_p and val_loss. In this case, a separate computation graph will be created that links val_t_u to val_t_p to val_loss. Separate tensors have been run through the same functions, model and loss_fn, generating separate computation graphs, as shown in figure 5.15.

Figure 5.15 Diagram showing how gradients propagate through a graph with two losses when .backward is called on one of them

The only tensors these two graphs have in common are the parameters. When we call backward on train_loss, we run backward on the first graph. In other words, we accumulate the derivatives of train_loss with respect to the parameters based on the computation generated from train_t_u.

Jl kw (ecnorcitlry) edalcl backward nv val_loss as kffw, wx owuld maelcutuac roy vrsevetdiia el val_loss jwbr rpeesct rk rqo pamrtseera nk rgk mzkz fxlz dneso. Ameebmre rgx zero_grad itnhg, yherewb tseigndar zto uctaelcudam ne brv lv szop oehtr ryvee rjkm wo cfsf backward lsunse wv skxt drk brk ndaegtsir tleilxiycp? Mfxf, tpvx sheinogmt touo iairmsl udlwo panpeh: lilcagn backward nk val_loss dowul bocf re agtrseind nmagtliaccuu jn ukr params ontser, en urv vl setho rgetedena durngi ryo train_loss.backward() sfcf. Jn cjrp caos, xw luwod lfyctefviee irant gtx elmod nk xrp lohwe dtasate (bpkr aigrnnit zng alnvdtiiao), iensc qkr tginraed loudw nepded xn ygxr. Ftrtey intgitnesre.

There's another element for discussion here. Since we're never calling backward on val_loss, why are we building the graph in the first place? We could in fact just call model and loss_fn as plain functions, without tracking the computation. However optimized, building the autograd graph comes with additional costs that we could totally forgo during the validation pass, especially when the model has millions of parameters.

In order to address this, PyTorch allows us to switch off autograd when we don't need it, using the torch.no_grad context manager.12 We won't see any meaningful advantage in terms of speed or memory consumption on our small problem. However, for larger models, the differences can add up. We can make sure this works by checking the value of the requires_grad attribute on the val_loss tensor:

12.We should not think that using torch.no_grad necessarily implies that the outputs do not require gradients. There are particular circumstances (involving views, as discussed in section 3.8.1) in which requires_grad is not set to False even when created in a no_grad context. It is best to use the detach function if we need to be sure.

# In[16]:
def training_loop(n_epochs, optimizer, params, train_t_u, val_t_u,
                  train_t_c, val_t_c):
    for epoch in range(1, n_epochs + 1):
        train_t_p = model(train_t_u, *params)
        train_loss = loss_fn(train_t_p, train_t_c)
 
        with torch.no_grad():                         # autograd is switched off inside this block
            val_t_p = model(val_t_u, *params)
            val_loss = loss_fn(val_t_p, val_t_c)
            assert val_loss.requires_grad == False    # checks that no gradient tracking happened here
 
        optimizer.zero_grad()
        train_loss.backward()
        optimizer.step()

Using the related set_grad_enabled context, we can also condition the code to run with autograd enabled or disabled, according to a Boolean expression--typically indicating whether we are running in training or inference mode. We could, for instance, define a calc_forward function that takes data as input and runs model and loss_fn with or without autograd according to a Boolean is_train argument:

# In[17]:
def calc_forward(t_u, t_c, is_train):
    with torch.set_grad_enabled(is_train):
        t_p = model(t_u, *params)
        loss = loss_fn(t_p, t_c)
    return loss
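
A hypothetical usage of calc_forward (not one of the book's listings), assuming model, loss_fn, params, and the split tensors from the earlier listings are in scope:

train_loss = calc_forward(train_t_un, train_t_c, True)    # autograd on: a graph is built, backward is possible
val_loss = calc_forward(val_t_un, val_t_c, False)         # autograd off: no graph is tracked

assert train_loss.requires_grad
assert not val_loss.requires_grad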

5.6 Conclusion

We started this chapter with a big question: how is it that a machine can learn from examples? We spent the rest of the chapter describing the mechanism with which a model can be optimized to fit data. We chose to stick with a simple model in order to see all the moving parts without undue complications.

Now that we've had our fill of appetizers, in chapter 6 we'll finally get to the main course: using a neural network to fit our data. We'll work on solving the same thermometer problem, but with the more powerful tools provided by the torch.nn module. We'll adopt the same spirit of using this small problem to illustrate the larger uses of PyTorch. The problem doesn't need a neural network to reach a solution, but it will allow us to develop a simpler understanding of what's required to train a neural network.


5.7 Exercise

  1. Redefine the model to be w2 * t_u ** 2 + w1 * t_u + b.
    1. What parts of the training loop, and so on, need to change to accommodate this redefinition?
    2. What parts are agnostic to swapping out the model?
    3. Is the resulting loss higher or lower after training?
    4. Is the actual result better or worse?

5.8 Summary

  • Linear models are the simplest reasonable model to use to fit data.
  • Convex optimization techniques can be used for linear models, but they do not generalize to neural networks, so we focus on stochastic gradient descent for parameter estimation.
  • Deep learning can be used for generic models that are not engineered for solving a specific task, but instead can be automatically adapted to specialize themselves on the problem at hand.
  • Learning algorithms amount to optimizing parameters of models based on observations. A loss function is a measure of the error in carrying out a task, such as the error between predicted outputs and measured values. The goal is to get the loss function as low as possible.
  • The rate of change of the loss function with respect to the model parameters can be used to update the same parameters in the direction of decreasing loss.
  • The optim module in PyTorch provides a collection of ready-to-use optimizers for updating parameters and minimizing loss functions.
  • Optimizers use the autograd feature of PyTorch to compute the gradient for each parameter, depending on how that parameter contributes to the final output. This allows users to rely on the dynamic computation graph during complex forward passes.
  • Context managers like with torch.no_grad(): can be used to control autograd’s behavior.
  • Data is often split into separate sets of training samples and validation samples. This lets us evaluate a model on data it was not trained on.
  • Overfitting a model happens when the model’s performance continues to improve on the training set but degrades on the validation set. This is usually due to the model not generalizing, and instead memorizing the desired outputs for the training set.
