12 GPU languages: getting down to basics
This chapter covers
- Understanding the current landscape of native GPU languages
- Creating simple GPU programs in each language
- Tackling more complex multi-kernel operations
- Porting between various GPU languages
This chapter covers lower-level languages for programming GPUs. We call these native languages because they directly reflect features of the target GPU hardware. We cover two of these languages, CUDA and OpenCL, that have become widely used. We also cover HIP, a newer variant for AMD GPUs. In contrast to the pragma-based implementations, these GPU languages rely less on the compiler. You should use these languages for more fine-tuned control of your program's performance. How are these languages different from those presented in chapter 11? Our distinction is that these languages have grown up from the characteristics of the GPU and CPU hardware, while the OpenACC and OpenMP languages started with high-level abstractions and rely on a compiler to map them to different hardware.
The set of native GPU languages, CUDA, OpenCL, and HIP, requires a separate source to be created for the GPU kernel. The separate source code is often very similar to the CPU code. The challenge of having two different sources to maintain is a major difficulty. If the native GPU language only supports one type of hardware, then there can be even more source variants to maintain if you want to run on more than one vendor's GPU. Some applications have implemented their algorithms in multiple GPU languages and CPU languages.
Thus, you can understand the critical need for more portable GPU programming languages. Thankfully, portability has been getting more attention with some of the newer GPU languages. OpenCL was the first to try with an open standard language that would run on a variety of GPU hardware and even CPUs. After an initial splash, OpenCL has not gotten as widespread an acceptance as originally hoped for. Another language, HIP, was designed by AMD as a more portable version of CUDA that would generate code for AMD's GPUs. As part of AMD's portability initiative, support for GPUs from other vendors was included.
The difference between these native languages and higher-level languages is blurring as new languages are introduced. The SYCL language, originally a C++ layer on top of OpenCL, is typical of these newer, more portable languages. Along with the Kokkos and Raja languages, SYCL supports a single source for both CPU and GPU. We'll touch on these languages at the end of the chapter.
Figure 12.1 shows us the current picture of the language interoperability for the GPU languages that we will cover in this chapter.
Figure 12.1 The interoperability map for the GPU languages shows an increasingly complex situation. Four GPU languages are shown at the top with the various hardware devices at the bottom. The arrows show the code generation pathways from the languages to the hardware. The dashed lines are for hardware that is still in development.

The focus on language interoperability is gaining traction as more diversity of GPUs appears in the largest HPC installations. The top Department of Energy (DOE) HPC systems, Sierra and Summit, are provisioned with Nvidia GPUs. In 2021, Argonne's Aurora system with Intel GPUs and Oak Ridge's Frontier system with AMD GPUs will be added to the list of DOE HPC systems. With the introduction of the Aurora system, SYCL has emerged from near obscurity to being a major player with multiple implementations. SYCL was originally developed to provide a more natural C++ layer on top of OpenCL. The reason for the sudden emergence of SYCL was its adoption by Intel as part of the OneAPI programming model for the Intel GPUs on the Aurora system. Because of SYCL's new-found importance, we cover SYCL in section 12.4. There has been a similar growth in interest in other languages and libraries that provide portability across the GPU landscape.
Mv vun pvr ertchpa rwju z efrib vefv rs s lepuco le eesht pnmaercofre yotiirabtpl tmsssey, Qskook nzb CYIT, srbr tovw crteaed re zooz orp fiitfuclyd lk ugnnrin vn s gjwx anerg lv aeawhdrr, eltm XFNc xr UZDz. Rgkq wtko zr c hytlgisl ehirgh lvlee el ibctsnoaart, drh imsorep s siglne seourc rsrp jffw ndt ewyeevrrhe. Btjvg mtpvnloeeed ccd gnvo c oajrm Qemarptetn xl Zrngey eotfrf kr tpourps xrq nptogir xl large cficesniit ncaaiitppslo rv ernew eardrhwa. Rvu jmc el Tssi ncy Ookkso aj z nex-jmro itewerr er ereact s lgsein-usreco epzv vszg rzpr zj prbtoela cnp nbtlianaieam htgrouh s rjmv lv garte hecgna jn rdrwehaa gesdin.
Last, we want to give you guidance on how to approach this chapter. We cover a lot of different languages in a short space. The proliferation of languages reflects the lack of cooperation among language developers at this point in time, as developers chase their immediate goals and hardware concerns. Rather than treat these as different languages, think of them as slightly different dialects of one or two languages. We recommend that you seek to learn a couple of these languages and appreciate the differences and similarities with the others. We will be comparing and contrasting the languages to help you see that they are not all that different once you get over the particular syntax of each and their quirks. We do expect that the languages will merge into a more common form, as the current situation is not sustainable. We already see the beginnings of that with the push for more language portability driven by the needs of large applications.
A GPU programming language must have several basic features. It is helpful to understand what these features are so that you can recognize them in each GPU language. We summarize the necessary GPU language features here.
Detecting the accelerator device - The language must provide a way to detect the accelerator devices and a way to choose between those devices. Some languages give more control over the selection of devices than others. Even for a language such as CUDA, which just looks for an Nvidia GPU, there must be a way to handle multiple GPUs on a node.
Support for writing device kernels - The language must provide a way to generate the low-level instructions for GPUs or other accelerators. GPUs provide nearly identical basic operations as a CPU, so the kernel language should not be dramatically different. Rather than invent a new language, the most straightforward way is to leverage current programming languages and compilers to generate the new instruction set. GPU languages have done this by adopting a particular version of the C or C++ language as a basis for their system. CUDA originally was based on the C programming language, but now is based on C++ and has some support for the Standard Template Library (STL). OpenCL is based on the C99 standard and has released a new specification with C++ support.
The language design also has to address whether to keep the host and device source code in the same file or in different files. Either way, the compiler must distinguish between the host and device sources and must provide a way to generate the instruction set for the different hardware. The compiler must even decide when to generate the instruction set. For example, OpenCL waits for the device to be selected and then generates the instruction set with a just-in-time compiler approach.
Mechanism to call device kernels from the host - OK, now we have the device code, but we also have to have a way of calling the code from the host. The syntax varies the most among the languages for performing this key operation. But the mechanism is only slightly more complicated than a standard subroutine call.
Memory handling - The language must have support for memory allocation, deallocation, and moving data back and forth from the host to the device and back. The most straightforward way for this is to have a subroutine call for each of these operations. But another way is through the compiler detecting when to move the data and doing it for you behind the scenes. As this is such a major part of GPU programming, innovation continues to occur on the hardware and software side for this function.
Synchronization - A mechanism must be provided to specify the synchronization requirements between the CPU and the GPU. Synchronization operations must also be provided within kernels.
Streams - A complete GPU language allows the scheduling of asynchronous streams of operations along with the explicit dependencies between the kernels and the memory transfer operations.
This is not such a scary list. For the most part, native GPU languages do not look so different than current CPU code. Also, recognizing these commonalities between native GPU language functionality helps you to become comfortable moving from one of these languages to another.
We will begin with a look at two of the low-level GPU languages, CUDA and HIP. These are two of the most common languages for programming GPUs.
Compute Unified Device Architecture (CUDA) is a proprietary language from Nvidia that only runs on their GPUs. First released in 2008, it is currently the dominant native programming language for GPUs. With a decade of development, CUDA has a rich set of features and performance enhancements. The CUDA language closely reflects the architecture of the Nvidia GPU. It does not purport to be a general accelerator language. Still, the concepts of most accelerators are similar enough for the CUDA language design to be applicable.
The AMD GPUs (formerly ATI) have had a series of short-lived programming languages. They have finally settled on a CUDA look-a-like that can be generated by HIPifying CUDA code with their HIP compiler. This is part of the ROCm suite of tools that provides extensive portability between GPU languages, including the OpenCL language for GPUs (and CPUs) discussed in section 12.3.
We'll start out with how to build and compile a simple CUDA application that will run on the GPU. We'll use the stream triad example we have been using throughout the book that implements a loop to calculate C = A + scalar * B.
The CUDA compiler splits the source code files and passes the regular C++ code to the underlying compiler. It will then compile the remaining CUDA code. Code from these two paths will then be linked together into a single executable.
You may first need to install the CUDA software[1]. Each release of CUDA works with a limited range of compiler versions. As of CUDA 10.2, GCC compilers up through version 8 are supported. If you are working with multiple parallel languages and packages, this constant battling of compiler version issues is perhaps one of the most frustrating things about CUDA. But on a positive note, you can use much of your regular toolchain and build system with just the version constraints and a few special additions. We'll show three different approaches starting with a simple makefile and then a couple of different ways of using cmake. These examples are at https://github.com/EssentialsofParallelComputing/Chapter12.
Example: Using the simple makefile for CUDA.
You can select the simple makefile by copying or linking it to Makefile, the default file name for make. The makefile itself is shown in listing 12.1.
- ln -s Makefile.simple Makefile
- Build the application with make
- Run the application with ./StreamTriad
Listing 12.1 Simple CUDA makefile
CUDA/StreamTriad/Makefile.simple
 1 all: StreamTriad
 2
 3 NVCC = nvcc                                              #A
 4 #NVCC_FLAGS = -arch=sm_30                                #B
 5 #CUDA_LIB = <path>                                       #B
 6 CUDA_LIB=`which nvcc | sed -e 's!/bin/nvcc!!'`/lib
 7 CUDA_LIB64=`which nvcc | sed -e 's!/bin/nvcc!!'`/lib64
 8
 9 %.o : %.cu                                               #C
10 	${NVCC} ${NVCC_FLAGS} -c $< -o $@                    #C
11
12 StreamTriad: StreamTriad.o timer.o
13 	${CXX} -o $@ $^ -L${CUDA_LIB} -lcudart               #D
14
15 clean:
16 	rm -rf StreamTriad *.o
The key addition is a pattern rule on lines 9-10 to convert a file with a .cu suffix into an object file. We use the Nvidia compiler, nvcc, for this operation. We then need to add the CUDA runtime library, cudart, to the link line. Lines 4 and 5 can be used to specify a particular Nvidia GPU architecture and a special path to the CUDA libraries.
Definition: pattern rule
- a specification to the make utility that gives a general rule for how to convert any file with one suffix pattern to a file with another suffix pattern.
CUDA has extensive support in the cmake build system. We'll cover both the old style support and the new modern cmake approach that has recently emerged. In listing 12.2, we'll look at the old style method. It has the advantage of more portability for systems with older cmake versions and the automatic detection of the Nvidia GPU architecture. This latter feature of detecting the hardware device is such a convenience that the old style cmake is the recommended approach at present.
Example: The old style cmake for CUDA
To use this build system in the example, link the CMakeLists_old.txt to CMakeLists.txt.
- ln -s CMakeLists_old.txt CMakeLists.txt
- mkdir build && cd build
- cmake ..
- make
Listing 12.2 Old style CUDA cmake file
CUDA/StreamTriad/CMakeLists_old.txt
 1 cmake_minimum_required (VERSION 2.8)                     #A
 2 project (StreamTriad)
 3
 4 find_package(CUDA REQUIRED)                              #B
 5
 6 set (CMAKE_CXX_STANDARD 11)
 7 set (CMAKE_CUDA_STANDARD 11)
 8
 9 # sets CMAKE_{C,CXX}_FLAGS from CUDA compile flags. Includes DEBUG and RELEASE
10 set (CUDA_PROPAGATE_HOST_FLAGS ON) # default is on
11 set (CUDA_SEPARABLE_COMPILATION ON) # default is off     #C
12
13 if (CMAKE_VERSION VERSION_GREATER "3.9.0")
14    cuda_select_nvcc_arch_flags(ARCH_FLAGS)               #D
15 endif()
16
17 set (CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS} -O3 ${ARCH_FLAGS}) #E
18
19 # Adds build target of StreamTriad with source code files
20 cuda_add_executable(StreamTriad StreamTriad.cu timer.c timer.h) #F
21
22 if (APPLE)
23    set_property(TARGET StreamTriad PROPERTY BUILD_RPATH ${CMAKE_CUDA_IMPLICIT_LINK_DIRECTORIES})
24 endif (APPLE)
25
26 # Cleanup
27 add_custom_target(distclean COMMAND rm -rf CMakeCache.txt CMakeFiles
28    Makefile cmake_install.cmake StreamTriad.dSYM ipo_out.optrpt)
29
30 # Adds a make clean_cuda_depends target -- invoke with "make clean_cuda_depends"
31 CUDA_BUILD_CLEAN_TARGET()
Much of the cmake build system is pretty standard. The separable compilation attribute on line 11 is suggested for a more robust build system for general development. You can then try turning it off at a later stage to save a few registers in the CUDA kernels to get a small optimization in the generated code. The CUDA defaults are for performance, not for a more general, robust build. The automatic detection of the Nvidia GPU architecture on line 14 is a significant convenience that will keep you from having to manually modify your makefile.
With version 3.0, cmake has been undergoing a fairly major revision to its structure to what they call modern cmake. The key attributes of this style are a more integrated system and a per-target application of attributes. Nowhere is it more apparent than in its support of CUDA. Let's take a look at listing 12.3 for an example of how to use it.
Example: The modern, new style cmake support for CUDA
To use this build system in the example, link the CMakeLists_new.txt to CMakeLists.txt
- ln -s CMakeLists_new.txt CMakeLists.txt
- mkdir build && cd build
- cmake ..
- make
Listing 12.3 New style CUDA cmake file
CUDA/StreamTriad/CMakeLists_new.txt
 1 cmake_minimum_required (VERSION 3.8)                     #A
 2 project (StreamTriad)
 3
 4 enable_language(CXX CUDA)                                #B
 5
 6 set (CMAKE_CXX_STANDARD 11)
 7 set (CMAKE_CUDA_STANDARD 11)
 8
 9 #set (ARCH_FLAGS -arch=sm_30)                            #C
10 set (CMAKE_CUDA_FLAGS ${CMAKE_CUDA_FLAGS}; "-O3 ${ARCH_FLAGS}") #D
11
12 # Adds build target of StreamTriad with source code files
13 add_executable(StreamTriad StreamTriad.cu timer.c timer.h)
14
15 set_target_properties(StreamTriad PROPERTIES CUDA_SEPARABLE_COMPILATION ON) #E
16
17 if (APPLE)
18    set_property(TARGET StreamTriad PROPERTY BUILD_RPATH ${CMAKE_CUDA_IMPLICIT_LINK_DIRECTORIES})
19 endif(APPLE)
20
21 # Cleanup
22 add_custom_target(distclean COMMAND rm -rf CMakeCache.txt CMakeFiles
23    Makefile cmake_install.cmake StreamTriad.dSYM ipo_out.optrpt)
The first thing to note with this modern cmake approach is how much simpler it is than the old style. The key is the enabling of CUDA as a language on line 4. From then on, little additional needs to be done. We can set the flags to compile for a specific GPU architecture as shown in lines 9-10. However, we don't have an automatic way to detect the architecture yet with the modern cmake style. Without an architecture flag, the compiler will generate code and optimize for the sm_30 GPU device. The sm_30 generated code will run on any device from Kepler K40 or newer, but it will not be optimized for the latest architectures. You can also specify multiple architectures in one compile. Compiles will be slower and the generated executable will be larger. We can also set the separable compilation attribute for CUDA, but in a different syntax in which it applies to the specific target. The optimization flag on line 10, -O3, will only be sent to the host compiler for the regular C++ code. The default optimization level for CUDA code is -O3 already and seldom needs to be modified.
Overall, the process of building a CUDA program is easy and getting easier. Expect changes to the build to continue, however. Clang is adding native support for compiling CUDA code to give you another option besides the Nvidia compiler.
Now let's move on to the source code. We'll begin with the kernel for the GPU in listing 12.4.
Listing 12.4 CUDA version of stream triad: the kernel
CUDA/StreamTriad/StreamTriad.cu
 2 __global__ void StreamTriad(
 3       const int n,
 4       const double scalar,
 5       const double *a,
 6       const double *b,
 7             double *c)
 8 {
 9    int i = blockIdx.x*blockDim.x+threadIdx.x;            #A
10
11    // Protect from going out-of-bounds
12    if (i >= n) return;                                   #B
13
14    c[i] = a[i] + scalar*b[i];                            #C
15 }
As is typical with GPU kernels, we strip the for loop from the computational block. This leaves the loop body on line 14. We have to add the conditional at line 12 to prevent accessing out-of-bounds data. Without this protection, kernels may randomly crash without a message. And then on line 9, we get the global index from the block and thread variables set by the CUDA runtime. Adding the __global__ attribute to the subroutine tells the compiler that this is a GPU kernel that will be called from the host.
Meanwhile, on the host side, we have to set up the memory and make the kernel call. This process is shown in listing 12.5.
Listing 12.5 CUDA version of stream triad: setup and tear-down
CUDA/StreamTriad/StreamTriad.cu
31    // allocate host memory and initialize
32    double *a = (double *)malloc(stream_array_size*sizeof(double)); #A
33    double *b = (double *)malloc(stream_array_size*sizeof(double)); #A
34    double *c = (double *)malloc(stream_array_size*sizeof(double)); #A
35
36    for (int i=0; i<stream_array_size; i++) {
37       a[i] = 1.0;                                        #B
38       b[i] = 2.0;                                        #B
39    }
40
41    // allocate device memory. suffix of _d indicates a device pointer
42    double *a_d, *b_d, *c_d;
43    cudaMalloc(&a_d, stream_array_size*sizeof(double));   #C
44    cudaMalloc(&b_d, stream_array_size*sizeof(double));   #C
45    cudaMalloc(&c_d, stream_array_size*sizeof(double));   #C
46
47    // setting block size and padding total grid size to get even block sizes
48    int blocksize = 512;                                  #D
49    int gridsize = (stream_array_size + blocksize - 1)/blocksize; #D

      < ... timing loop ... code shown below in listing 12.6 >

78    printf("Average runtime is %lf msecs data transfer is %lf msecs\n",
79       tkernel_sum/NTIMES, (ttotal_sum - tkernel_sum)/NTIMES);
80
81    cudaFree(a_d);                                        #E
82    cudaFree(b_d);                                        #E
83    cudaFree(c_d);                                        #E
84
85    free(a);                                              #F
86    free(b);                                              #F
87    free(c);                                              #F
88 }
First we allocate memory on the host and initialize it on lines 31-39. We also need a corresponding memory space on the GPU to hold the arrays while the GPU is operating on them. For that, we use the cudaMalloc routine on lines 43-45. Now we come to some interesting lines from 47-49 that are needed solely for the GPU. The block size is the size of the workgroup on the GPU. This is known as tile size, block size, or workgroup size, depending on the GPU programming language being used (see table 10.1). The next line that calculates the grid size is characteristic of GPU code. We won't always have an array size that is an even integer multiple of the block size. So, we need to have an integer that is equal to or greater than the fractional number of blocks. Let's work through an example to understand what is being done.
Example: Calculating block size for the GPU
On line 3 we calculate the fractional number of blocks. For this example with an array size of 1000, it is 1.95 blocks. Rather than truncate this to 1, which is what would happen with the default application of integer arithmetic, we need to round up to 2. If we just calculated the array size divided by the block size, we would get the integer truncation. So we have to cast each of them to a floating point value to get a floating point division. We actually only need to cast one of the values and the C/C++ standard requires the compiler to promote the other, but in our programming conventions, a type conversion must be explicitly called for or it is a programming error. Compilers often don't flag these cases, but they can mask unintended situations. The C ceil function used on lines 4 and 5 rounds up to the next integer value equal to or greater than the floating point number. We can get the same result with integer arithmetic by adding one less than the block size and then performing the integer division with truncation as is done on line 6. We choose to use this version, because the integer form does not require any floating point operations and should be faster.
1 int stream_array_size = 1000;
2 int blocksize = 512;
3 float frac_blocks = (float)stream_array_size/(float)blocksize;
     >>> frac_blocks = 1.95
4 int nblocks = ceil(frac_blocks);
     >>> nblocks = 2
or
5 int nblocks = ceil((float)stream_array_size/(float)blocksize);
or
6 int nblocks = (stream_array_size + blocksize - 1)/blocksize;
Now all the blocks but the last block will have 512 values. The last block will be size 512, but will contain only 488 data items. The out-of-bounds check on line 12 of listing 12.4 will keep us from getting in trouble with this partially filled block.
The last few lines in listing 12.5 free the device pointers and the host pointers. You must be careful to use cudaFree for the device pointers and the C library free for host pointers.
All we have left is to copy memory to the GPU, call the GPU kernel, and copy the memory back. We do this in a timing loop in listing 12.6 that can be executed multiple times to get a better measurement. Sometimes the first call to a GPU will be slower due to initialization costs. We can amortize it by running several iterations. If this is not sufficient, you can also throw away the timing from the first iteration.
Listing 12.6 CUDA version of stream triad: kernel call and timing loop
CUDA/StreamTriad/StreamTriad.cu
51    for (int k=0; k<NTIMES; k++){
52       cpu_timer_start(&ttotal);
53       cudaMemcpy(a_d, a, stream_array_size*sizeof(double), cudaMemcpyHostToDevice); #A
54       cudaMemcpy(b_d, b, stream_array_size*sizeof(double), cudaMemcpyHostToDevice); #A
55       // cuda memcopy to device returns after buffer available
56       cudaDeviceSynchronize();                           #B
57
58       cpu_timer_start(&tkernel);
59       StreamTriad<<<gridsize, blocksize>>>(stream_array_size, scalar, a_d, b_d, c_d); #C
60       cudaDeviceSynchronize();                           #D
61       tkernel_sum += cpu_timer_stop(tkernel);
62
63       // cuda memcpy from device to host blocks for completion so no need for synchronize
64       cudaMemcpy(c, c_d, stream_array_size*sizeof(double), cudaMemcpyDeviceToHost); #E
65       ttotal_sum += cpu_timer_stop(ttotal);
66       // check results and print errors if found. limit to only 10 errors per iteration
67       for (int i=0, icount=0; i<stream_array_size && icount < 10; i++){
68          if (c[i] != 1.0 + 3.0*2.0) {
69             printf("Error with result c[%d]=%lf on iter %d\n",i,c[i],k);
70             icount++;
71          } // if not correct, print error
72       } // result checking loop
73    } // timing for loop
The pattern in the timing loop is composed of the following steps:
- copy data to the GPU (lines 53-54),
- call the GPU kernel to operate on the arrays (line 59), and
- copy the data back (line 64).
We add some synchronization and timer calls to get an accurate measurement of the GPU kernel. At the end of the loop, we then put in a check for the correctness of the result. Once this goes into production, we can remove the timing, synchronization, and the error check.
The call to the GPU kernel can easily be spotted by the triple chevrons, or angle brackets. If we ignore the chevrons and the variables contained within them, the line has a typical C subroutine call syntax:
StreamTriad(stream_array_size, scalar, a_d, b_d, c_d);
The values within the parentheses are the arguments to be passed to the GPU kernel. So what are the arguments contained within the chevrons?
<<<gridsize, blocksize>>>
These are the arguments to the CUDA compiler on how to break up the problem into blocks for the GPU. Earlier, on lines 48-49 of listing 12.5, we set the block size and calculated the number of blocks, or grid size, to contain all the data in the array. The arguments in this case are one dimensional. We can also have two- or three-dimensional arrays by declaring and setting these arguments as follows for an N x N matrix.
dim3 blocksize(16,16);
dim3 blocksize(8,8,8);
dim3 gridsize( (N + blocksize.x - 1)/blocksize.x,
               (N + blocksize.y - 1)/blocksize.y );
We can speed up the memory transfers by eliminating a data copy. This is possible through a deeper understanding of how the operating system functions. Memory that is transferred over the network must be in a fixed location that cannot be moved during the operation. Normal memory allocations are placed into pageable memory, or memory that can be moved on demand. The memory transfer must first move the data into pinned memory that cannot be moved. We first saw the use of pinned memory in section 9.4.2 when benchmarking memory movement over the PCI bus. We can eliminate a memory copy by allocating our arrays in pinned memory rather than pageable memory. Figure 9.8 shows the difference in performance that we might obtain. Now, how do we make this happen? Well, CUDA gives us a function call, cudaMallocHost, that does this for us. It is a straight-up replacement for the regular system malloc routine, with a slight change in arguments where the pointer is returned as an argument as shown below.
double *x_host = (double *)malloc(stream_array_size*sizeof(double));

cudaMallocHost((void**)&x_host, stream_array_size*sizeof(double));
Is there a downside to using pinned memory? Well, if you do use a lot of pinned memory, there is no place to swap in another application. Swapping out the memory for one application and bringing in another is a huge convenience for users. This process is called memory paging.
Definition: Memory paging
- in multi-user, multi-application operating systems, the process of moving memory pages temporarily out to disk so that another process can take its place.
Memory paging is an important advance in operating systems to make it seem like you have more memory than you really do. For example, it allows you to temporarily start up Excel while working on Word and not have to close down your original application. It does this by writing your data out to disk and then reading it back when you return to Word. But this operation is very expensive, so in high performance computing we avoid memory paging because of the severe performance penalty that it incurs.
Some heterogeneous computing systems with both a CPU and a GPU are implementing unified memory.
Definition: Unified memory
- memory that has the appearance of being a single address space for both the CPU and the GPU.
By now, you have seen that the handling of separate memory spaces on the CPU and the GPU introduces much of the complexity of writing GPU code. With unified memory, the GPU runtime system handles it for you. There may still be two separate arrays, but the data is moved automatically. On integrated GPUs, there is the possibility that memory does not have to be moved at all. Still, it is advisable to write your programs with explicit memory copies so that your programs are portable to systems without unified memory. The memory copy will just be skipped if it is not needed on the architecture.
Whenever we need cooperation among GPU threads, things get complicated with lower-level, native GPU languages. We'll look at a simple summation example to see how we deal with this. The example requires two separate CUDA kernels and is shown in listings 12.7 through 12.10. Listing 12.7 shows the first pass where we sum up the values within a thread block and store the result back out to the reduction scratch array, redscratch.
Listing 12.7 First pass of sum reduction operation
CUDA/SumReduction/SumReduction.cu (four parts)
23 __global__ void reduce_sum_stage1of2(
24    const int      isize,      // 0  Total number of cells.
25          double  *array,      // 1
26          double  *blocksum,   // 2
27          double  *redscratch) // 3
28 {
29    extern __shared__ double spad[];                      #A
30    const unsigned int giX = blockIdx.x*blockDim.x+threadIdx.x;
31    const unsigned int tiX = threadIdx.x;
32
33    const unsigned int group_id = blockIdx.x;
34
35    spad[tiX] = 0.0;                                      #B
36    if (giX < isize) {                                    #B
37       spad[tiX] = array[giX];                            #B
38    }                                                     #B
39
40    __syncthreads();                                      #C
41
42    reduction_sum_within_block(spad);                     #D
43
44    // Write the local value back to an array size of the number of groups
45    if (tiX == 0){                                        #E
46       redscratch[group_id] = spad[0];                    #E
47       (*blocksum) = spad[0];
48    }
49 }
We start out the first pass by having all of the threads store their data into a scratch pad array in CUDA shared memory on lines 35 to 38. All threads in the block can access this shared memory. Shared memory can be accessed in one or two processor cycles instead of the hundreds required for main GPU memory. You can think of shared memory as a programmable cache or scratchpad memory. To make sure all the threads have completed the store, we use a synchronization call on line 40. The reduction sum within the block is going to be used in both reduction passes, so we put the code in a device subroutine and call it on line 42. A device subroutine is a subroutine that is to be called from another device subroutine rather than from the host. After the subroutine, the resulting sum is stored back out into a smaller scratch array that we will read in during the second phase. We also store out the result on line 47 in case the second pass can be skipped. Because we cannot access the values in other thread blocks, we have to complete the operation in a second pass in another kernel. In this first pass, we have reduced the length of the data by our block size.
Let's move on to look at the common device code that we mentioned in the first pass. We will need a sum reduction for the CUDA thread block in both passes, so we write it as a general device routine. The code shown in listing 12.8 can also be easily modified for other reduction operators and only needs small changes for HIP and OpenCL.
Listing 12.8 Common sum reduction device kernel
CUDA/SumReduction/SumReduction.cu (four parts)
 1 #define MIN_REDUCE_SYNC_SIZE warpSize                    #A
 2
 3 __device__ void reduction_sum_within_block(double *spad)
 4 {
 5    const unsigned int tiX = threadIdx.x;
 6    const unsigned int ntX = blockDim.x;
 7
 8    for (int offset = ntX >> 1; offset > MIN_REDUCE_SYNC_SIZE; offset >>= 1) {
 9       if (tiX < offset) {                                #B
10          spad[tiX] = spad[tiX] + spad[tiX+offset];
11       }
12       __syncthreads();                                   #C
13    }
14    if (tiX < MIN_REDUCE_SYNC_SIZE) {
15       for (int offset = MIN_REDUCE_SYNC_SIZE; offset > 1; offset >>= 1) {
16          spad[tiX] = spad[tiX] + spad[tiX+offset];
17          __syncthreads();                                #C
18       }
19       spad[tiX] = spad[tiX] + spad[tiX+1];
20    }
21 }
The common device routine that will be called from both passes is defined on line 3. It does a sum reduction within the thread block. The __device__ attribute before the routine indicates that it will be called from a GPU kernel. The basic concept of the routine is a pair-wise reduction tree in O(log n) operations as shown in figure 12.2. The basic reduction tree from the figure is represented by the code on lines 15 to 18. We implement some minor modifications for when the working set is larger than the warp size on lines 8 to 13 and for the final pass level on line 19 to avoid an unnecessary synchronization.
Figure 12.2 Pair-wise reduction tree for a warp sums up values in log n steps.

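The halving pattern in figure 12.2 can be modeled serially on the host. This is a small sketch in plain C (not the book's GPU code) of the pair-wise tree loop, assuming the array length is a power of two:

```c
#include <stddef.h>

// Serial model of the pair-wise reduction tree in figure 12.2.
// Each level halves the working set, just as the GPU offset loop does.
// Assumes len is a power of two; data is overwritten like the scratchpad.
double tree_reduce_sum(double *data, size_t len)
{
   for (size_t offset = len >> 1; offset >= 1; offset >>= 1) {
      for (size_t i = 0; i < offset; i++) {   // all "threads" below offset
         data[i] = data[i] + data[i + offset];
      }
   }
   return data[0];
}
```

On the GPU, the inner loop runs as parallel threads, which is why a synchronization is needed between levels.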
The same pair-wise reduction concept is used for the full thread block, which can be up to 1024 on most GPU devices, though 128 or 256 is more commonly used. But what do you do if your array size is greater than 1024? We add a second pass that will use just a single thread block as shown in listing 12.9.
Listing 12.9 Second pass for reduction operation
CUDA/SumReduction/SumReduction.cu (four parts)
51 __global__ void reduce_sum_stage2of2(
52                 const int isize,
53                 double *total_sum,
54                 double *redscratch)
55 {
56    extern __shared__ double spad[];
57    const unsigned int tiX = threadIdx.x;
58    const unsigned int ntX = blockDim.x;
59
60    int giX = tiX;
61
62    spad[tiX] = 0.0;
63
64    // load the sum from reduction scratch, redscratch
65    if (tiX < isize) spad[tiX] = redscratch[giX]; #A
66
67    for (giX += ntX; giX < isize; giX += ntX) { #B
68       spad[tiX] += redscratch[giX]; #B
69    } #B
70
71    __syncthreads(); #C
72
73    reduction_sum_within_block(spad); #D
74
75    if (tiX == 0) {
76       (*total_sum) = spad[0]; #E
77    }
78 }
To avoid more than two kernels for larger arrays, we just use one thread block and loop on lines 67 to 69 to read and sum any additional data into the shared scratchpad. We use a single thread block because we can synchronize within it, avoiding the need for another kernel call. If we are using thread block sizes of 128 and have a one million element array, the loop will sum about 60 values into each location in shared memory (1000000/128²). The array size is reduced by 128 in the first pass and then we sum into a scratchpad that is size 128, giving us the division by 128 squared. If we use larger block sizes, such as 1024, we could reduce the loop from 60 iterations to a single read. Now we just call the same common thread block reduction that we used before. The result will be the first value in the scratchpad array.
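A quick check of that second-pass arithmetic in plain C, using a hypothetical one million element array: the first pass leaves one partial sum per block, and the second pass folds those into one block-wide scratchpad.

```c
// Counts how many values the second-pass loop folds into each shared
// memory slot. The one-million array size here is an assumed example.
int second_pass_loop_trips(int nsize, int blocksize)
{
   // first pass leaves one partial sum per block (ceiling division)
   int gridsize = (nsize + blocksize - 1) / blocksize;
   // second pass folds gridsize values into one blocksize-wide scratchpad
   return (gridsize + blocksize - 1) / blocksize;
}
```

With a block size of 128 this comes out to roughly 60 loads per slot; with 1024, a single read suffices.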
The last part of this is to set up and call these two kernels from the host. We'll see how this is done in listing 12.10.
Listing 12.10 Host code for CUDA reduction
CUDA/SumReduction/SumReduction.cu (four parts)
100    size_t blocksize = 128; #A
101    size_t blocksizebytes = blocksize*sizeof(double); #A
102    size_t global_work_size = ((nsize + blocksize - 1) /blocksize) * blocksize;
103    size_t gridsize = global_work_size/blocksize; #A
104
105    double *dev_x, *dev_total_sum, *dev_redscratch;
106    cudaMalloc(&dev_x, nsize*sizeof(double)); #B
107    cudaMalloc(&dev_total_sum, 1*sizeof(double)); #B
108    cudaMalloc(&dev_redscratch, gridsize*sizeof(double)); #B
109
110    cudaMemcpy(dev_x, x, nsize*sizeof(double), cudaMemcpyHostToDevice); #C
111
112    reduce_sum_stage1of2<<<gridsize, blocksize, blocksizebytes>>>
          (nsize, dev_x, dev_total_sum, dev_redscratch); #D
113
114    if (gridsize > 1) {
115       reduce_sum_stage2of2<<<1, blocksize, blocksizebytes>>>
             (nsize, dev_total_sum, dev_redscratch); #E
116    }
117
118    double total_sum;
119    cudaMemcpy(&total_sum, dev_total_sum, 1*sizeof(double), cudaMemcpyDeviceToHost);
120    printf("Result -- total sum %lf \n",total_sum);
121
122    cudaFree(dev_redscratch);
123    cudaFree(dev_total_sum);
124    cudaFree(dev_x);
The host code first calculates the sizes for the kernel calls on lines 100 to 103. We then have to allocate the memory for the device arrays. For this operation, we need a scratch array where we will store the sums for each block from the first kernel. We allocate it on line 108 to be the grid size, since that is the number of blocks that we have. We also need a shared memory scratchpad array that is the size of the block. We calculate the size on line 101 and pass it into the kernels on lines 112 and 115 as the third parameter to the chevron operator. The third parameter is an optional parameter, and this is the first time that we have seen it used. Take a look back at listing 12.9 line 56 and listing 12.7 line 29 to see where the corresponding code for the scratchpad is handled on the GPU device.
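The size arithmetic on lines 100 to 103 of listing 12.10 is just ceiling division. A minimal sketch of the same computation in plain C, using the 200-element case from the example run:

```c
#include <stddef.h>

// Mirrors lines 100-103 of listing 12.10: round the global size up to a
// whole number of blocks, then derive the grid size and the shared
// memory scratchpad size in bytes.
void reduction_launch_sizes(size_t nsize, size_t blocksize,
                            size_t *gridsize, size_t *blocksizebytes)
{
   size_t global_work_size = ((nsize + blocksize - 1) / blocksize) * blocksize;
   *gridsize       = global_work_size / blocksize; // one scratch entry per block
   *blocksizebytes = blocksize * sizeof(double);   // shared memory per block
}
```

For nsize = 200 and blocksize = 128 this yields a grid size of 2 and 1024 bytes of shared memory, matching the values printed in the example output below.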
Trying to follow all the convoluted loops can be difficult. So we have created a version of the code that does the same loops on the CPU and prints out its values as it goes along. It is in the CUDA/SumReductionRevealed directory at https://github.com/EssentialsofParallelComputing/Chapter12. We don't have room to show all the code here, but you may find it useful to explore and print out the values as it executes. We show an edited version of the output below.
Example: CUDA/SumReductionRevealed
Calling first pass with gridsize 2 blocksize 128 blocksizebytes 1024
SYNCTHREADS after all values are in shared memory block
Data count is 200
====== ITREE_LEVEL 1 offset 64 ntX is 128 MIN_REDUCE_SYNC_SIZE 32 ====
Data count is reduced to 128
Sync threads when larger than warp
====== ITREE_LEVEL 2 offset 32 ntX is 128 MIN_REDUCE_SYNC_SIZE 32 ====
Sync threads when smaller than warp
Data count is reduced to 64
====== ITREE_LEVEL 3 offset 16 ntX is 128 MIN_REDUCE_SYNC_SIZE 32 ====
Sync threads when smaller than warp
Data count is reduced to 32
====== ITREE_LEVEL 4 offset 8 ntX is 128 MIN_REDUCE_SYNC_SIZE 32 ====
Sync threads when smaller than warp
Data count is reduced to 16
====== ITREE_LEVEL 5 offset 4 ntX is 128 MIN_REDUCE_SYNC_SIZE 32 ====
Sync threads when smaller than warp
Data count is reduced to 8
====== ITREE_LEVEL 6 offset 2 ntX is 128 MIN_REDUCE_SYNC_SIZE 32 ====
Sync threads when smaller than warp
Data count is reduced to 4
====== ITREE_LEVEL 7 offset 1 ntX is 128 MIN_REDUCE_SYNC_SIZE 32 ====
Data count is reduced to 2
Finished reduction sum within thread block
End of first pass
Synchronization in second pass after loading data
Data count is reduced to 2
====== ITREE_LEVEL 8 offset 1 ntX is 128 MIN_REDUCE_SYNC_SIZE 32 ====
Data count is reduced to 1
Finished reduction sum within thread block
Synchronization in second pass after reduction sum
Result -- total sum 19900
This example is for an array that is 200 integers long with each element initialized to its index value. We suggest that you follow along with the source code and figure 12.1 to understand what is happening. The start and end of the first pass and the second pass are printed out. We can see the data count being reduced by a factor of two until there are only two left at the end of the first pass. The second pass quickly reduces this to a single value containing the summation.
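The two-pass structure can be mimicked on the host to confirm the printed total. This is a plain C sketch (not the book's GPU code) of a 200-element run where element i holds the value i:

```c
// Host-side mimic of the two-pass reduction: pass one produces one
// partial sum per block, pass two adds the partial sums together.
// Assumes at most 64 blocks, which is plenty for this small sketch.
double two_pass_sum(const double *x, int nsize, int blocksize)
{
   double partial[64] = {0.0};
   int nblocks = (nsize + blocksize - 1) / blocksize;
   for (int ib = 0; ib < nblocks; ib++) {      // first pass, one per block
      for (int i = ib*blocksize; i < (ib+1)*blocksize && i < nsize; i++) {
         partial[ib] += x[i];
      }
   }
   double total = 0.0;                         // second pass, single block
   for (int ib = 0; ib < nblocks; ib++) total += partial[ib];
   return total;
}
```

For the 200-element index-valued array the result is 0 + 1 + ... + 199 = 19900, the same total the GPU run prints.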
We have shown this thread block reduction as a general introduction to kernels which require thread cooperation. You can see how complicated this is, especially compared to the single line needed for the intrinsic call in Fortran. In the process we have also gained a lot of speedup over the CPU and kept the data on the GPU for this operation. This algorithm can be further optimized, but you may also consider using some library services such as CUDA UnBound (CUB), Thrust, or other GPU libraries.
CUDA code only runs on Nvidia GPUs. But AMD has implemented a similar GPU language and named it the Heterogeneous Interface for Portability (HIP). It is part of the Radeon Open Compute platform (ROCm) suite of tools from AMD. If you program in the HIP language, you then call the hipcc compiler which uses nvcc on Nvidia platforms and hcc on AMD GPUs.
To try these examples, you may need to install the ROCm suite of software and tools. The install process frequently changes, so check for the latest instructions. There are some instructions that accompany the examples as well.
Example: Simple makefile for HIPifying a CUDA code
There are two versions of the makefile. One uses hipify-perl and the other uses hipify-clang. The hipify-perl is a simple perl script. For more syntax-aware translations, you can try the hipify-clang. In either case, for more complex programs, you might need to manually complete the last modifications. We'll use the perl version, so start off by linking the Makefile.perl, shown in listing 12.11, to Makefile.
ln -s Makefile.perl Makefile
make
Listing 12.11 A simple makefile for HIP
HIP/StreamTriad/Makefile.perl
 1 all: StreamTriad
 2
 3 CXX = hipcc #A
 4
 5 %.cc : %.cu #B
 6 	hipify-perl $^ > $@ #B
 7
 8 StreamTriad: StreamTriad.o timer.o
 9 	${CXX} -o $@ $^
10
11 clean:
12 	rm -rf StreamTriad *.o StreamTriad.cc
The only real addition to the standard makefile is changing the compiler to hipcc and adding a pattern rule for converting the CUDA source code into a HIP source code. We could just do the code conversion manually and then use the HIP version for both CUDA and AMD GPUs.
There is also good support for HIP in cmake. A typical CMakeLists file for HIP is shown in listing 12.12.
Listing 12.12 Building a HIP program with cmake
HIP/StreamTriad/CMakeLists.txt
 1 cmake_minimum_required (VERSION 2.8.3) #A
 2 project (StreamTriad)
 3
 6 if(NOT DEFINED HIP_PATH) #B
 7     if(NOT DEFINED ENV{HIP_PATH})
 8         set(HIP_PATH "/opt/rocm/hip" CACHE PATH "Path to HIP install")
 9     else()
10         set(HIP_PATH $ENV{HIP_PATH} CACHE PATH "Path to HIP install")
11     endif()
12 endif()
13 set(CMAKE_MODULE_PATH "${HIP_PATH}/cmake" ${CMAKE_MODULE_PATH})
14
15 find_package(HIP REQUIRED) #C
16 if(HIP_FOUND)
17     message(STATUS "Found HIP: " ${HIP_VERSION})
20 endif()
21
22 set(CMAKE_CXX_COMPILER ${HIP_HIPCC_EXECUTABLE}) #D
23 set(MY_HIPCC_OPTIONS )
24 set(MY_HCC_OPTIONS )
25 set(MY_NVCC_OPTIONS )
26
27 # Adds build target of StreamTriad with source code files
28 HIP_ADD_EXECUTABLE(StreamTriad StreamTriad.cc timer.c timer.h) #E
29 target_include_directories(StreamTriad PRIVATE ${HIP_PATH}/include)
30 target_link_directories(StreamTriad PRIVATE ${HIP_PATH}/lib)
31 target_link_libraries(StreamTriad hip_hcc)
32
33 # Cleanup
34 add_custom_target(distclean COMMAND rm -rf CMakeCache.txt CMakeFiles *.o
35    Makefile cmake_install.cmake StreamTriad.dSYM ipo_out.optrpt)
HIP support has been available since version 2.8.3 of cmake. We first try to set different path options for where the HIP install might be located and then call find_package for HIP on line 15. We then set the C++ compiler to hipcc on line 22. The HIP_ADD_EXECUTABLE command adds the build of our executable and we round it out with settings for the HIP header files and libraries on lines 28-31.
Now let's turn our attention to the HIP source in listing 12.13. We highlight the changes from the CUDA version of the source code given in listings 12.5-6.
Listing 12.13 The HIP differences for the stream triad
HIP/StreamTriad/StreamTriad.c
 1 #include "hip/hip_runtime.h" #A

< . . . skipping . . . >

36    // allocate device memory. suffix of _d indicates a device pointer
37    double *a_d, *b_d, *c_d;
38    hipMalloc(&a_d, stream_array_size*sizeof(double)); #B
39    hipMalloc(&b_d, stream_array_size*sizeof(double)); #B
40    hipMalloc(&c_d, stream_array_size*sizeof(double)); #B

< . . . skipping . . . >

46    for (int k=0; k<NTIMES; k++){
47       cpu_timer_start(&ttotal);
48       // copying array data from host to device
49       hipMemcpy(a_d, a, stream_array_size*sizeof(double), hipMemcpyHostToDevice); #C
50       hipMemcpy(b_d, b, stream_array_size*sizeof(double), hipMemcpyHostToDevice); #C
51       // cuda memcopy to device returns after buffer available, so synchronize to
52       // get accurate timing for kernel only
53       hipDeviceSynchronize(); #D
54
55       cpu_timer_start(&tkernel);
56       // launch stream triad kernel
57       hipLaunchKernelGGL(StreamTriad, dim3(gridsize), dim3(blocksize), 0, 0, #E
             stream_array_size, scalar, a_d, b_d, c_d); #E
58       // need to force completion to get timing
59       hipDeviceSynchronize(); #D
60       tkernel_sum += cpu_timer_stop(tkernel);
61
62       // cuda memcpy from device to host blocks for completion
          // so no need for synchronize
63       hipMemcpy(c, c_d, stream_array_size*sizeof(double), hipMemcpyDeviceToHost); #C

< . . . skipping . . . >

72    }

< . . . skipping . . . >

75
76    hipFree(a_d); #F
77    hipFree(b_d); #F
78    hipFree(c_d); #F
The basic change to convert from CUDA source to HIP source is to replace all occurrences of cuda in the source with hip. The only more significant change is to the kernel launch call, where HIP uses a more traditional syntax than the triple chevron used in CUDA. Oddly enough, the greatest changes would be to use the correct terminology in the variable naming for the two languages.
With the overwhelming need for portable GPU code, a new GPU programming language, OpenCL, emerged in 2008. OpenCL is an open standard GPU language that can run on both Nvidia and AMD/ATI graphics cards as well as many other hardware devices. The OpenCL standard effort was led by Apple with many other organizations involved. One of the nice things about OpenCL is that you can use virtually any C or even C++ compiler for the host code. For the GPU device code, OpenCL initially was based on a subset of C99. Recently, the 2.1 and 2.2 versions of OpenCL added C++14 support, but implementations are still not available.
The OpenCL release took off with a lot of initial excitement. Finally here was a way to write portable GPU code. For example, GIMP announced that they would support OpenCL as a way for GPU acceleration to be made available on many hardware platforms. The reality has been less compelling. Many feel that OpenCL is too low-level and verbose for widespread acceptance. It may even be that its eventual role is as the low-level portability layer for higher level languages. But its value as a portable language across a diverse set of hardware devices has been demonstrated with its acceptance within the embedded device community for Field Programmable Gate Arrays (FPGAs).
One of the reasons OpenCL is thought to be verbose is that the device selection is more complicated (and powerful). You have to detect and select the device you will run on. This can amount to a hundred lines of code just to get started. Nearly everyone who uses OpenCL writes a library to handle the low-level concerns. We are no exception. Our library is called EZCL. Nearly every OpenCL call is wrapped with at least a light layer to handle the error conditions. Device detection, compiling code and error handling consume a lot of lines of code. We'll use an abbreviated version of our EZCL library, called EZCL_Lite, in our examples so that you can see the actual OpenCL calls. The EZCL_Lite routines are used to select the device and set it up for the application, compile the device code and handle the errors. The code for these operations is too long to show here, so look at the examples at https://github.com/EssentialsofParallelComputing/Chapter12 in the OpenCL directory. The full EZCL library is also available in the directory. The EZCL routines give detailed errors with the calls and on which line in the source code the error occurs.
Before you start out trying the OpenCL code, check to see if you have the proper setup and devices. For this, you can use the clinfo command.
Example: Getting information about the OpenCL installation
Run the OpenCL info command
clinfo
If you get the following output, OpenCL is not set up or you do not have an appropriate OpenCL device.
Number of platforms 0
If you don't have the clinfo command, try installing it with the appropriate command for your system. For Ubuntu, it is
sudo apt install clinfo
The examples that go along with the chapter include some brief hints for installation of OpenCL, but check for the latest information for your system. OpenCL has an extension that provides a detailed model for how each device should set up its driver in its Installable Client Driver (ICD) specification. This permits multiple OpenCL platforms and drivers to be available for an application.
The changes to a standard makefile to incorporate the changes for OpenCL are not too complicated. The typical changes are shown in listing 12.14.
Example: Using the simple makefile for OpenCL
- ln -s Makefile.simple Makefile
- Build the application with make
- Run the application with ./StreamTriad
Listing 12.14 OpenCL simple makefile
OpenCL/StreamTriad/Makefile.simple
 1 all: StreamTriad
 2
 3 #CFLAGS = -DDEVICE_DETECT_DEBUG=1 #A
 4 #OPENCL_LIB = -L<path>
 5
 6 %.inc : %.cl #B
 7 	./embed_source.pl $^ > $@ #B
 8
 9 StreamTriad.o: StreamTriad.c StreamTriad_kernel.inc
10
11 StreamTriad: StreamTriad.o timer.o ezclsmall.o
12 	${CC} -o $@ $^ ${OPENCL_LIB} -lOpenCL
13
14 clean:
15 	rm -rf StreamTriad *.o StreamTriad_kernel.inc
The makefile includes a way to set the DEVICE_DETECT_DEBUG flag to print out detailed information on the GPU devices available. This flag turns on more verbosity in the ezclsmall.c source code. It can be helpful for fixing problems with device detection or getting the wrong device. There is also the addition of a pattern rule on line 6 that will embed the OpenCL source into the program for use at run time. This perl script converts the source into an include file, which appears as a dependency on line 9. It will be included in the StreamTriad.c file with an include statement. The embed_source.pl utility is one that we developed to link the OpenCL source directly into the executable. See the chapter examples for the source to this utility. The common way for OpenCL code to function is to have a separate source file that must be located at run-time to be compiled once the device is known. Using a separate file creates problems with it not being able to be found or getting the wrong version of the file. We strongly recommend embedding the source into the executable to avoid these problems.
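The effect of embed_source.pl can be seen with a hand-written equivalent: the kernel text simply becomes a C string literal compiled into the executable, so there is no file lookup at run time. A minimal sketch follows; the literal here is an illustrative stand-in for the generated StreamTriad_kernel.inc, not its actual contents.

```c
#include <string.h>

// What an embedded-source include file boils down to: the OpenCL kernel
// text lives in a string constant inside the executable.
static const char *StreamTriad_kernel_source =
   "__kernel void StreamTriad(const int n, const double scalar,\n"
   "   __global const double *a, __global const double *b,\n"
   "   __global double *c)\n"
   "{\n"
   "   int i = get_global_id(0);\n"
   "   if (i >= n) return;\n"
   "   c[i] = a[i] + scalar*b[i];\n"
   "}\n";

// Accessor so the host code can hand the source to the OpenCL compiler.
const char *stream_triad_source(void) { return StreamTriad_kernel_source; }
```

The string is then passed to clCreateProgramWithSource for just-in-time compilation on whatever device was detected.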
We can also use the cmake support for OpenCL in our build system as shown in listing 12.15.
Listing 12.15 OpenCL cmake file
OpenCL/StreamTriad/CMakeLists.txt
 1 cmake_minimum_required (VERSION 3.1) #A
 2 project (StreamTriad)
 3
 4 if (DEVICE_DETECT_DEBUG) #B
 5     add_definitions(-DDEVICE_DETECT_DEBUG=1) #B
 6 endif (DEVICE_DETECT_DEBUG) #B
 7
 8 find_package(OpenCL REQUIRED) #A
 9 set(HAVE_CL_DOUBLE ON CACHE BOOL "Have OpenCL Double") #C
10 set(NO_CL_DOUBLE OFF) #C
11 include_directories(${OpenCL_INCLUDE_DIRS})
12
13 # Adds build target of StreamTriad with source code files
14 add_executable(StreamTriad StreamTriad.c ezclsmall.c ezclsmall.h timer.c timer.h)
15 target_link_libraries(StreamTriad ${OpenCL_LIBRARIES})
16 add_dependencies(StreamTriad StreamTriad_kernel_source)
17
18 ########### embed source target ############## #D
19 add_custom_command(OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/StreamTriad_kernel.inc #D
20    COMMAND ${CMAKE_SOURCE_DIR}/embed_source.pl #D
       ${CMAKE_SOURCE_DIR}/StreamTriad_kernel.cl > StreamTriad_kernel.inc #D
21    DEPENDS StreamTriad_kernel.cl ${CMAKE_SOURCE_DIR}/embed_source.pl) #D
22 add_custom_target(StreamTriad_kernel_source ALL DEPENDS #D
      ${CMAKE_CURRENT_BINARY_DIR}/StreamTriad_kernel.inc) #D
23
24 # Cleanup
25 add_custom_target(distclean COMMAND rm -rf CMakeCache.txt CMakeFiles
26    Makefile cmake_install.cmake StreamTriad.dSYM ipo_out.optrpt)
27
28 SET_DIRECTORY_PROPERTIES(PROPERTIES ADDITIONAL_MAKE_CLEAN_FILES "StreamTriad_kernel.inc")
OpenCL support in cmake was added at version 3.1. We put this version requirement at the top of the CMakeLists.txt file at line 1. There are a few other special things to note. For this example, we can use the -DDEVICE_DETECT_DEBUG=1 option to the cmake command to turn on the verbosity for the device detection. Also we have a way to turn on and off support for OpenCL double precision. We use this in the EZCL_Lite code to set the just-in-time (JIT) compile flag for the OpenCL device code. Last, we add a custom command in lines 19-22 for embedding the OpenCL device source into the executable.
The source code for the OpenCL kernel is in a separate file called StreamTriad_kernel.cl as shown in listing 12.16.
Listing 12.16 OpenCL kernel
OpenCL/StreamTriad/StreamTriad_kernel.cl
 1 // OpenCL kernel version of stream triad
 2 __kernel void StreamTriad( #A
 3    const int n,
 4    const double scalar,
 5    __global const double *a,
 6    __global const double *b,
 7    __global double *c)
 8 {
 9    int i = get_global_id(0); #B
10
11    // Protect from going out-of-bounds
12    if (i >= n) return;
13
14    c[i] = a[i] + scalar*b[i];
15 }
Compare this kernel code to the kernel code for CUDA in listing 12.4. The OpenCL code is nearly identical except for __kernel replacing __global__ on the subroutine declaration, the __global attribute added to pointer arguments, and a different way of getting the thread index. Also, the CUDA kernel code is in the same .cu file as the source for the host while the OpenCL code is in a separate .cl file. We could have separated out the CUDA code into its own .cu file and put the host code in a standard C++ source file. This would be very similar to the structure we are using for our OpenCL application. So many of the differences between the kernel codes for CUDA and OpenCL are superficial.
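The thread-index computations in the two dialects are equivalent. A plain C sketch of the mapping (these are stand-in functions modeled on, not taken from, the two runtimes' built-ins):

```c
// CUDA computes the global index from block and thread coordinates:
//    int i = blockIdx.x * blockDim.x + threadIdx.x;
int cuda_style_global_id(int blockIdx_x, int blockDim_x, int threadIdx_x)
{
   return blockIdx_x * blockDim_x + threadIdx_x;
}

// OpenCL's get_global_id(0) returns the same value directly; it is
// equivalent to work-group id times work-group size plus local id.
int opencl_style_global_id(int group_id, int local_size, int local_id)
{
   return group_id * local_size + local_id;
}
```

Either way, every work item lands on a unique array index, which is why the kernel bodies can be otherwise identical.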
So how different is the OpenCL host side code from the CUDA version? Let's take a look at the OpenCL version in listing 12.17 and compare it to the code in listing 12.5. There are two versions of the OpenCL stream triad -- StreamTriad_simple.c without error checking and StreamTriad.c with error checking. The error checking adds a lot of lines of code that initially just get in the way of understanding what is going on.
Listing 12.17 OpenCL version of stream triad: setup and tear-down
OpenCL/StreamTriad/StreamTriad_simple.c
 5 #include "StreamTriad_kernel.inc"
 6 #ifdef __APPLE_CC__ #A
 7 #include <OpenCL/OpenCL.h> #A
 8 #else #A
 9 #include <CL/cl.h> #A
10 #endif #A
11 #include "ezcl_lite.h" #B

< . . . skipping code . . . >

32    cl_command_queue command_queue;
33    cl_context context;
34    iret = ezcl_devtype_init(CL_DEVICE_TYPE_GPU, &command_queue, &context); #C
35    const char *defines = NULL;
36    cl_program program = ezcl_create_program_wsource(context, defines,
         StreamTriad_kernel_source); #D
37    cl_kernel kernel_StreamTriad = clCreateKernel(program, "StreamTriad", &iret); #E
38
39    // allocate device memory. suffix of _d indicates a device pointer
40    size_t nsize = stream_array_size*sizeof(double);
41    cl_mem a_d = clCreateBuffer(context, CL_MEM_READ_WRITE, nsize, NULL, &iret); #F
42    cl_mem b_d = clCreateBuffer(context, CL_MEM_READ_WRITE, nsize, NULL, &iret); #F
43    cl_mem c_d = clCreateBuffer(context, CL_MEM_READ_WRITE, nsize, NULL, &iret); #F
44
45    // setting work group size and padding to get even number of workgroups
46    size_t local_work_size = 512; #G
47    size_t global_work_size = ( (stream_array_size + local_work_size - 1) #G
         /local_work_size ) * local_work_size; #G

< . . . skipping code . . . >

74    clReleaseMemObject(a_d); #F
75    clReleaseMemObject(b_d); #F
76    clReleaseMemObject(c_d); #F
77
78    clReleaseKernel(kernel_StreamTriad); #H
79    clReleaseCommandQueue(command_queue); #H
80    clReleaseContext(context); #H
81    clReleaseProgram(program); #H
Right at the start of the program, we encounter some real differences at lines 34-37 where we have to find our GPU device and compile our device code. This is done for us behind the scenes in CUDA. Two of the lines of OpenCL code call our EZCL_Lite routines to detect the device and to create the program object. We make these calls because the amount of code required for these functions is too long to show here. The source for these routines is hundreds of lines long, though much of it is error checking. The source is available with the chapter examples at https://github.com/EssentialsofParallelComputing/Chapter12 in the OpenCL/StreamTriad directory. Some of the error checking code has been left out of the short version, StreamTriad_simple.c, but it is in the long version of the code in the file StreamTriad.c. The rest of the set up and tear down code follows the same pattern as we saw in the CUDA code, with a little more cleanup required, again related to the device and program source handling.
Now how does the section of code that calls the OpenCL kernel in the timing loop in listing 12.18 compare to the CUDA code from listing 12.6?
Listing 12.18 OpenCL version of stream triad: kernel call and timing loop
OpenCL/StreamTriad/StreamTriad_simple.c
49    for (int k=0; k<NTIMES; k++){
50       cpu_timer_start(&ttotal);
51       // copying array data from host to device
52       iret=clEnqueueWriteBuffer(command_queue, a_d, CL_FALSE, 0, nsize, &a[0], #A
            0, NULL, NULL); #A
53       iret=clEnqueueWriteBuffer(command_queue, b_d, CL_TRUE, 0, nsize, &b[0], #A
            0, NULL, NULL); #A
54
55       cpu_timer_start(&tkernel);
56       // set stream triad kernel arguments
57       iret=clSetKernelArg(kernel_StreamTriad, 0, sizeof(cl_int),
            (void *)&stream_array_size); #B
58       iret=clSetKernelArg(kernel_StreamTriad, 1, sizeof(cl_double), (void *)&scalar); #B
59       iret=clSetKernelArg(kernel_StreamTriad, 2, sizeof(cl_mem), (void *)&a_d); #B
60       iret=clSetKernelArg(kernel_StreamTriad, 3, sizeof(cl_mem), (void *)&b_d); #B
61       iret=clSetKernelArg(kernel_StreamTriad, 4, sizeof(cl_mem), (void *)&c_d); #B
62       // call stream triad kernel
63       clEnqueueNDRangeKernel(command_queue, kernel_StreamTriad, 1, NULL, #C
            &global_work_size, &local_work_size, 0, NULL, NULL); #C
64       // need to force completion to get timing
65       clEnqueueBarrier(command_queue);
66       tkernel_sum += cpu_timer_stop(tkernel);
67
68       iret=clEnqueueReadBuffer(command_queue, c_d, CL_TRUE, 0, nsize, c, #A
            0, NULL, NULL); #A
69       ttotal_sum += cpu_timer_stop(ttotal);
70    }
What is happening on lines 57-61? OpenCL requires a separate call for every kernel argument. If we check the return code from each, it is even more lines. This is a lot more verbose than the single line 53 in listing 12.6 in the CUDA version. But there is a direct correspondence between the two versions. OpenCL is just more verbose in describing the operations to pass the arguments.
Except for the device detection and program compilation, the programs are very similar in their operations. The biggest difference is the syntax used in the two languages. In listing 12.19, we show a rough call sequence for the device detection and the create program calls. What makes these routines long is the error checking and the handling required for special cases. For these two functions, it is important to have good error handling. We need the compiler report for an error in our source code or if it got the wrong GPU device.
Listing 12.19 OpenCL support library ezcl_lite
OpenCL/StreamTriad/ezcl_lite.c
/* init and finish routine */
cl_int ezcl_devtype_init(cl_device_type device_type,
      cl_command_queue *command_queue, cl_context *context);
   clGetPlatformIDs -- first to get number of platforms and allocate
   clGetPlatformIDs -- now get platforms
   Loop on number of platforms and
      clGetDeviceIDs -- once to get number of devices and allocate
      clGetDeviceIDs -- get devices
      check for double precision support -- clGetDeviceInfo
   End loop
   clCreateContext
   clCreateCommandQueue

/* kernel and program routines */
cl_program ezcl_create_program_wsource(cl_context context,
      const char *defines, const char *source);
   clCreateProgramWithSource
   set a compile string (hardware specific options)
   clBuildProgram
   Check for error, if found
      clGetProgramBuildInfo and printout compile report
   End error handling
We suggest the use of our EZCL library or one of the many other middleware libraries for OpenCL.
We conclude this presentation on OpenCL with a nod to the many language interfaces that have been created for it. There is a C++, Python, Perl and Java version. In each of these languages, a higher-level interface has been created that hides some of the details in the C version of OpenCL.
There has been an unofficial C++ version available since OpenCL 1.2. The implementation is just a thin layer on top of the C version of OpenCL. Despite failure to get approved by the standards committee, it is completely usable by developers. It is available at https://github.com/KhronosGroup/OpenCL-CLHPP. The formal approval of C++ in OpenCL has only recently occurred but we are still waiting on implementations.
The sum reduction in OpenCL is very similar to that in CUDA. Rather than step through the code, we'll just look at the differences in the kernel source. Shown first in figure 12.3 is the side-by-side difference of the sum_within_block, the common routine used by both kernels.
Figure 12.3 Comparison of OpenCL and CUDA reduction kernels: sum_within_block

The differences in this device kernel called by another kernel begin with the attributes on the declaration. CUDA requires a __device__ attribute on the declaration while OpenCL does not. For the arguments, passing in the scratchpad array requires a __local attribute that CUDA does not need. The next difference is the syntax for getting the local thread index and block (tile) size. The synchronization calls are also different. At the top of the routine, a warp size is defined by a macro to help with portability between Nvidia and AMD GPUs. CUDA defines this as a warpSize variable. For OpenCL, it is passed in with a compiler define. We also change the terminology from block to tile in the actual code to stay consistent with each language's terminology.
The next routine is the first of the two kernel passes, called stage1of2, in figure 12.4.
Figure 12.4 Comparison of first kernel for reduction sum.

This kernel will be called from the host. The __global__ attribute for CUDA becomes __kernel for OpenCL. We also have to add the __global attribute to the pointer arguments for OpenCL. The next difference is an important one to take note of. In CUDA, we declare the scratchpad in shared memory as an extern __shared__ variable in the body of the kernel. On the host side, the size of this shared memory space is given as a number of bytes in the optional third argument in the triple chevron brackets. OpenCL does this differently. It is passed as the last argument in the argument list with the __local attribute. On the host side, the memory is specified in the set argument call for the fourth kernel argument:
clSetKernelArg(reduce_sum_1of2, 4, local_work_size*sizeof(cl_double), NULL);
The size is the third argument in the call. The rest of the changes are the syntax to get the thread parameters and the synchronization call.
The last part of the comparison is the second pass of the sum reduction kernel in figure 12.5.
Figure 12.5 Comparison of the second pass for the reduction sum.

We've already seen all of the change patterns in the second kernel. We still have the differences in the declaration of the kernel and the arguments. The local scratch array also has the same differences as the kernel for the first pass. The thread parameters and the synchronization also have the same expected differences.
Looking back at the three comparisons in figures 12.3-5, it is what we didn't have to note that becomes apparent. The bodies of the kernels are essentially the same. The only difference is the syntax for the synchronization call.
The host side code for the sum reduction in OpenCL is shown in listing 12.20.
Listing 12.20 Host code for the OpenCL sum reduction
OpenCL/SumReduction/SumReduction.c
20    cl_context context;
21    cl_command_queue command_queue;
22    ezcl_devtype_init(CL_DEVICE_TYPE_GPU, &command_queue, &context);
23
24    const char *defines = NULL;
25    cl_program program = ezcl_create_program_wsource(context, defines,
         SumReduction_kernel_source);
26    cl_kernel reduce_sum_1of2=clCreateKernel(program, "reduce_sum_stage1of2_cl", &iret); #A
27    cl_kernel reduce_sum_2of2=clCreateKernel(program, "reduce_sum_stage2of2_cl", &iret); #A
28
29    struct timespec tstart_cpu;
30    cpu_timer_start(&tstart_cpu);
31
32    size_t local_work_size = 128;
33    size_t global_work_size = ((nsize + local_work_size - 1)
         /local_work_size) * local_work_size;
34    size_t nblocks = global_work_size/local_work_size;
35
36    cl_mem dev_x = clCreateBuffer(context, CL_MEM_READ_WRITE,
         nsize*sizeof(double), NULL, &iret);
37    cl_mem dev_total_sum = clCreateBuffer(context, CL_MEM_READ_WRITE,
         1*sizeof(double), NULL, &iret);
38    cl_mem dev_redscratch = clCreateBuffer(context, CL_MEM_READ_WRITE,
         nblocks*sizeof(double), NULL, &iret);
39
40    clEnqueueWriteBuffer(command_queue, dev_x, CL_TRUE, 0,
         nsize*sizeof(cl_double), &x[0], 0, NULL, NULL);
41
42    clSetKernelArg(reduce_sum_1of2, 0, sizeof(cl_int), (void *)&nsize); #B
43    clSetKernelArg(reduce_sum_1of2, 1, sizeof(cl_mem), (void *)&dev_x); #B
44    clSetKernelArg(reduce_sum_1of2, 2, sizeof(cl_mem), (void *)&dev_total_sum); #B
45    clSetKernelArg(reduce_sum_1of2, 3, sizeof(cl_mem), (void *)&dev_redscratch); #B
46    clSetKernelArg(reduce_sum_1of2, 4, local_work_size*sizeof(cl_double), NULL); #B
47
48    clEnqueueNDRangeKernel(command_queue, reduce_sum_1of2, 1, NULL, &global_work_size, #B
         &local_work_size, 0, NULL, NULL); #B
49
50    if (nblocks > 1) { #C
51       clSetKernelArg(reduce_sum_2of2, 0, sizeof(cl_int), (void *)&nblocks); #D
52       clSetKernelArg(reduce_sum_2of2, 1, sizeof(cl_mem), (void *)&dev_total_sum); #D
53       clSetKernelArg(reduce_sum_2of2, 2, sizeof(cl_mem), (void *)&dev_redscratch); #D
54       clSetKernelArg(reduce_sum_2of2, 3,
            local_work_size*sizeof(cl_double), NULL); #D
55
56       clEnqueueNDRangeKernel(command_queue, reduce_sum_2of2, 1, NULL, &local_work_size, #D
            &local_work_size, 0, NULL, NULL); #D
57    }
58
59    double total_sum;
60
61    iret=clEnqueueReadBuffer(command_queue, dev_total_sum, CL_TRUE, 0,
         1*sizeof(cl_double), &total_sum, 0, NULL, NULL);
62
63    printf("Result -- total sum %lf \n",total_sum);
64
65    clReleaseMemObject(dev_x);
66    clReleaseMemObject(dev_redscratch);
67    clReleaseMemObject(dev_total_sum);
68
69    clReleaseKernel(reduce_sum_1of2);
70    clReleaseKernel(reduce_sum_2of2);
71    clReleaseCommandQueue(command_queue);
72    clReleaseContext(context);
73    clReleaseProgram(program);
The call to the first kernel also creates a local scratchpad array on line 46. The intermediate results are stored back into the redscratch array created on line 38. If there is more than one block, a second pass is needed. The redscratch array is passed back in to complete the reduction. Note that the kernel parameters in arguments 5 and 6 are set to local_work_size for a single work group. This is so a synchronization can be done across all the remaining data and another pass will not be needed.
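The two-stage algorithm is easier to see without the OpenCL boilerplate. The following serial C++ sketch is our own illustrative code (the function `two_stage_sum` and the parameter `block_size` are names we chose to mirror the kernels), not the device code itself:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Serial sketch of the two-stage reduction pattern. Stage 1 mirrors
// reduce_sum_stage1of2_cl: each "work group" of block_size elements
// reduces its chunk into one entry of the redscratch array. Stage 2
// mirrors reduce_sum_stage2of2_cl: a single group reduces redscratch
// down to the final total.
double two_stage_sum(const std::vector<double>& x, std::size_t block_size) {
   std::size_t nblocks = (x.size() + block_size - 1) / block_size;
   std::vector<double> redscratch(nblocks, 0.0);

   for (std::size_t b = 0; b < nblocks; ++b) {            // stage 1
      std::size_t begin = b * block_size;
      std::size_t end = std::min(begin + block_size, x.size());
      for (std::size_t i = begin; i < end; ++i) redscratch[b] += x[i];
   }

   double total_sum = 0.0;                                // stage 2
   for (std::size_t b = 0; b < nblocks; ++b) total_sum += redscratch[b];
   return total_sum;
}
```

On the GPU, stage 1 runs one work group per scratch entry in parallel, and stage 2 is launched with a single work group so that a local synchronization can finish the job in one pass.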
SYCL started out in 2014 as an experimental C++ implementation on top of OpenCL. The goal of the developers creating SYCL is a more natural extension of the C++ language than the add-on feeling of OpenCL with the C language. It is being developed as a cross-platform abstraction layer that leverages the portability and efficiency of OpenCL. Its experimental language focus changed suddenly when Intel chose it as one of their major language pathways for the announced Aurora HPC system. The Aurora system will use the new Intel discrete GPUs that are under development. Intel has proposed some additions to the SYCL standard that they have prototyped in their Data Parallel C++ (DPC++) compiler in their oneAPI open programming system.
You can get introduced to SYCL in several ways. Some of these even avoid having to install the software or have the right hardware. You might first try out the following cloud-based systems.
- Interactive SYCL tutorial on the tech.io website, https://tech.io/playgrounds/48226/introduction-to-sycl.
- Intel cloud version of oneAPI and DPC++, https://software.intel.com/en-us/oneapi -- register to use.
You can also download and install versions of SYCL.
- ComputeCpp community edition, https://developer.codeplay.com/products/computecpp/ce/home -- register to download.
- Intel DPC++ compiler, https://github.com/intel/llvm/blob/sycl/sycl/doc/GetStartedGuide.md
Intel also provides Docker file setup instructions at
We'll work with Intel's DPC++ version of SYCL. There are instructions to set up a VirtualBox installation of oneAPI with the examples that accompany this chapter at https://github.com/EssentialsofParallelComputing/Chapter12 in the README.virtualbox file. You should be able to run VirtualBox on nearly any operating system. Let's start off with a simple makefile for the DPC++ compiler in listing 12.21.
Listing 12.21 Simple makefile for DPCPP version of SYCL
DPCPP/StreamTriad/Makefile
 1 CXX = dpcpp                       #A
 2 CXXFLAGS = -std=c++17 -fsycl -O3  #B
 3
 4 all: StreamTriad
 5
 6 StreamTriad: StreamTriad.o timer.o
 7 	$(CXX) $(CXXFLAGS) $^ -o $@
 8
 9 clean:
10 	-rm -f StreamTriad.o StreamTriad
Setting the C++ compiler to the Intel dpcpp compiler takes care of the paths, libraries, and include files. The only other requirement is to set some flags for the C++ compiler. The SYCL source for our example is given in listing 12.22.
Listing 12.22 Stream triad example for DPCPP version of SYCL
DPCPP/StreamTriad/StreamTriad.cc
 1 #include <chrono>
 2 #include "CL/sycl.hpp"       #A
 3
 4 namespace Sycl = cl::sycl;   #B
 5 using namespace std;
 6
 7 int main(int argc, char * argv[])
 8 {
 9    chrono::high_resolution_clock::time_point t1, t2;
10
11    size_t nsize = 10000;
12    cout << "StreamTriad with " << nsize << " elements" << endl;
13
14    // host data
15    vector<double> a(nsize,1.0);  #C
16    vector<double> b(nsize,2.0);  #C
17    vector<double> c(nsize,-1.0); #C
18
19    t1 = chrono::high_resolution_clock::now();
20
21    Sycl::queue Queue(Sycl::cpu_selector{}); #D
22
23    const double scalar = 3.0;
24
25    Sycl::buffer<double,1> dev_a { a.data(), Sycl::range<1>(a.size()) }; #E
26    Sycl::buffer<double,1> dev_b { b.data(), Sycl::range<1>(b.size()) }; #E
27    Sycl::buffer<double,1> dev_c { c.data(), Sycl::range<1>(c.size()) }; #E
28
29    Queue.submit([&](Sycl::handler& CommandGroup) { #F
30
31       auto a = dev_a.get_access<Sycl::access::mode::read>(CommandGroup);  #G
32       auto b = dev_b.get_access<Sycl::access::mode::read>(CommandGroup);  #G
33       auto c = dev_c.get_access<Sycl::access::mode::write>(CommandGroup); #G
34
35       CommandGroup.parallel_for<class StreamTriad>(Sycl::range<1>{nsize}, #H
                                                     [=] (Sycl::id<1> it){  #H
36          c[it] = a[it] + scalar * b[it];
37       });
38    });
39    Queue.wait(); #I
40
41    t2 = chrono::high_resolution_clock::now();
42    double time1 = chrono::duration_cast<chrono::duration<double> >(t2 - t1).count();
43    cout << "Runtime is " << time1*1000.0 << " msecs " << endl;
44 }
The first SYCL function selects a device and creates a queue to work on it. We ask for a CPU, though this code would also work for GPUs with unified memory.
Sycl::queue Queue(Sycl::cpu_selector{});
We select a CPU for maximum portability so that the code will run on most systems. To make this code work on GPUs without unified memory, we would need to add explicit copies of data from one memory space to another.
The default selector preferentially finds a GPU, but falls back to a CPU. If we wanted to select only a GPU or CPU, we could also specify other selectors such as
Sycl::queue Queue(Sycl::default_selector{}); // uses the default device
Sycl::queue Queue(Sycl::gpu_selector{});     // finds a GPU device
Sycl::queue Queue(Sycl::cpu_selector{});     // finds a CPU device
Sycl::queue Queue(Sycl::host_selector{});    // runs on the host (CPU)
The last option means to run on the host as if there were no SYCL or OpenCL code. The setup of the device and queue is far simpler than what we did in OpenCL.
Now we need to set up the device buffers with the SYCL buffer.
Sycl::buffer<double,1> dev_a { a.data(), Sycl::range<1>(a.size()) };
The first argument to the buffer template is the data type and the second is the dimensionality of the data. Then we give it the variable name, dev_a. The first argument to the variable is the host data array to use for initializing the device array and the second is the index set to use. In this case, we specify a one-dimensional range from 0 to the size of the a variable.
On line 29, we encounter the first lambda, used to create a command group handler for the Queue.
Queue.submit([&](Sycl::handler& CommandGroup)
We introduced lambdas in section 10.2.1. The lambda capture clause, [&], specifies capturing outside variables used in the routine by reference. For this lambda, the capture gets nsize, scalar, dev_a, dev_b, and dev_c for use in the lambda. We could specify it with just the single capture setting of by reference [&], or the form below where we specify each variable that will be captured. Good programming practice would prefer the latter, but the lists can get long.
Queue.submit([&nsize, &scalar, &dev_a, &dev_b, &dev_c](Sycl::handler& CommandGroup)
In the body of the lambda, we get access to the device arrays and rename them for use within the device routine. This is equivalent to a list of arguments for the command group handler. We then create the first task for the command group, a parallel_for. The parallel_for also is defined with a lambda.
CommandGroup.parallel_for<class StreamTriad>(Sycl::range<1>{nsize},[=] (Sycl::id<1> it)
The name of the lambda is StreamTriad. We then tell it that we will operate over a one-dimensional range that goes from 0 to nsize. The capture clause, [=], captures the a, b, and c variables by value. Determining whether to capture by reference or by value is tricky. But if the code gets shipped to the GPU, the original reference may go out of scope and a reference will no longer be valid. We last create a one-dimensional index variable, it, to iterate over the range.
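The value-versus-reference distinction can be seen in plain C++, independent of SYCL. A minimal sketch (the function and variable names here are ours):

```cpp
#include <utility>

// A copy made with [=] is frozen at capture time; a [&] capture follows
// later changes to the variable (and would dangle if n went out of scope
// before the lambda ran, as can happen when a kernel outlives host scope).
std::pair<int,int> capture_demo() {
   int n = 10;
   auto by_value = [=] { return n; };      // copies n now
   auto by_reference = [&] { return n; };  // refers to the original n
   n = 20;
   return { by_value(), by_reference() };  // yields {10, 20}
}
```

This is why the kernel lambda captures by value: the copy stays valid on the device even after the host variables are gone.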
By now, you are seeing that the differences between CPU and GPU kernels are not all that big. So why not generate each of them using C++ polymorphism and templates? Well, that is exactly what a couple of libraries developed by Department of Energy (DOE) research laboratories have done. These projects were started to tackle the port of many of their codes to the new hardware architectures. The Kokkos system has been created by Sandia National Laboratories and has gained a wide following. Lawrence Livermore National Laboratory has a similar project by the name of RAJA. Both of these projects have already succeeded in their goal of a single-source, many-platform capability. They have similarities in a lot of respects to the SYCL language that we have seen in section 12.4. Indeed, they have borrowed concepts from each other as they strive for performance portability. Each of these provides a library that is a fairly light layer on top of lower-level parallel programming languages. We'll take a short look at each of these.
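To picture the template mechanism these libraries rely on, here is a small sketch of our own (not Kokkos or RAJA code): the same kernel body is handed to different back-ends selected by a tag type. Real libraries substitute OpenMP or CUDA launches where we use two hypothetical serial back-ends to keep the example self-contained.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical execution-space tags; a real library maps such tags
// to serial, OpenMP, CUDA, and other back-ends.
struct Serial {};
struct Unrolled {};

// One user-visible interface; overload resolution on the tag picks
// the implementation, so the kernel body is written exactly once.
template <class Body>
void run(Serial, std::size_t n, Body body) {
   for (std::size_t i = 0; i < n; ++i) body(i);
}

template <class Body>
void run(Unrolled, std::size_t n, Body body) {
   std::size_t i = 0;
   for (; i + 4 <= n; i += 4) { body(i); body(i+1); body(i+2); body(i+3); }
   for (; i < n; ++i) body(i);
}
```

The kernel body is written once as a lambda and the tag picks the implementation, for example `run(Serial{}, n, [&](std::size_t i){ c[i] = a[i] + scalar*b[i]; });`.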
Kokkos is a well-designed abstraction layer above languages such as OpenMP and CUDA. It has been in development since 2011.
Fieutnxoc esacps: Nkoosk zsp yvr wlgfliono emnda nxeuicteo pacess. Robq otz baeedln jn pvr Okokos lbudi wjrb ukr rsieropdngcno bzfl rx ceakm (tx otonpi xr ulbdi rjwg apsck). Skvm lk eehst toc trebet oveeddlpe ncdr osrthe.
- Kokkos::Serial -DKokkos_ENABLE_SERIAL=On (default on)
- Kokkos::Threads -DKokkos_ENABLE_PTHREAD=On
- Kokkos::OpenMP -DKokkos_ENABLE_OPENMP=On
- Kokkos::Cuda -DKokkos_ENABLE_CUDA=On
- Kokkos::HPX -DKokkos_ENABLE_HPX=On
- Kokkos::ROCm -DKokkos_ENABLE_ROCm=On
For this exercise, we built with the OpenMP back-end as follows and then built and ran the stream triad example.
- git clone https://github.com/kokkos/kokkos
- mkdir build && cd build
- cmake ../kokkos -DKokkos_ENABLE_OPENMP=On
Example: Stream triad in Kokkos
- Go to the stream triad source directory for Kokkos and do an out-of-tree build with cmake
- mkdir build && cd build
- export Kokkos_DIR=${HOME}/kokkos/lib/cmake/Kokkos
- cmake ..
- make
- export OMP_PROC_BIND=true
- export OMP_PLACES=threads
The Kokkos build with cmake has been streamlined so that it is very easy, as shown in listing 12.23. The Kokkos_DIR variable needs to be set to where the cmake configuration file for Kokkos is.
Listing 12.23 Kokkos cmake file
Kokkos/StreamTriad/CMakeLists.txt
1 cmake_minimum_required (VERSION 3.10)
2 project (StreamTriad)
3
4 find_package(Kokkos REQUIRED)  #A
5
6 add_executable(StreamTriad StreamTriad.cc)
7 target_link_libraries(StreamTriad Kokkos::kokkos)  #B
Adding the CUDA option to the Kokkos build will generate a version that will run on Nvidia GPUs. There are many other platforms and languages that Kokkos can handle, with more being developed all the time.
The Kokkos stream triad example in listing 12.24 has some similarities to SYCL in that it uses C++ lambdas to encapsulate functions for either the CPU or GPU. Kokkos also supports functors for this mechanism, but lambdas are easier to use in practice.
Listing 12.24 Stream triad example for Kokkos
Kokkos/StreamTriad/StreamTriad.cc
 1 #include <Kokkos_Core.hpp>  #A
 2
 3 using namespace std;
 4
 5 int main (int argc, char *argv[])
 6 {
 7    Kokkos::initialize(argc, argv);{  #B
 8
 9    Kokkos::Timer timer;
10    double time1;
11
12    double scalar = 3.0;
13    size_t nsize = 1000000;
14    Kokkos::View<double *> a( "a", nsize);  #C
15    Kokkos::View<double *> b( "b", nsize);  #C
16    Kokkos::View<double *> c( "c", nsize);  #C
17
18    cout << "StreamTriad with " << nsize << " elements" << endl;
19
20    Kokkos::parallel_for(nsize, KOKKOS_LAMBDA (int i) {  #D
21       a[i] = 1.0;                                       #D
22    });                                                  #D
23    Kokkos::parallel_for(nsize, KOKKOS_LAMBDA (int i) {  #D
24       b[i] = 2.0;                                       #D
25    });                                                  #D
26
27    timer.reset();
28
29    Kokkos::parallel_for(nsize, KOKKOS_LAMBDA (const int i) {  #D
30       c[i] = a[i] + scalar * b[i];                            #D
31    });                                                        #D
32
33    time1 = timer.seconds();
34
35    int icount = 0;
36    for (int i=0; i<nsize && icount < 10; i++){
37       if (c[i] != 1.0 + 3.0*2.0) {
38          cout << "Error with result c[" << i << "]=" << c[i] << endl;
39          icount++;
40       }
41    }
42
43    if (icount == 0) cout << "Program completed without error." << endl;
44    cout << "Runtime is " << time1*1000.0 << " msecs " << endl;
45
46    }
47    Kokkos::finalize();  #E
48    return 0;
49 }
The Kokkos program starts with Kokkos::initialize and ends with Kokkos::finalize. These start up things that are needed for the execution space, such as threads.
Kokkos is unique in that it encapsulates flexible multi-dimensional array allocations as data views that can be switched depending on the target architecture. That is, a different data order can be used for CPU versus GPU. We use Kokkos::View on lines 14-16, though this is only for one-dimensional arrays. The real value comes with multidimensional arrays. The general syntax for a Kokkos::View is
View < double *** , Layout , MemorySpace > name (...);
Memory spaces are an option to the template, but have a default appropriate for the execution space. Some memory spaces are
- HostSpace
- CudaSpace
- CudaUVMSpace
The layout can also be specified, though it has a default appropriate for the memory space
- LayoutLeft: left-most index is stride 1 (default for CudaSpace)
- LayoutRight: right-most index is stride 1 (default for HostSpace)
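The two layouts correspond to the familiar column-major and row-major orderings. An illustrative C++ sketch (our own index helpers, not Kokkos code) for a 2-D view with extents n0 by n1:

```cpp
#include <cassert>
#include <cstddef>

// LayoutLeft ("column major"): the left-most index i is stride 1,
// so element (i, j) lives at i + j*n0 in the flat allocation.
inline std::size_t index_left(std::size_t i, std::size_t j, std::size_t n0) {
   return i + j * n0;
}

// LayoutRight ("row major"): the right-most index j is stride 1,
// so element (i, j) lives at i*n1 + j.
inline std::size_t index_right(std::size_t i, std::size_t j, std::size_t n1) {
   return i * n1 + j;
}
```

With LayoutLeft, consecutive GPU threads indexed by i touch consecutive memory; with LayoutRight, the inner loop over j on a CPU walks memory with stride 1. That is why the defaults differ between CudaSpace and HostSpace.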
The kernels are specified using a lambda syntax in one of three data parallel patterns
- parallel_for
- parallel_reduce
- parallel_scan
On lines 20, 23, and 29 we use the parallel_for pattern. The KOKKOS_LAMBDA macro replaces the [=] or [&] capture syntax. Kokkos takes care of specifying this for you and does it in a much more readable form.
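The idea behind the macro can be sketched in a few lines of plain C++. Our simplified MY_LAMBDA and my_parallel_for below are stand-ins, not the Kokkos definitions: because the library owns the macro, it can expand it to [=] for a device build or to something else for another back-end without any change to user kernels.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Simplified stand-in for KOKKOS_LAMBDA: the capture default lives in
// one place owned by the "library", not scattered through user code.
// A CUDA build of the real macro also adds device annotations here.
#define MY_LAMBDA [=]

// Minimal serial stand-in for the parallel_for pattern.
template <class Body>
void my_parallel_for(std::size_t n, Body body) {
   for (std::size_t i = 0; i < n; ++i) body(i);
}
```

A user kernel then reads much like the listing, capturing pointers by value and writing through them: `my_parallel_for(n, MY_LAMBDA(std::size_t i){ c[i] = a[i] + scalar*b[i]; });` where a, b, and c are raw pointers.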
The RAJA performance portability layer has the goal of achieving portability with a minimum of disruption to existing Lawrence Livermore National Laboratory codes. In many ways it is simpler and easier to adopt than other comparable systems.
Raja can be built with support for
- -DENABLE_OPENMP=On (default on)
- -DENABLE_TARGET_OPENMP=On (default off)
- -DENABLE_CUDA=On (default off)
- -DENABLE_TBB=On (default off)
Raja also has good support for cmake, as shown in listing 12.25.
Listing 12.25 Raja cmake file
Raja/StreamTriad/CMakeLists.txt
 1 cmake_minimum_required (VERSION 3.0)
 2 project (StreamTriad)
 3
 4 find_package(Raja REQUIRED)
 5 find_package(OpenMP REQUIRED)
 6
 7 add_executable(StreamTriad StreamTriad.cc)
 8 target_link_libraries(StreamTriad PUBLIC RAJA)
 9 set_target_properties(StreamTriad PROPERTIES COMPILE_FLAGS ${OpenMP_CXX_FLAGS})
10 set_target_properties(StreamTriad PROPERTIES LINK_FLAGS "${OpenMP_CXX_FLAGS}")
The Raja version of the stream triad takes only a few changes, as shown in listing 12.26. Raja also heavily leverages lambdas to provide portability to CPUs and GPUs.
Listing 12.26 Stream triad example for Raja
Raja/StreamTriad/StreamTriad.cc
 1 #include <chrono>
 2 #include "RAJA/RAJA.hpp"
 3
 4 using namespace std;
 5
 6 int main(int RAJA_UNUSED_ARG(argc), char **RAJA_UNUSED_ARG(argv[]))
 7 {
 8    chrono::high_resolution_clock::time_point t1, t2;
 9    cout << "Running Raja Stream Triad\n";
10
11    const int nsize = 1000000;
12
13    // Allocate and initialize vector data.
14    double scalar = 3.0;
15    double* a = new double[nsize];
16    double* b = new double[nsize];
17    double* c = new double[nsize];
18
19    for (int i = 0; i < nsize; i++) {
20       a[i] = 1.0;
21       b[i] = 2.0;
22    }
23
24    t1 = chrono::high_resolution_clock::now();
25
26    RAJA::forall<RAJA::omp_parallel_for_exec>(RAJA::RangeSegment(0,nsize),[=](int i){ #B
27       c[i] = a[i] + scalar * b[i];  #B
28    });                              #B
29
30    t2 = chrono::high_resolution_clock::now();
31    < ... error checking ... >
42    double time1 = chrono::duration_cast<chrono::duration<double> >(t2 - t1).count();
43    cout << "Runtime is " << time1*1000.0 << " msecs " << endl;
44 }
The required changes for Raja are to include the Raja header file on line 2 and to change the computation loop to the Raja::forall syntax. You can see that they have provided a low entry threshold to gaining performance portability.
To run the Raja test, we have included a script which builds and installs Raja. It then goes on and builds the stream triad code with Raja and runs it.
Listing 12.27 Integrated build and run script for Raja stream triad
Raja/StreamTriad/Setup_Raja.sh
1 #!/bin/sh
2 export INSTALL_DIR=`pwd`/build/Raja
3 export Raja_DIR=${INSTALL_DIR}/share/raja/cmake  #A
4
5 mkdir -p build/Raja_tmp && cd build/Raja_tmp
6 cmake ../../Raja_build -DCMAKE_INSTALL_PREFIX=${INSTALL_DIR}
7 make -j 8 install && cd .. && rm -rf Raja_tmp
8
9 cmake .. && make && ./StreamTriad  #B
12.6 Further explorations
We have just begun to scratch the surface with all of these native GPU languages and performance portability systems. Even with the initial functionality we have shown, you can begin to implement some real application codes. If you get serious about using any of these in your applications, we strongly recommend availing yourself of the many additional resources for the language of your choice.
12.6.1 Additional reading
As the dominant GPU language for many years, there are many materials on CUDA programming. Perhaps the first place to go is the Nvidia Developer's website. There are extensive guides on installing and using CUDA.
The book by Kirk and Hwu has been one of the go-to references on Nvidia GPU programming.
Kirk, David B., and Wen-Mei W. Hwu. Programming Massively Parallel Processors: A Hands-on Approach. Morgan Kaufmann, 2016.
AMD has created a website that covers all aspects of their ROCm ecosystem.
If you want to really learn more about OpenCL, we highly recommend the book by Matthew Scarpino.
Scarpino, Matthew. OpenCL in Action: How to Accelerate Graphics and Computations. Manning Publications, 2011.
A good source of additional information on OpenCL is the International Workshop on OpenCL (IWOCL). They also host an international conference annually. SYCLcon is also hosted through the same site.
International Workshop on OpenCL (IWOCL), https://www.iwocl.org
Khronos is the open standards body for OpenCL, SYCL, and related software. They host the language specifications, forums, and resource lists.
Khronos Group, https://www.khronos.org/opencl/ and https://www.khronos.org/sycl/
For documentation and training materials on Kokkos, see their github repository. Besides their Kokkos software to download, they also have a companion repository for the tutorials they have been giving around the country.
Kokkos team, https://github.com/kokkos/kokkos-tutorials
The Raja team has extensive documentation at their website.
Raja team, https://raja.readthedocs.io
12.6.2 Exercises
- Change the host memory allocation in the CUDA stream triad example, listings 12.1-12.6, to use pinned memory. Do you get a performance improvement?
- For the sum reduction example, try an array size of 18,000 elements all initialized to their index value. Run the CUDA code and then the version in SumReductionRevealed. You may want to adjust the amount of information printed out.
- Convert the CUDA reduction example to HIP by hipifying it.
- For the SYCL example in listing 12.22, initialize the a and b arrays on the GPU device.
- Convert the two initialization loops in the Raja example in listing 12.26 to the Raja::forall syntax. Try running the example with CUDA.
12.7 Summary
We covered a lot of different programming languages in this chapter. But think of them as dialects of a common language rather than completely different ones.
- Use straightforward modifications from the original CPU code for most kernels. This will make the writing of kernels simpler and easier to maintain.
- Careful design of cooperation and comparison in GPU kernels can yield good performance. The key to approaching these operations is breaking down the algorithm into steps and understanding performance properties of the GPU.
- Think about portability from the start. You will avoid having to create more code versions every time you want to run on another hardware platform.
- Consider the single-source performance portability languages. If you will need to run on a variety of hardware, they may be worth the initial difficulty in code development.
[1] See the CUDA installation guide for details (https://docs.nvidia.com/cuda/cuda-installation-guide-linux/).