Chapter 7. Processing truly big datasets with Hadoop and Spark


This chapter covers

  • Understanding distributed computing and when to use it
  • Processing big datasets with Hadoop MapReduce and Hadoop Streaming
  • Choosing between Hadoop and Spark
  • Running map and reduce style jobs in Spark with PySpark

In the previous chapters of the book, we’ve focused on developing a foundational set of programming patterns—in the map and reduce style—that allow us to scale our programming. We can use the techniques we’ve covered so far to make the most of our laptop’s hardware. I’ve shown you how to work on large datasets using techniques like map (chapter 2), reduce (chapter 5), parallelism (chapter 2), and lazy programming (chapter 4). In this chapter, we begin to look at working on big datasets beyond our laptop.

In this chapter, we introduce distributed computing—that is, computing that occurs on more than one computer—and two technologies we’ll use to do distributed computing: Apache Hadoop and Apache Spark. Hadoop is a set of tools that support distributed map and reduce style programming through Hadoop MapReduce. Spark is an analytics toolkit designed to modernize Hadoop. We’ll focus on Hadoop for batch processing of big datasets and on applying Spark in analytics and machine learning use cases.


7.1. Distributed computing

In this chapter, we’ll review the basics of distributed computing—a method of computing where we share not just a single workflow, but tasks and data long-term across a network of computers. Computing in this way has challenges, such as keeping track of all our data and coordinating our work, but offers large benefits in speed when we can parallelize our work.

In chapter 1, I laid out three sizes of datasets. Those that are

  1. small enough to work with in memory on a single computer
  2. too big to work with in memory on a single computer but small enough that we can process them with a single computer
  3. both too big to fit into memory on a single computer and too big to process on a single computer

The first dataset size poses no inherent challenges: most developers can work with these datasets just fine. Somewhere between the second size—too big for memory, but we can still process it locally—and the third size, however, most people will start to say they’re working with big datasets. In other words, they’re starting to have problems doing what they want to do with the datasets, and sometimes rightfully so—if we have a dataset of the third size and only a single computer, we’re out of luck.

Distributed computing solves that problem (figure 7.1). It’s the act of writing and running programs not for a single computer, but for a cluster of them. This cluster of computers works together to execute a task or solve a problem. We can use distributed computing to great effect when we pair it with parallel programming.

Figure 7.1. Distributed computing involves several computers working together to execute a single task.

If we think back to our discussions of parallel programming, the main advantage we talked about was that parallel programming allowed us to do lots of different bits of work all at once. We split the task at hand up into pieces and worked on several pieces at a time. For small problems, this had few, if any, benefits. As tasks got larger, however, we saw the value of parallelization rise. By using distributed computing, we can multiply this effect (figure 7.2).

Figure 7.2. Distributed computing allows us to reduce our compute time by parallelizing our work across multiple machines. We can use distributed computing to solve problems in days, hours, or minutes that would have taken weeks.

When we add computers to our workflow, we’re adding all the processing power of those computers. For example, if each computer we add has four cores, every time we add a new machine to our cluster, we’ll add four additional cores. If we started with a four-core machine, running in parallel might cut our processing time down to one-fourth, but with two machines, we could be down to one-eighth. Adding two more machines might bring us down to one-sixteenth of the time it originally took to process our data in linear time.
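We can sketch this arithmetic as a quick back-of-the-envelope calculation. This is only an illustration that assumes perfect, linear parallelization with no communication overhead; the function name is ours, not part of any framework:

```python
# Ideal (communication-free) runtime after spreading work across a cluster.
def ideal_runtime(linear_hours, machines, cores_per_machine=4):
    """Hours needed if work splits perfectly across every available core."""
    return linear_hours / (machines * cores_per_machine)

print(ideal_runtime(16, machines=1))  # one 4-core machine: 4.0 hours
print(ideal_runtime(16, machines=2))  # two machines: 2.0 hours
print(ideal_runtime(16, machines=4))  # four machines: 1.0 hours
```

Real clusters fall short of this ideal, as the discussion of communication costs below explains.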

And although there is a physical limit to how many processors we can reasonably have on a single machine, there’s no limit to how many processors we can have in a distributed network. Dedicated supercomputers might have hundreds of thousands of processors across tens of thousands of machines, whereas scientific computing networks make hundreds of thousands of computers available to researchers engaged in serious number crunching. More commonly, companies, government entities, not-for-profits, and researchers are all turning to the cloud for on-demand cluster computing. We’ll talk more about that in chapter 12.

Of course, distributed computing is not without its drawbacks. The curse of communication pops up again. If we distribute our work prematurely, we’ll end up losing performance spending too much time talking between computers and processors. A lot of performance improvements at the high-performance limits of distributed computing revolve around optimizing communication between machines.

For most use cases, however, we can rest assured that by the time we’re considering a distributed workflow, our problem is so time-consuming that distributing work is sure to speed things up. One indication is that distributed workflows tend to be measured in minutes or hours, rather than the seconds, milliseconds, or microseconds that we traditionally use to measure compute processes.


7.2. Hadoop for batch processing

In this section, we’ll talk about the fundamentals of Apache Hadoop. Hadoop is a prominent distributed computing framework and one that you can use to tackle even the largest datasets. We’ll first review the different parts of the Hadoop framework, then we’ll write a Hadoop MapReduce job to see the framework in action.

The Hadoop framework focuses specifically on the processing of big datasets on distributed clusters. Hadoop’s basic premise is that we can combine the map and reduce techniques we’ve seen so far, along with the idea of moving our code (not our data), to solve problems with small and large datasets alike.

We can find a lot of similarities between Hadoop and the way we’ve been thinking about computing so far in this book. I’ve been preaching that we should start small (and local) and then scale up as we need more resources. Hadoop promises the same thing. You can develop and test on a single local machine and then scale out to a thousand-machine cluster hosted in the cloud. Hadoop advocates for this in much the same way we do, through a map and reduce style of programming.

7.2.1. Getting to know the five Hadoop modules

The Hadoop framework includes five modules for big dataset processing and cluster computing (figure 7.3):

  1. MapReduce— A way of dividing work into parallelizable chunks
  2. YARN— A scheduler and resource manager
  3. HDFS— The file system for Hadoop
  4. Ozone— A Hadoop extension for object storage and semantic computing
  5. Common— A set of utilities that are shared across the previous four modules
Figure 7.3. The Hadoop framework is made up of five pieces of software, each of which tackles a different big dataset processing problem.

MapReduce is an implementation of the map and reduce steps you’ve already seen in this book that is designed to work in parallel on distributed clusters. YARN is a job scheduling service with cluster management features. HDFS—or Hadoop Distributed File System—is the data storage system of Hadoop. Ozone is a new (version 0.3.0 as I’m writing this) Hadoop project that provides semantic object store capabilities. Common is a set of utilities common to all the Hadoop libraries.

We’ll touch on the first three—MapReduce, YARN, and HDFS—now. These three libraries are the classic Hadoop stack. The Hadoop Distributed File System manages the data, YARN manages tasks, and MapReduce defines the data processing logic (figure 7.4).

Figure 7.4. The classic Hadoop stack is MapReduce, running on top of YARN, running on top of HDFS.
Hadoop’s twist on map and reduce

The main aspect of Hadoop with which we’ll concern ourselves in this book is the MapReduce library. Hadoop MapReduce is a massive data processing library that we can use to scale the map and reduce style of programming up to tens of terabytes or even petabytes by extending it across tens, hundreds, or thousands of worker machines. MapReduce divides programming tasks into two tasks: a map task and a reduce task—just like we saw at the end of chapter 5.

YARN for job scheduling

YARN is a job scheduler and resource manager that splits resource and job management into two components: scheduling and application management. The scheduler, or resource manager, oversees all of the work that is being done and acts as a final decision maker in terms of how resources should be allocated across the cluster. Application managers, or node managers, work at the node (single-machine) level to determine how resources should be allocated within that machine (figure 7.5). Application managers also monitor what’s going on within their node and report that information back to the scheduler.

Figure 7.5. The YARN resource manager oversees the entire job, whereas a node manager oversees what happens within a single node.

We can tie resource managers together in extremely high demand use cases where thousands of nodes are not sufficient. This process is called federation. When we federate YARN resource managers together, we can treat several YARN resource managers as a single resource manager and run them in parallel across multiple subclusters as if they were a single massive cluster.

The data storage backbone of Hadoop: HDFS

The foundation of the Hadoop framework is its distributed file system abstraction, aptly named the Hadoop Distributed File System. The Hadoop authors designed HDFS to work for cases where users want to

  • process big datasets (several terabytes and up; too big for local processing)
  • be flexible in their choice of hardware
  • be protected against hardware failure—a common cluster computing problem

Additionally, HDFS operates based on another key observation: that moving code is faster than moving data. When we introduced parallelization in chapter 2, we talked about how Python’s map moves both code and data. This is effective up to a point, but eventually the cost of moving data around—especially if the data files are large or numerous—becomes too much to justify parallelization. We run into the same problem we saw at the end of chapter 5: the act of parallelization costs more than the benefits of doing the work in parallel.

By distributing the data across the cluster and moving the code to the data, we avoid this problem. Code—even in its lengthiest, most robust forms—will be small and cheaper to move than the data it needs to work on. In the typical case, our data is large and our code is small.
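We can make that observation concrete with a toy measurement. This is only an illustration; count_long_words is a stand-in we made up for "our code":

```python
import sys

def count_long_words(text):
    """Count the words longer than six letters in a block of text."""
    return sum(1 for word in text.split() if len(word) > 6)

# The compiled bytecode of our "code" is a handful of bytes...
code_size = len(count_long_words.__code__.co_code)
# ...while even a modest "dataset" dwarfs it.
data_size = sys.getsizeof("florence nightingale " * 100_000)

print(code_size, data_size)  # the code is orders of magnitude smaller
```

Shipping the small thing (the function) to wherever the big thing (the data) lives is the cheaper move.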

Distributed file systems

HDFS is a reliable, performant foundation for high-performance distributed computing, but with that comes complexity. Because this book is not focused on data engineering, I’ve chosen to omit the details of HDFS. The book Hadoop in Action (Manning, 2010) goes into HDFS in more depth and includes cookbook-style recipes for common HDFS operations. Chuck Lam, the book’s author, introduces Hadoop’s Distributed File System in section 3.1 and does a deep dive into HDFS in chapter 8.


7.3. Using Hadoop to find high-scoring words

Now that we’ve covered the fundamentals of Hadoop, let’s dive into some code to really see how it works. Consider the following scenario. (You can find the data for the scenario in the book’s code repository online: https://github.com/jtwool/mastering-large-datasets.)

Scenario

Two of your friends—one a nurse and the other a pop culture critic—have been arguing for days about a peculiar topic: the relative sophistication of the two seemingly unrelated figures Florence and the Machine (a contemporary English rock band) and Florence Nightingale (a legendary English nurse). To settle their dispute, you’ve been asked to count the frequencies of words longer than six letters occurring in songs by Florence and the Machine and the writings of Florence Nightingale.

To do this we’ll need to do a few things:

  1. Install Hadoop
  2. Prepare a mapper—a Python script to do our map transformation
  3. Prepare a reducer—a Python script to do our reduction
  4. Call the mapper and reducer from the command line

7.3.1. MapReduce jobs using Python and Hadoop Streaming

Before we get into the details of implementation, though, let’s take a look at what Hadoop’s MapReduce does. Hadoop’s MapReduce is a piece of software, written in Java, that we can use to execute MapReduce on distributed systems. When we talk about running Hadoop MapReduce with Python, we are (generally speaking) talking about running Hadoop Streaming, a Hadoop utility for using Hadoop MapReduce with programming languages besides Java.

To run that utility, we’ll call it from the command line along with options such as

  • the mapper
  • the reducer
  • input data files
  • output data location

Hadoop provides an example code snippet demonstrating this command. An annotated version of this snippet appears in figure 7.6.

Figure 7.6. A word count example in Hadoop, using Hadoop Streaming and Unix tools.

The code snippet in figure 7.6 calls on two Unix commands to serve as its mapper and reducer. /bin/cat refers to the Unix concatenate software, and /bin/wc refers to the Unix word count software. Used together like this, cat will print the text and wc will count the words. Hadoop will ensure that these actions are performed in parallel on the documents in the directory located at the input location and the results are written to the output directory.

Once run, the result will be that we can go into whatever directory we pointed output to and retrieve the count of the words. Before we move on to a full-scope example, let’s implement the word count mapper and reducer in Python. To emulate the cat capability in Python, let’s print each word to a new line. To emulate the wc capability, we’ll increment a counter for each word we come across. We’ll need to wrap both of these capabilities in standalone, executable scripts.

The mapper might look like listing 7.1, and the reducer might look like listing 7.2.

Listing 7.1. Word count mapper in Python
#!/usr/bin/env python3
"""Print words to lines"""
import sys

for line in sys.stdin:
  for word in line.split():
    print(word)
Listing 7.2. Word count reducer in Python
#!/usr/bin/env python3
"""Count words"""
import sys
from functools import reduce

print(reduce(lambda count, _: count + 1, sys.stdin, 0))
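
Because Streaming scripts just read stdin and write stdout, we can sanity-check them locally with an ordinary Unix pipe before involving Hadoop at all. The sketch below recreates the two listings as files (the file names are ours) and pipes a couple of lines through them:

```shell
# Recreate the mapper and reducer from listings 7.1 and 7.2.
cat > wc_mapper.py <<'EOF'
#!/usr/bin/env python3
import sys
for line in sys.stdin:
    for word in line.split():
        print(word)
EOF
cat > wc_reducer.py <<'EOF'
#!/usr/bin/env python3
import sys
from functools import reduce
print(reduce(lambda count, _: count + 1, sys.stdin, 0))
EOF
chmod +x wc_mapper.py wc_reducer.py

# Emulate one Hadoop Streaming pass with a plain pipe.
printf 'the quick brown fox\njumps over the lazy dog\n' \
  | ./wc_mapper.py | ./wc_reducer.py
```

The pipe plays the role Hadoop plays in production: feeding lines to the mapper and the mapper's output to the reducer.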

In these two examples, some strange new things are going on. First, we’re reading from stdin. This is because Hadoop handles the opening of files for us, along with chopping up extra-large files into smaller bits. Hadoop is designed to be used with massive files, so having the ability to split a big file across several processors is important. We also can use Hadoop to work with compressed data—it natively supports compression formats such as .gz, .bz2, and .snappy (as shown in table 7.1).

Table 7.1. A comparison of compression formats available for use out of the box with Hadoop

  Format   Description                                                                     Use case                                                      Hadoop codec
  .bz2     Slow compression, but shrinks files more than older algorithms                 Semi-long-term storage, file transfer between people          BZip2Codec
  .gz      Fast, well-supported compression algorithm                                     Transfer of files between processes (such as Hadoop steps)    GzipCodec
  .snappy  New, fast compression algorithm; less support than .gz but better compression  Transfer of files between processes (such as Hadoop steps)    SnappyCodec

Second, both of our scripts print their output to the terminal. Again, this is because of how Hadoop is oriented. Hadoop will capture what’s printed to stdout and use that later on in the workflow. This creates an additional step on top of our standard workflow and can cause us to have to convert strings into Python objects.

Lastly, both scripts start with the Python shebang. This line tells the computer to use these scripts as executables. Hadoop will try to call these scripts using the program at the designated shebang path, in this case, Python.

If you haven’t already tried, replacing the mapper and reducer from before with our two scripts will let us run our MapReduce job. This is shown in the following listing.

Listing 7.3. Running a Streaming MapReduce with Python
$HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
    -input myInputDirs \
    -output myOutputDir2 \
    -file ./wc_mapper.py \
    -mapper ./wc_mapper.py \
    -file ./wc_reducer.py \
    -reducer ./wc_reducer.py

The output of this command will be in myOutputDir2 inside a file called results. The result should be the same as the second number that the command we called in figure 7.6 returns.

7.3.2. Scoring words using Hadoop Streaming

Let’s turn back to our example of finding the counts of long words. For Hadoop, we’ll focus only on the words by Florence and the Machine. (We’ll save the texts of Florence Nightingale for Spark later in this chapter.) To get the counts of specific words with Hadoop—instead of simply an overall count of words—we’ll have to modify our mapper and our reducer. Before we jump right into the code, let’s take a look at how this process will compare with our word counting example. I’ve diagrammed both processes, step by step, in figure 7.7.

Figure 7.7. Counting words and getting the frequencies of a subset of words have similar forms but require different mappers and reducers.

With our word count mapper, we had to extract the words from the document and print them to the terminal. We’ll do something very similar for our long word frequency example; however, we’ll want to add a check to ensure we’re only printing the long words. Note that this behavior—doing our filtering and breaking our documents into sequences of words—is very similar to how the workflow might execute in Python. As we iterated through the sequence, both the transformation and the filter would lazily be called on the lines of a document.
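In pure Python, that lazy transform-then-filter pipeline might look like the following sketch (the sample lines are made up for illustration):

```python
# Lazily split lines into words, then keep only the long ones.
lines = ["Florence and the Machine performed",
         "a rousing memorable encore"]

words = (word for line in lines for word in line.split())   # map-like step
long_words = (word for word in words if len(word) > 6)      # filter step

result = list(long_words)
print(result)  # ['Florence', 'Machine', 'performed', 'rousing', 'memorable']
```

Nothing is split or tested until we consume the final generator, which mirrors how the mapper streams through its input.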

For our word count reducer, we had a counter that we incremented every time we saw a word. This time, we’ll need more complex behavior. Luckily, we already have this behavior on hand. We’ve implemented a frequency reduction several times and can reuse that reduction code here. Let’s modify our reducer from listing 7.2 so it uses the make_counts function we first wrote back in chapter 5. Our mapper will look like listing 7.4, and our reducer will look like listing 7.5.

Listing 7.4. Hadoop mapper script to get and filter words
#!/usr/bin/env python3
import sys

for line in sys.stdin:
  for word in line.split():
    if len(word)>6: print(word)
Listing 7.5. Hadoop reducer script to accumulate counts
#!/usr/bin/env python3
import sys
from functools import reduce

def make_counts(acc, nxt):                      #1
    acc[nxt] = acc.get(nxt, 0) + 1
    return acc

words = (line.strip() for line in sys.stdin)    #2
counts = reduce(make_counts, words, {})
for word in counts:
    print(word, counts[word])

The output of our MapReduce job will be a single file with a sequence of words and their counts in it. The results should look like figure 7.8. We also should see some log text printed to the screen. We can quickly check to see that all the words are longer than six letters, just as we’d hoped. In chapter 8, we’ll explore Hadoop in more depth and tackle scenarios beyond word filtering and counting.

Figure 7.8. The output of our MapReduce job is a sequence of words.

7.4. Spark for interactive workflows

So far in this chapter, we’ve been talking about the Hadoop framework for working with big datasets. In this section, we’ll turn our attention to another popular framework for big dataset processing: Apache Spark. Spark is an analytics-oriented data processing framework designed to take advantage of the higher-RAM compute clusters that are now available.

Spark offers several other advantages, from the perspective of most Python programmers:

  • Spark has a direct Python interface—PySpark.
  • Spark can query SQL databases directly.
  • Spark has a DataFrame API—a rows-and-columns data structure that should feel familiar to Python programmers with experience in pandas.

7.4.1. Big datasets in memory with Spark

As we touched on briefly in the introduction to section 7.3, Spark processes data in memory on the distributed network instead of storing intermediate data to a filesystem. This can lead to up to 100 times improvements in processing speed versus Hadoop on some workflows, to say nothing of the difference between a Spark task and a linear Python task. The caveat to this is that Spark requires machines with greater memory capacity.

Choosing Spark versus Hadoop

Because Spark makes full use of a cluster’s RAM, we should favor Spark over Hadoop when we

  • are processing streaming data
  • need to get the task completed nearly instantaneously
  • are willing to pay for high-RAM compute clusters

Spark’s use of in-memory processing means we don’t necessarily have to save the data anywhere. This makes Spark ideal for streaming data—one aspect of the conventional definition of big data. We should reserve Hadoop for batch processing.

Because Spark can be so much faster than Hadoop, we should use Spark when we need near instant processing of data. Of course, this is only really feasible up to a certain point. Eventually the data will be too big to process immediately, unless we throw an unjustifiable amount of resources at the problem.

That situation is directly tied to the last factor in our list: if money is of no concern, we can freely choose Spark. Because Spark runs faster when it has access to many high-RAM machines, if we can afford to assemble a cluster of high-RAM machines, then Spark is the obvious choice. Hadoop is designed to make the most out of low-cost computing clusters.

As you can imagine, the answer to which distributed computing framework to use is not always clear cut; however, the map and reduce style we’ve developed throughout this book will serve you well working with big datasets in either one.

7.4.2. PySpark for mixing Python and Spark

Spark was designed for data analytics, and one way we can see that is in the Spark design team’s commitment to developing APIs for both Python and R. Like Hadoop, Spark is written to run on the Java Virtual Machine (JVM), which would normally make it hard for scientists, researchers, data scientists, or business analysts, who most often use languages like Python, R, and Matlab. We saw this problem in Hadoop. We were not able to interact directly with Hadoop through Python. Instead, we had to call our Python functions through Hadoop Streaming, and we had to use somewhat clumsy workarounds to work with Python data beyond strings. When we’re working with Spark, we can use its Python API, PySpark, to get around that issue.

With PySpark, we can call Spark’s Scala methods through Python just like we would a normal Python library, by importing the modules and functions we need. For example, we’ll often be using the SparkConf and SparkContext functions to set up our Spark jobs. We’ll talk more about these functions in chapter 9 when we dive into Spark. For now, we can work with them in Python by importing them from PySpark, as shown in the following listing.

Listing 7.6. Importing from Spark into Python
from pyspark import SparkConf, SparkContext

config = SparkConf().setAppName("MyApp")
context = SparkContext(conf=config)

We’ll see this in full force later in this chapter when we dive into PySpark in section 7.5.

7.4.3. Enterprise data analytics using Spark SQL

A significant benefit of Spark is its support for SQL databases through Spark SQL. Built on top of the widespread Java Database Connectivity—which you’ll often see abbreviated as JDBC—Spark SQL makes it easy to work with structured data. This is especially important if we’re working with enterprise data. Enterprise data refers to common business data—HR or employee data, financial or payroll data, and sales order or operational data—and the most common means of storing that data—relational databases, especially Oracle DB or Microsoft SQL Server.

Because Spark is designed first and foremost for Scala, the Spark SQL Python API is not compliant with the PEP 249 specification for Python database connections. Nonetheless, its core functionality makes intuitive sense, and we can use it with any database that has a JDBC connection, including popular open source databases such as MySQL, PostgreSQL, and MariaDB. In its simplest form, querying databases with Spark is as easy as passing our SQL query into the .sql method of a SparkSession object.

7.4.4. Columns of data with Spark DataFrame

When we’ve acquired data using Spark, our data will end up in what’s known as a DataFrame, a Spark class that we can think of as being equivalent to a SQL table or a pandas.DataFrame. Unlike either a SQL table or a DataFrame from pandas though, the DataFrame in Spark is optimized for distributed computing workflows.

Like SQL and pandas, Spark DataFrames are organized around columns with names. This is helpful if we want to make conditional subsets of our data for machine learning or statistical summary. For example, if we wanted to get the average purchase size of customers with more than 20 orders, we could use the DataFrame .filter and .agg methods, combined with Spark’s knowledge of our column names, to get that information. We can see this example in figure 7.9.

Figure 7.9. Spark DataFrames have a .filter method that we can use to quickly take subsets of our big datasets.

DataFrame’s version of .filter has a use similar to that of the filter function we saw in chapters 4–6. In fact, a lot of the map and reduce-oriented data processing functions make their way into the pyspark.sql.functions library, including zip as arrays_zip. The DataFrame API is a more general API that provides a convenience layer on top of the core Spark data object: the RDD or Resilient Distributed Dataset. RDDs are the Hadoop-like abstraction that powers Spark’s in-memory distributed processing, and the PySpark RDD API provides access to all the functions we’ve become familiar with, including map, reduce, filter, and zip. We’ll see an example of these functions in the next section.


7.5. Document word scores in Spark

Now that we’ve covered the fundamentals of Spark, let’s dive into some code. In the previous example in this chapter, we found all the words with more than six letters from the songs of the band Florence and the Machine. This served as evidence of their lyrical sophistication and also helped introduce us to Hadoop. In this section, we’ll complete the comparison between Florence and the Machine and Florence Nightingale by running the same process on a document by Florence Nightingale in Spark.

As in section 7.3, we’ll break this process down into three areas:

  1. B pmrpea
  2. T uerercd
  3. Cginnnu pro vseg jn Spark

Our mapper will be responsible for taking the files and turning them into sequences of words with more than six characters, and the reducer will be responsible for counting up the words we find. Running the code in Spark parallelizes the workflow for us. We can see this process laid out in figure 7.10.

Figure 7.10. Counting up the big words used by Florence Nightingale involves three steps in Spark.

7.5.1. Setting up Spark

Before we can jump into our Spark job, let’s take a second to set up Spark. Unlike setting up Hadoop—which may have been a hairy process if you weren’t familiar with Java—installing Spark is pretty straightforward. Go to https://spark.apache.org/downloads.html and follow the download instructions on the page, and that’s it! You’ve got everything you need to use Spark.

Spark clusters

Just like we didn’t do a deep dive into setting up a Hadoop cluster in this book, we also won’t do a deep dive into setting up a Spark cluster—though we will show you how to provision cloud resources for these technologies in chapter 12. If you’re interested in a full Spark book after the two and a half chapters we’ll spend on it here, Manning has several books dedicated to Spark, including Spark in Action (2016) and Spark GraphX in Action (2016).

Now that we have Spark installed, we can run Spark jobs and interact with Spark using PySpark. The easiest way to take either of these actions is through the utilities that Spark provides. Just like Hadoop provided us the Hadoop Streaming utility, Spark provides two utilities: one that sets up an interactive Python shell called pyspark and one that allows us to run Spark jobs—similar to Hadoop Streaming—called spark-submit.

Exploring big data interactively

One of the reasons why Spark is so popular is that it allows us to interactively explore big data through a PySpark shell REPL. This more playful style of development, where we iterate through our problem line by line, is more familiar to a lot of data scientists than writing out extended chunks of code all at once. It also allows us to see our intermediate results or consult the Python documentation as we develop.

We kick this process off by running the utility pyspark. That process brings up a screen—like figure 7.11—where we can enter Python commands. Right off the bat, we have access to SparkContext and SparkSession instances as sc and spark. (The pyspark utility imports them for us; when we write our own Spark scripts, we’ll need to import them ourselves.) The sc variable has methods for building the Resilient Distributed Dataset instances we mentioned in section 7.4.4. We can use the spark variable to bring data into DataFrames—the parallel-optimized tabular data abstractions we also mentioned in section 7.4.4. If we run Python’s help command on these variables in the interactive session, we’ll see a list of methods available for each one. We’ll go into some of them in this book, but a full list of methods for each variable is available in the online documentation.

Figure 7.11. Spark provides an interactive terminal where we can run Python commands with all the power of a Spark cluster behind them.
Running jobs

When we’re not working with Spark interactively, we’ll work with it by running Spark jobs. This is a similar process to how we ran MapReduce jobs in Hadoop. We write some code, and then we pass it as an argument to a utility. In the case of Spark, we’ll use the spark-submit utility and we’ll pass it a single Python script.

In that Python script, we can create instances of any of the Spark objects we need. We’ll have access to them once we import the pyspark module. Let’s take a look at this method of working with Spark in action.

7.5.2. MapReduce Spark jobs with spark-submit

Turning our attention back to the question at hand—the lexical excellence of Florence Nightingale—we’ll break our work into three steps:

  1. Turning a document into a sequence of words
  2. Filtering those words down to those having more than six characters
  3. Gathering counts of the rest

Mndo vw rodwke hrothug dzrj percoss jn Hadoop, ow edchicpamslo avrh 1, trinnug z emdountc kjnr z sequence of rdswo, nys gcor 2, tginerifl kyr rxy almls rwdso, rtoetehg jn vrd maerpp. Mjry Spark, rob tereh stsep wjff fzf adstn praat.

To accomplish this process in Spark, the first thing we’ll want to do is bring our data into an RDD—Spark’s powerful parallel data structure. This is a good starting point for most work in Spark. To do that, we’ll need a SparkContext, so we’ll have to instantiate a SparkContext instance. Then we can use the SparkContext method .textFile to read in text files from our filesystem. This method creates an RDD with the lines of those documents as elements.

We can turn this dataset into a sequence of words by calling the .flatMap method of the RDD. The .flatMap method is like map but results in a flat sequence, not a nested sequence. .flatMap also returns an RDD, so we can use the .filter method of the RDD to filter down to only the large words, and then the .countByValue method of the resulting RDD to gather our counts. (.countByValue collects the counts back to the driver as a plain Python dictionary, which is why we can loop over its .items() afterward.) We can see this whole process in just a few lines in the following listing.

Listing 7.7. Counting words of more than six letters in Spark
#! /usr/bin/env python3
import re
from pyspark import SparkContext

if __name__ == "__main__":                           #1
  sc = SparkContext(appName="Nightingale")           #2
  PAT = re.compile(r'[-./:\s\xa0]+')                 #3
  fp = "/path/to/florence/nightingale/*"
  text_files = sc.textFile(fp)                       #4
  xs = (text_files.flatMap(lambda x: PAT.split(x))   #5
                  .filter(lambda x: len(x) > 6)      #6
                  .countByValue())                   #7

  for k, v in xs.items():                            #8
    print("{:<30}{}".format(k.encode("ascii", "ignore"), v))

When you're done running the code, you should see a long list of large words output. If all's right, the words should all be more than six letters in length. There will also be a bunch of output related to the Spark job that was run to process this code. The final result will look something like the following listing.

Listing 7.8. Code output from Spark, counting up large words
hurting                       10
Englishman                    1
Conceit                       1
contain                       1
deficient                     1
especially                    9
weekend                       2
pretend                       1
weaknesses,                   1
servants                      1
suppose                       2
forever                       4
stagnant                      2

Unlike Hadoop, where we're free to print our results to the terminal in order to write to the output file, with Spark, we'll typically want to write our results directly to a file. That way, we won't have to dig them out of a mass of terminal messages. In the next three chapters, we'll touch on some more best practices for using Hadoop and Spark by working through more in-depth examples.
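For example, the loop at the end of listing 7.7 could write its results to a file instead of printing them. Here is a minimal sketch: the counts dict below stands in for the dictionary that .countByValue() returns, with invented sample values, and the output file name is made up. (For results still in RDD form, Spark's .saveAsTextFile method writes them out in parallel instead.)

```python
# `counts` stands in for the dict that .countByValue() returns;
# the words and values here are invented sample data.
counts = {"especially": 9, "hurting": 10, "forever": 4}

# Write one "word<padding>count" line per entry, as in listing 7.8,
# to a (hypothetical) output file instead of the terminal.
with open("word_counts.txt", "w") as f:
    for word, n in sorted(counts.items()):
        f.write("{:<30}{}\n".format(word, n))
```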


7.6. Exercises

7.6.1. Hadoop streaming scripts

What are the scripts called that we write for a Hadoop Streaming job? (Choose one.)

  • Mapper and Reducer
  • Applier and Accumulator
  • Counter and Folder

7.6.2. Spark interface

When we interact with Spark, we'll do it through PySpark, which is a Python wrapper around the Spark code written in which programming language? (Choose one.)

  • Clojure
  • Scala
  • Java
  • Kotlin
  • Groovy

7.6.3. RDDs

Spark's innovations center around a data structure called an RDD. What does RDD stand for? (Choose one.)

  • Resilient Distributed Dataset
  • Reliable Defined Data
  • Reduceable Durable Definition

7.6.4. Passing data between steps

With Hadoop Streaming, we need to manually ensure that the data can pass between the map and reduce steps. What do we need to call at the end of each step? (Choose one.)

  • return
  • yield
  • print
  • pass

Summary

  • Hadoop is a Java framework that we can use to run code on data across distributed clusters.
  • When writing Python for Hadoop MapReduce jobs, we write one script for the mapper and one for the reducer.
  • Both the Python mapper script and the Python reducer script need to print their results to the console.
  • In Spark, we can write a single Python script that handles both the map and reduce portions of our problem.
  • We interact with Spark through Python using the pyspark API.
  • We can work with Spark in interactive mode or by running jobs—this gives us flexibility in our development workflow.
  • Spark has two high-performance data structures: RDDs, which are excellent for any type of data, and DataFrames, which are optimized for tabular data.