
Single-Shot View Synthesis using a Multiplexed Light Field Camera

Shamus Li

Electrical Engineering and Computer Sciences, University of California, Berkeley

Technical Report No. UCB/EECS-2024-192

/Pubs/TechRpts/2024/EECS-2024-192.html

November 13, 2024

Copyright © 2024, by the author(s).

All rights reserved.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission.

Single-Shot View Synthesis using a Multiplexed Light Field Camera

by Shamus Li

Research Project

Submitted to the Department of Electrical Engineering and Computer Sciences, University of California at Berkeley, in partial satisfaction of the requirements for the degree of Master of Science, Plan II.

Approval for the Report and Comprehensive Examination:

Committee

Professor Laura Waller, Research Advisor

11/13/24

(Date)


Professor Ren Ng, Second Reader

11/13/2024

(Date)

Single-Shot View Synthesis using a Multiplexed Light Field Camera

by

Shamus Li

A thesis submitted in partial satisfaction of the

requirements for the degree of

Master of Science

in

Electrical Engineering and Computer Sciences

in the

Graduate Division

of the

University of California, Berkeley

Committee in charge:

Professor Laura Waller, Chair

Professor Ren Ng

Spring 2024

Single-Shot View Synthesis using a Multiplexed Light Field Camera

Copyright 2024

by

Shamus Li


Abstract

Single-Shot View Synthesis using a Multiplexed Light Field Camera

by

Shamus Li

Master of Science in Electrical Engineering and Computer Sciences

University of California, Berkeley

Professor Laura Waller, Chair

Recent advancements in imaging technologies have shifted from traditional 2D image capture to more sophisticated methods that aim to capture additional dimensions (spatial, temporal, etc.) of a given scene. We present an approach to single-shot view synthesis using a multiplexed light field camera, where sub-images are designed to overlap with each other to achieve higher spatial and temporal resolution compared to conventional light field imaging. We use a single capture from our optical system to achieve novel view synthesis.

Our system captures light fields through a lens array that intentionally overlaps views, enhancing both resolution and depth of field. This multiplexing approach is complemented by a calibration process that aligns virtual camera poses, facilitating accurate reconstruction without repeated pose estimation. We modify the forward model of Gaussian Splatting to implicitly represent and reconstruct the light field from the multiplexed measurements.

We present experimental results that demonstrate the efficacy of our system in generating wide-angle, photorealistic 3D reconstructions of small scenes, both in simulation and in the real world, and discuss extensions to the physical system. We achieve an optical field of view of more than 70 degrees, and are able to accurately reconstruct more than 120 degrees with a single shot. Our physical system achieves 1.9 rays/pixel of multiplexing, a 90% increase in pixel information over a light field imaging system with no overlap, and we demonstrate higher-quality reconstructions on synthetic scenes with up to 2.5 rays/pixel of multiplexing when compared to both traditional light field images as well as monocular Gaussian Splatting. Our method represents a potential step forward in the practical application of view synthesis, particularly in dynamic environments with few cameras.


To my family.


Contents

Contents
List of Figures

1 Introduction
1.1 Related Work

2 Building a Multiplexed Light Field Camera
2.1 Optical Design
2.2 Methods

3 Novel View Synthesis for Multiplexing
3.1 Camera Calibration
3.2 Gaussian Splatting Optimization

4 Experimental Results
4.1 Simulation Experiments
4.2 Real-World Experiments

5 Conclusion

Bibliography


List of Figures

1.1 We present an imaging system for single-shot view synthesis using a multiplexed light field camera. The captured image on the sensor consists of multiple overlapping views. The captured data is then processed through our view synthesis pipeline to generate novel views of the scene. The system is calibrated by capturing images through individual lenslets, allowing estimation of camera poses using structure-from-motion techniques.

2.1 Example of captured images with optical crosstalk. Optical crosstalk occurs when light intended for one section of the sensor inadvertently reaches the area designated for another lens, causing undesired image artifacts.

2.2 The aperture array mitigates optical crosstalk by blocking stray light between lenslets; increases the depth of field by limiting the effective aperture size for each sub-lens; and controls the amount of overlap. While it does block a significant amount of light, at the mesoscale this is not an issue.

2.3 Each lenslet in the array functions as an individual camera, capturing a slightly different, overlapping perspective of the scene. This setup is analogous to an array of cameras that collectively capture a comprehensive light field.

3.1 COLMAP reconstruction result showing the estimated camera poses and sparse 3D point cloud from calibration images. The calibration images shown are a subset of the 42 images used. The sparse point cloud indicates the rough 3D structure of the scene.

3.2 (a-b) COLMAP failure modes. Due to the symmetry of this object, there exists some ambiguity in the location of the camera views, leading to point clouds that are flattened or compressed. (c) shows a successful reconstruction.

4.1 Performance comparison between single-lens and multi-lens cameras in simulation on the Lego scene. The multi-lens camera consistently outperforms the single-lens camera, achieving higher PSNR values, particularly around 2.0 rays per pixel.


4.2 Synthetic reconstruction results with different amounts of multiplexing: (a) and (b) show the raw composite image and the reconstruction result at 1.5 rays per pixel, respectively. (c) and (d) show the raw composite image and the reconstruction result at 2.0 rays per pixel. The images demonstrate that higher levels of multiplexing lead to increased artifacts in the reconstructed scenes.

4.3 (a) Raw multiplexed image captured by our light field camera system. The image shows multiple overlapping views of the scene, each slightly shifted in perspective. (b-c) Gaussian Splatting reconstruction results at 0 degrees and 60 degrees from the optical axis, respectively. (d) Volumetric visualization of the Gaussians at full opacity and 10% size.

4.4 Test view renderings of the real-world reconstruction with multiplexing: (a) Results with high multiplexing, showing some smearing due to overlapping perspectives. (b) Double rendering with less multiplexing, indicating multiple object instances. (c) Out-of-view rendering, where parts of the scene appear outside the expected field of view.


Acknowledgments

I would first like to thank Kristina Monakhova and Kyrollos Yanny for helping a curious freshman discover the field of computational imaging for the first time. It was those weekly meetings while I was stuck in my room that helped me decide that I wanted to pursue an advanced degree. I would like to thank the whole of Waller Lab for sharing lively discussions with me and offering me your wisdom about everything from research to the outdoors. This work wouldn't have been possible without support from Sara Fridovich-Keil, Ruiming Cao, and Kevin Zhou, whose expertise was instrumental for achieving my research goals. Whenever I felt lost, asking them has often been the right answer. In addition, some of the work presented here was done jointly with Vi Tran, who is a fantastic person to work alongside, and the rigour of their research is much appreciated. I would like to thank Professor Ren Ng for his feedback on this report and for being an inspiration. Lastly, I would like to thank my advisor Professor Laura Waller for her guidance both in shaping my experiments and in navigating a career in academia. I am extremely grateful to have had such great mentorship throughout my time at Berkeley.


Chapter 1

Introduction

The evolution of imaging technologies, from traditional film-based cameras to modern digital sensors, has brought about significant advances in how we capture and interpret the world around us. Conventionally, cameras have been designed to capture two-dimensional images, focusing on the production of sharp, well-exposed photographs that represent a single perspective of a scene. However, the dimensionality of light extends far beyond the confines of 2D image planes. Light interacting with the environment carries information not only about intensity, but also about direction, wavelength, and time. A parameterization of this is the plenoptic function P(θ, φ, λ, t, Vx, Vy, Vz), where θ and φ give the direction of the light ray, λ its wavelength, t the time, and (Vx, Vy, Vz) its 3D origin; the plenoptic function represents every possible image from every viewpoint in a given region of space-time [2]. Capturing this information on a sensor therefore requires mapping the higher-dimensional function onto a 2D grid, leading to sacrifices in either spatial or temporal resolution. The primary purpose of this work is to design an optical coding that limits these trade-offs as much as possible.

The focus of my work is on rendering images from more viewpoints than were actually captured, a technique called novel view synthesis. This is achieved by not only capturing the 2D intensity of light that hits each pixel, but also measuring the amount of light travelling along each ray that intersects the sensor. We can model this ray in 5D by removing time and wavelength from the plenoptic function, or in 4D as a parameterization of a line that intersects two planes [15].
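To make the 4D form concrete (a standard construction in the style of [15]; the plane depths z_1 and z_2 below are illustrative symbols, not notation from this report): a ray is identified by its intersections with two parallel reference planes, and the light field assigns a radiance to each such pair of intersections,

    \[
    L = L(u, v, s, t), \qquad
    \mathbf{r}(\tau) = (1 - \tau)\,(u, v, z_1) + \tau\,(s, t, z_2),
    \]

where the ray crosses the plane at depth z_1 at coordinates (u, v) and the plane at depth z_2 at (s, t).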

Traditionally, this representation, known as a light field, was explicitly represented and required a dense grid of views to be captured. Recent techniques such as Neural Radiance Fields (NeRF) have revolutionized the field by learning implicit scene representations that enable high-quality image synthesis from novel viewpoints [14]. NeRF and its derivatives can reconstruct a 3D scene from a relatively sparse set of input images captured from different viewpoints. However, the capture of these views typically takes a long time and assumes a static scene, heavily limiting their applicability in dynamic environments that change over time.

Figure 1.1: We present an imaging system for single-shot view synthesis using a multiplexed light field camera. The captured image on the sensor consists of multiple overlapping views. The captured data is then processed through our view synthesis pipeline to generate novel views of the scene. The system is calibrated by capturing images through individual lenslets, allowing estimation of camera poses using structure-from-motion techniques.

Light field cameras, which simultaneously record multiple perspectives in one sensor measurement, offer a potential solution to this problem. By capturing both spatial and angular information of light rays, light field cameras enable post-capture refocusing, depth estimation, and view synthesis. However, traditional light field cameras, from camera arrays to plenoptic cameras, face a fundamental trade-off between spatial resolution and angular resolution. Capturing more angular information typically results in a decrease in spatial resolution and vice versa.

This work introduces a novel approach to single-shot view synthesis using a multiplexed light field camera. By intentionally overlapping the views captured by a lens array, it is possible to achieve a higher space-bandwidth product than would be possible with non-overlapping monocular views. This is ideal for highly dynamic scenes at the mesoscale, making the system limited only by the capabilities of the camera sensor. In addition, by fixing the optics, we only need to calibrate the camera parameters once per camera, skipping a costly and potentially inaccurate pose estimation step in future reconstructions. We modify Gaussian Splatting to handle training from a single multiplexed image: instead of rendering one image for each training pass, we render one image from each viewpoint in the camera and combine them to create the multiplexed image. We calibrate our camera using a traditional structure-from-motion pipeline. We demonstrate the efficacy of our system through both simulation and real-world experiments. We achieve an optical field of view of more than 70 degrees, and are able to accurately reconstruct more than 120 degrees with a single shot. Our physical system achieves 1.9 rays/pixel of multiplexing, a 90% increase in pixel information over a light field imaging system with no overlap, and we demonstrate higher-quality reconstructions on synthetic scenes with up to 2.5 rays/pixel of multiplexing when compared to both traditional light field images as well as monocular Gaussian Splatting.
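The modified forward model can be summarized in a few lines. The sketch below is illustrative only, not the report's implementation: render_view, multiplexed_forward, and training_loss are hypothetical names, render_view stands in for a differentiable renderer (e.g., a Gaussian Splatting rasterizer), and the per-lenslet poses and sub-image placements are assumed to come from the one-time calibration. The key change is that the loss compares the sum of per-lenslet renders against the single multiplexed capture:

    import numpy as np

    def render_view(scene, pose, hw):
        # Stand-in for a differentiable renderer (e.g., a Gaussian Splatting
        # rasterizer); a real implementation would rasterize `scene` from `pose`.
        h, w = hw
        return np.zeros((h, w, 3))

    def multiplexed_forward(scene, calib, sensor_hw):
        """Simulate one multiplexed sensor image from all lenslet views.

        `calib` holds one (pose, top_left_pixel) pair per lenslet from the
        one-time calibration. Overlapping sub-images simply add, which is
        how the multiplexing is modeled here.
        """
        H, W = sensor_hw
        sensor = np.zeros((H, W, 3))
        sub_hw = (H // 2, W // 2)  # assumed sub-image size, for illustration
        for pose, (r, c) in calib:
            view = render_view(scene, pose, sub_hw)
            sensor[r:r + sub_hw[0], c:c + sub_hw[1]] += view  # overlaps add
        return sensor

    def training_loss(scene, calib, captured):
        # Single-shot objective: MSE between the simulated multiplexed image
        # and the one captured measurement (no per-view ground truth needed).
        pred = multiplexed_forward(scene, calib, captured.shape[:2])
        return np.mean((pred - captured) ** 2)

    # Tiny demo: two half-overlapping lenslets on a 32x64 sensor.
    demo_calib = [("pose_a", (0, 0)), ("pose_b", (0, 16))]
    print(training_loss(None, demo_calib, np.zeros((32, 64, 3))))

Because the composite is a sum of per-view renders, gradients from the single-image loss flow back to every viewpoint at once, which is what makes training from one multiplexed capture possible.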

1.1 Related Work

Light Field Imaging

Light field cameras passively capture 4D space-angle information in a single shot, enabling 3D reconstructions, amongst other applications. Light field imaging has been a crucial research area in computational photography and computer vision, focusing on capturing the full dimensionality of light rays in a scene. The plenoptic function, introduced by Adelson and Bergen, parameterizes light rays by their position, direction, wavelength, and time, encapsulating the entirety of visual information available in a scene [2]. Unlike traditional imaging techniques that capture only the intensity of light at each point, light field imaging captures the intensity of light rays as a function of space and angle. This additional information enables computational capabilities not possible with conventional cameras.

The core component of a light field camera is a microlens array placed in front of the image sensor. Each microlens captures light rays from different directions and focuses them onto the sensor, allowing each pixel to receive light information from a specific direction. The captured light field data can be represented as a four-dimensional function, L(u, v, s, t), where (u, v) denote spatial coordinates and (s, t) represent angular coordinates of the light rays. Light field cameras can be modeled as an array of cameras, each capturing a slightly different perspective of the scene. Consequently, the captured data comprises a series of sub-images, each representing a slightly different viewpoint. This multi-view data enables refocusing and depth-of-field changes, disparity and depth calculation, as well as 3D reconstruction [8].
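As a toy illustration of this array-of-cameras interpretation (a minimal NumPy sketch with made-up dimensions, not code from this report): in a lenslet-based capture, the lenslet index gives the spatial coordinates (u, v) and the pixel position under each lenslet gives the angular coordinates (s, t), so fixing one angular index across all lenslets extracts one sub-aperture view:

    import numpy as np

    # Toy raw capture: a 10x10 grid of lenslets, each covering 8x8 sensor
    # pixels, so each pixel under a lenslet samples one direction (s, t).
    lenslets, px = 10, 8
    raw = np.random.rand(lenslets * px, lenslets * px)

    # Reshape to axes (u, s, v, t): lenslet indices (u, v) are spatial,
    # pixel-under-lenslet indices (s, t) are angular.
    lf = raw.reshape(lenslets, px, lenslets, px)

    # A sub-aperture image: fix one direction, vary over all lenslets.
    s, t = 3, 4
    view = lf[:, s, :, t]  # one 10x10 perspective of the scene
    print(view.shape)      # (10, 10)

Sweeping (s, t) over all pixel offsets yields the full grid of perspectives that downstream refocusing and depth algorithms consume.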

Implementations of light field cameras, such as the plenoptic camera proposed by Adelson and Wang [1] and notably by Ng et al. [15], use a microlens array placed in front of an image sensor to capture multiple views of a scene from slightly different perspectives in a single shot. An alternative light field camera design is a camera array, which allows for new viewpoints to be generated by interpolating between captured images [27]. These works demonstrated the concept of interpreting 2D images as slices of a 4D light field function, facilitating efficient creation and display of new views without requiring depth information or feature matching.

However, traditional light field cameras face significant trade-offs between spatial and angular resolutions. Capturing more angular information typically results in a decrease in spatial resolution and vice versa. This trade-off limits the applicability of traditional light field cameras in scenarios requiring high-resolution imaging and wide fields of view. Subsequent works have aimed to improve the spatial and angular resolution trade-offs inherent in these systems. Georgiev and Intwala proposed a system using a hexagonal array of twenty larger lenslets in order to reduce gaps between lenslets [5]; Lumsdaine and Georgiev introduced the concept of the focused plenoptic camera, which improves the spatial resolution by simply adjusting the placement of the microlens array relative to the sensor [10]; and Perwaß and Wietzke presented a 3D camera which achieves improved depth estimation with a multi-focal microlens array. While these methods improve upon traditional designs, they do not fully overcome the inherent trade-offs.

Lenslet array-based capture schemes have also been widely used in microscopy for 3D depth imaging [17, 22]. In particular, Fourier Light Field Microscopy (FLFM) has emerged as a powerful technique in computational microscopy. FLFM operates by placing a microlens array at the Fourier plane of the imaging system, which creates a three-dimensional shift-invariant point spread function (PSF), enabling the reconstruction of volumetric information from a single two-dimensional (2D) measurement [6]. This approach has been further refined with techniques like Fourier DiffuserScope, which introduces a diffuser at the Fourier plane to encode additional spatial information and improve reconstruction quality [9]. Our work draws inspiration from FLFM but extends the concept to mesoscale imaging of objects in the millimeter to centimeter range.

The main idea of light field imaging is to encode additional angular information into the captured data, which can then enable synthetic refocusing, volume reconstruction, or neural reconstruction from a single sensor measurement. We introduce an optical system physically similar to that proposed by Georgiev and Intwala, but with a key difference: our system is designed to intentionally overlap the images from each lenslet onto the sensor, a new idea. By overlapping the views captured by the lens array, we effectively increase the amount of information (space-bandwidth product) captured without sacrificing spatial resolution, enabling higher-resolution and wider field-of-view imaging.

Novel View Synthesis

Novel view synthesis requires recovery of a 3D representation of an object or scene from 2D input images. Existing methods often utilize point clouds [12], voxel grids [13], or signed distance functions [16] to represent the target. These approaches typically require a large set of training images and corresponding camera pose estimates to achieve accurate results. Practical applications of high-quality 3D reconstructions include generating 3D models for assets in animation, creating training environments for robotics simulations, and enhancing biological analysis.

Neural Radiance Fields (NeRFs) have emerged as a powerful technique for novel view synthesis [14]. NeRFs model appearance and geometry using radiance fields that map spatial coordinates and view direction to density and color values. This approach uses a dense set of images to train the network, which learns to predict the color and density of points in 3D space, allowing for high-quality view synthesis from novel viewpoints. Research has demonstrated that a small multi-layer perceptron (MLP) with positionally-encoded input coordinates can accurately represent a target scene [23]. Through standard volume rendering procedures, rays can be sampled, evaluated, and converted to image pixels, with the model optimizing the mean squared error between the outputted RGB values and the training images. The radiance field can be rendered as images, depth maps, or converted to a mesh for downstream applications. NeRFs have demonstrated impressive results in capturing fine details and complex lighting effects, but they assume a stationary and unchanging target scene across all training images, rely on accurate camera pose estimates from structure-from-motion algorithms like COLMAP [20], and are slow and computationally expensive to train, taking hours for a single scene [14].
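For reference, the standard volume rendering procedure mentioned above composites N samples along each camera ray r as in [14]:

    \[
    \hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i \left(1 - e^{-\sigma_i \delta_i}\right) \mathbf{c}_i,
    \qquad
    T_i = \exp\Big(-\sum_{j=1}^{i-1} \sigma_j \delta_j\Big),
    \]

where σ_i and c_i are the density and color the MLP predicts at the i-th sample, δ_i is the spacing between adjacent samples, and training minimizes the mean squared error between Ĉ(r) and the corresponding training pixel.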

Significant optimizations have improved the efficiency of NeRF-based methods. For instance, techniques have dramatically increased training speed [24], and some approaches, such as Plenoxels, enable faster training without neural networks [19]. PixelNeRF and similar works suggest that training with a few input images might be feasible [28, 25]. However, these few-image input methods generally infer the missing views in the scene. Our system captures a larger area of the scene and encodes it into a single image, ensuring the training images more accurately represent the scene and allowing for real-time data capture.

Several extensions and improvements to NeRF have been proposed to address its limitations. D-NeRF adapts NeRF for dynamic scenes by incorporating temporal information, allowing for the synthesis of scenes that change over time [18]. MonoNeRF attempts to generalize NeRF to monocular videos, enabling view synthesis without precise camera poses [4]. However, these methods still face challenges in terms of training time and computational resources.

An alternative approach to view synthesis is Gaussian Splatting, which leverages the inherent sparsity in 3D scenes by representing scenes using 3D Gaussian functions ("Gaussians") optimized for position, orientation, size, and color [7]. This method can render high-quality images in real time while preserving image reconstruction quality, making it a state-of-the-art technique for novel view synthesis.
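Concretely, each primitive in [7] is an anisotropic 3D Gaussian whose covariance is factored through a rotation R and a diagonal scale S so that it remains positive semi-definite during optimization:

    \[
    G(\mathbf{x}) = \exp\left(-\tfrac{1}{2}(\mathbf{x} - \boldsymbol{\mu})^{\top} \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu})\right),
    \qquad
    \Sigma = R\, S\, S^{\top} R^{\top},
    \]

where each Gaussian additionally carries an opacity and a view-dependent color, and rendering alpha-composites the projected 2D Gaussians in depth order.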

We adapt Gaussian Splatting to handle multiplexed images captured by our light field camera. Because our images have a higher space-bandwidth product than traditional monocular views, we are able to create a higher-fidelity reconstruction than existing methods. The overarching goal is to achieve a wider field of view with our camera using intentionally multiplexed data, enabling efficient and accurate reconstruction of photorealistic volumes from a single capture without needing to predict or generate additional views in the training data.

Compressed Sensing

Compressed sensing is an imaging technique that enables signals to be acquired with fewer measurements by exploiting the underlying structure of the signal for high-quality reconstruction [3]. Typically, capturing a signal requires measurements at twice the maximum spatial frequency of the signal, as per the Shannon-Nyquist sampling theorem, to ensure all information is captured. However, signals are often compressible, and more information can be captured with a single sensor pixel by spreading out the sparse information contained in the signal through multiplexing, effectively resulting in more useful information. One of the key benefits of compressed sensing is its ability to significantly reduce acquisition time and data storage requirements, which is particularly useful for high-speed or high-resolution 3D imaging applications.


Compressed sensing is particularly effective when signals exhibit sparsity in some domain. This is highly relevant to computational imaging applications, many of which aim to reconstruct a high-dimensional scene from a limited number of measurements. The compressed sensing paradigm represents a powerful tool in imaging system design, where the sensing hardware is viewed as an encoder rather than a direct signal approximator. This concept has already made a significant impact in fields such as MRI and computed tomography, accelerating scan speeds by reducing the number of samples required [11]. In compressed sensing, the sensing process involves capturing multiplexed measurements, which are linear combinations of the signal's components. These measurements are then processed using algorithms that exploit the sparsity of the signal to reconstruct the original high-dimensional data. This approach contrasts with traditional methods that directly sample each component of the signal individually. By encoding multiple dimensions of the optical image, compressed sensing enables the recovery of detailed scene information from fewer measurements. In the context of optical design, this raises the question of how to design optics that encode additional dimensions of optical images such that sparse recovery can successfully and accurately reconstruct the image. Specifically, in this work, we explore how optical design can be leveraged to extract larger space-bandwidth-product light fields from a single measurement.
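In the usual compressed sensing notation (a generic formulation, not one specific to this report's optics), the multiplexed measurement y mixes the scene x through a wide matrix Φ, and recovery solves a sparsity-regularized inverse problem:

    \[
    \mathbf{y} = \Phi \mathbf{x}, \quad \Phi \in \mathbb{R}^{m \times n},\ m \ll n,
    \qquad
    \hat{\mathbf{x}} = \operatorname*{arg\,min}_{\mathbf{x}} \tfrac{1}{2} \lVert \mathbf{y} - \Phi \mathbf{x} \rVert_2^2 + \lambda \lVert \Psi \mathbf{x} \rVert_1,
    \]

where Ψ is a transform under which the scene is sparse. In the analogy drawn here, the overlapping lenslet optics play the role of Φ, linearly combining rays from multiple views onto shared sensor pixels.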

Our work employs compressed sensing in conjunction with multiplexed light field imaging to enhance the capabilities of traditional imaging systems. By integrating intentional overlapping views into the optical design, we can encode more scene information into each captured image. When compared to existing light field cameras, our approach achieves a higher space-bandwidth product with the same number of measurements. When compared to existing novel v
