| text | label |
|---|---|
| string, lengths 16 to 1.15M | int64, values 0 to 10 |
generic macroscopic traffic node model general road junctions via dynamic system approach jul matthew wright roberto horowitz july abstract paper addresses open problem traffic modeling macroscopic node problem macroscopic traffic model contrast model allows variation driving behavior across subpopulations vehicles flow models thus descriptive used model variable mixtures traffic like traffic traffic etc much complex node problem particularly complex problem requires resolution discontinuities traffic density mixture characteristics solving throughflows arbitrary numbers input output roads node words arbitrarydimensional riemann problem two conserved quantities propose solution problem making use dynamic system characterization node model problem gives insight intuition dynamics implicit node models use intuition extend dynamic system node model setting also extend generic class node model constraints second order present simple solution algorithm node problem node model immediate applications allowing modeling traffic flows contemporary interest like flows arbitrary road networks introduction macroscopic approximation vehicle traffic proven valuable tool study traffic nonlinear dynamics design methods mitigating controlling undesirable outcomes like congestion macroscopic theory describes dynamics vehicles along roads partial differential equations pdes inspired fluid flow basic macroscopic formulation kinematic wave lwr due describes traffic conservation equation density vehicles time lineal direction along road flow speed total flow often expressed terms flux function flux function long straight road often called fundamental diagram formulation simple nonlinear model capture many characteristics real traffic flows example flux function admit phenomenon accelerating decelerating flows tracing hysteresis loop plane one extension lwr model express richer variety dynamics arz family models models fit generic second order extended arz class traffic models written seen model actually consists two partial differential equations contain first derivatives case overloaded mathematical terminology name second order comes view system one two state variables case equivalently property invariant conserved along trajectories property described characteristic vehicles determines relationship members generic second order model gsom family differentiated choice relationship behavior examples chosen include difference vehicles speed equilibrium speed driver spacing flow portion autonomous vehicles intuitive way describing effect property parameterizes family flow models different flow models different values application macroscopic traffic simulation road networks often modeled directed graphs edges represent individual roads called links junctions links meet called nodes typically flow model links called link model flow model nodes called node development accurate link node models areas much research activity transportation engineering many years paper focuses node models macroscopic models node model resolves discontinuities links determines neumann boundary nodes merges diverges riemann problem becomes multidimensional node model determines state individual link affects affected connected links connected links network result recently recognized specific node model used large role describing congestion dynamics emerge complex large networks see discussions introduction sections introduced novel characterization node models dynamic systems traditional studies node models see usually present node model 
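For reference, the first-order LWR conservation law and the generic second-order model (GSOM) family described above take the following standard forms; the notation here is chosen for illustration and need not match the paper's original symbols.

\[
\partial_t \rho + \partial_x\bigl(\rho\,V(\rho)\bigr) = 0,
\qquad Q(\rho) := \rho\,V(\rho)\ \text{(the fundamental diagram)},
\]
\[
\begin{aligned}
\partial_t \rho + \partial_x(\rho v) &= 0,\\
\partial_t(\rho w) + \partial_x(\rho w v) &= 0,\\
v &= V(\rho, w),
\end{aligned}
\qquad\text{equivalently, in advective form,}\qquad
\partial_t w + v\,\partial_x w = 0,
\]

so the property w is constant along vehicle trajectories and parameterizes a family of fundamental diagrams, as discussed above.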
optimization problem node flows found solving problem algorithmic form explicit set steps performed compute flows across node contrast dynamic system characterization describes flows across node evolving period time application means dynamic system characterization presents dynamics said occur simulation timesteps link pdes dynamic system characterization thought making explicit behavior flows nodes many algorithmic node models shown dynamic system characterization produces solutions algorithm introduced also reduces one introduced special case dynamic system characterization proven useful imparting intuition physical processes time implicit algorithmic node models see discussions referring examples paper develop dynamic system characterization node model use solve general node problem models paper several main contributions first extension dynamic system characterization firstorder node models introduced simple solution algorithm represents completion argument began section reference second contribution extension dynamic system characterization generic models see dynamic system characterization lends intuitive incorporation second pde obvious traditional presentation node models third contribution principal contribution paper parallels first using dynamic system node model derive intuitive algorithm computing node flows flow models general nodes best knowledge represents first proposed generic applicable nodes node model traffic flow remainder paper organized follows section reviews node flow problem dynamic system characterization introduced presents aforementioned solution algorithm contribution one paragraph section reviews link discretization gsom presented produces inputs node model standard flow problem solution section presents extension flow problem case dynamic system characterization gsom family solution algorithm general node problem contributions two three finally section concludes notes open problems note naming see section build generic class node models develop node model given relevant model used called generic second order model might accurate describe paper results genericization generic class node models generic model description likely loses comprehensibility might gain accuracy node model section review general node problem particular node model solution algorithm node model extended node problem section traffic node problem defined junction input links indexed output links indexed define classes sometimes called commodities vehicle indexed node problem takes inputs incoming links demands sic split ratios define portion vehicles class link wish exit link outgoing links supplies gives outputs set flows class denote shorthand directed demand sic nodes generally infinitesimally small storage flow enters node must exit node rest section organized follows section defines node problem optimization problem defined explicit requirements following example set section reviews dynamic system whose executions produce solutions node problem finally section uses dynamic system formulation base develop node model solution algorithm algorithm represents completion argument began generic class node model requirements node problem history begins original formulation macroscopic discretized traffic flow models many developments node model theory since reflect recent results divide node model literature epochs drew literature several node model requirements develop set conditions nodel models call generic class node models gcnm set conditions give excellent starting point discussion 
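To fix notation for the node problem just described (the symbols below are chosen here for illustration): with input links i = 1, ..., M, output links j = 1, ..., N and vehicle classes c, the inputs are class demands d_i^c, split ratios beta_{ij}^c (so the directed demands are d_{ij}^c = beta_{ij}^c d_i^c) and output supplies s_j, and the unknowns are the movement flows f_{ij}^c. The flow-maximization requirement together with the demand and supply constraints then reads, schematically,

\[
\max_{f \ge 0}\ \sum_{i,j,c} f^{c}_{ij}
\qquad\text{s.t.}\qquad
\sum_{j} f^{c}_{ij} \le d^{c}_{i},
\qquad
\sum_{i,c} f^{c}_{ij} \le s_{j},
\qquad
f^{c}_{ij} \le d^{c}_{ij},
\]

with the FIFO, invariance and supply-allocation requirements listed in this section acting as further constraints (not shown). Flow conservation holds automatically, since each movement flow simultaneously leaves its input link and enters its output link and the node itself stores no vehicles.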
mathematical technicalities node models used starting point many subsequent papers following list present variant gcnm requirements used includes modification fifo requirement item partial fifo requirement applicability general numbers input links output links case flow also extends general numbers classes maximization total flow node mathematically may expressed max according means flow actively restricted one constraints otherwise would increase hits node model formulated constrained optimization problem solution automatically satisfy requirement however requirement really means constraints stated correctly overly simplified thus overly restrictive sake convenient problem formulation see literature review examples node models inadvertently maximize node throughput oversimplifying requirements flows mathematically flow conservation total flow entering node must equal total flow exiting node mathematically sic satisfaction demand supply constraints mathematically satisfaction partial fifo constraint single destination given able accept demand flows constrained queue partially defined vehicles builds degree queue restricts flows restriction intervals interval means queue movement block portion lanes leftmost extent rightmost extent movement uses two lanes movement uses right two lanes traditional full fifo behavior queue blocks lanes recovered setting continuing example since lane serves movement right lane blocked queue movement queue lanes help keep meaning clear find helpful read restriction interval onto another item defines partial behavior amount time restriction interval active link relatively high demands link relatively low demands case active greater portion directed demand see effect time captured dynamic system formulation section finally require consider cumulative effect restriction intervals suppose movement active restriction queue movement say another downstream link exhausts supply vehicles begin queueing movement new restriction second queue forms requirement stated mathematically denotes area object denotes cartesian product formulation complex order state optimization constraint consequence queue formation intuition outlined third paragraph item major contribution dynamic system approach node modeling explicit encoding intuitive description see sections section much discussion requirement satisfaction invariance principle flow input link restricted available output supply input link enters congested regime creates queue input link causes demand jump capacity infinitesimal time therefore node model yield solutions invariant replacing flow input link supply restrictions flow given input link arepimposed class components flow proporc tionally demands mathematically sic sic assumes classes mixed isotropically means vehicles attempting take movement queued roughly random order example vehicles commodity queued front vehicles case vehicles would disproportionally affected spillback feel reasonable assumption situations demand node dependent mainly vehicles near end link small cell end addition numbered requirements two elements needed define node model first rule portioning output link supplies among input links following proposed allocate supply incoming flows proportionally input link capacities denote paper allocate supply proportionally links priorities spirit dynamic system view priorities represent relative rate vehicles exit link claim downstream space one reasonable formulation might follow example assumed vehicles exit link rate second necessary element 
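One way to write the priority-based supply allocation sketched above; this is a plausible reading of the garbled formulas, not a verbatim restoration. Give each input link a priority p_i, for instance proportional to its capacity, define oriented priorities p_{ij} = p_i d_{ij}/d_i, and let output link j initially offer input link i the share

\[
s_{ij} \;=\; \frac{p_{ij}}{\sum_{k\,:\,d_{kj} > 0} p_{kj}}\; s_j ,
\]

with the leftover supply of inputs whose allocation exceeds their directed demand redistributed among the inputs that can still send vehicles, as required by the supply constraint interaction rule (SCIR).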
redistribution leftover following initial partitioning supplies one input links fill allocated supply rule must redistribute difference input links may still fill second element meant model selfish behavior drivers take space available ties closely requirement referred two elements collectively supply constraint interaction rule scir discussion choices scirs recent papers see section paper consider scir form sic requirement encoded right part cartesian product requirement appears component cartesian product rectangles appear section however much intuitive understand explicit temporal property appears dynamic system characterization discuss section derivation pij oriented priority distributes input priority proportionally actual vehicles using links priority claim downstream supply set denotes output links restrict flow cthe cconditions membership read nonzero demand movement claims least priorityproportional allocation supply note link construction constraint says link able fill demand least one output link restricts movements claim least much allocation supply constraint captures reallocation leftover supply states link fulfill demand links continue send vehicles links fulfilled demands concludes setup generic node model problem solution flows constrained least one constraints outlined algorithm solve problem proof optimality given node model requirements note list node requirements presented section particular node problem interest remainder paper exhaustive list reasonable node model requirements since statement gcnm requirements several authors proposed extensions modifications partial fifo relaxation beyond covered one discussed nodal supply constraints supply constraints name suggests describe supply limitations node rather one output links meant describe restrictions traffic occur due interference flows junction rather vehicles blocked input link exhaustion shared resource green light time signalized intersection movement node may may consume amount node supply proportional throughflow node supply constraints gcnm framework originally proposed noted node supplies may lead solutions recently revisited node supply constraints mostly context distribution green time address critique solutions proposed generalization objective still enforces drivers take available space explicitly include node supply constraints dynamic system node models resulting solution algorithms paper path towards inclusion cases straightforward notationally cumbersome somewhat beyond paper scope fusing gcnm link models review node dynamic system section reviews node dynamic system characterization node models presented dynamic system hybrid system means contains continuous discrete states also called discrete modes continuous states evolve time according differential equations differential equations change discrete states discrete state transitions activated conditions continuous states satisfied let continuous states xci representing number vehicles class taken movement node continuous state space denoted let set output links let discrete states recall refers power set index representing set downstream links become congested downstream link said become congested time xcij discrete state space denoted init defines set permissible initial states system dom denotes domain discrete state space permissible continuous states discrete state active reset relation defines transitions discrete states conditions transitions hybrid system execution begins time link given time limit sic necessary ensure xci partial fifo 
constraint active appears dynamic system flow rate attenuation hybrid system init dom init xci xci xcij sij otherwise xci dom xci xci execution complete fijc xcij shown hybrid system produces solutions algorithm following section show quickly compute executions hybrid system since based continuoustime dynamics presents intuitive algorithm one execution node dynamic system simple algorithm evaluating hybrid systems typically involves forward integration differential equation fixed varying step sizes however case evaluation performed much simpler manner due particular dynamics system since dynamics condition discrete mode switching simple time next mode switch occur found closed form equations say mode switch link enters occur xcij say currently time combining find time mode switch occurs denote xcij solving integral plugging xci pij xci xci xci value computed output link smallest first link fill join used output link let min however one input links may time limit expire would also change dynamics stops sending vehicles time therefore evaluation system trajectory beginning done evaluating output link identifying iii checking whether time limits occur simulation necessary determine next event occur equations min evaluated closed form note may change zero nonzero without change discrete state conditional xci broken understood running vehicles able send may happen partial fifo constraint becomes active following algorithm introduce new set present dynamic system definition contains either exhaust supply time limits expire whose become zero without necessarily entering steps summarized algorithm algorithm represents completion argument began review flow modeling introduction formulation gsom seen called advective form form property advected vehicles speed constant along trajectories form makes statement property property vehicles easy understand conceptually however apply discretization useful consider total property rewrite conservative form review relevant discretization using godunov scheme next section deeper analysis physical properties see make one note constraints imposed form stated apply godunov discretization one restricted choices unique every unique every must invertible arguments algorithm node model solution algorithm input sic output algorithm setup initialization begin main loop compute dynamics xci end algorithm integrate forward time end account emptied input links end account filled output links end end return algorithm setup initialization sic sic end sic end end sic end end return algorithm computing time integrate forward case compute filling time every output link end construction section item fulfill demands time end min return godunov discretization gsom godunov discretization lwr model first introduced cell transmission model godunov scheme discretizes conservation law small cells cell constant value conserved quantity fluxes computed solving riemann problems boundary godunov scheme method useful simulating solutions pdes derivatives like lwr formulation ctm riemann problem stated form demand supply functions since also conservation law derivatives godunov scheme applicable well however due second pde intermediate state arises riemann problem solution intermediate state always clear physical meaning lack clarity likely inhibited extension godunov discretization node case following outline discretized flow problem make use physical interpretation intermediate state due final note node model able ignore demand supply functions generated supplies demands sic agnostic 
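The event-driven evaluation described above, with constant dynamics between discrete mode switches and the time of the next switch available in closed form, can be sketched as follows. This is an illustrative sketch only: the function and variable names are invented, the rate expression is a placeholder, and the partial-FIFO restriction intervals and leftover-supply redistribution enforced by the full algorithm are omitted.

```python
import numpy as np

def node_flows(demand, supply, beta, priority, T=1.0, eps=1e-12):
    """Event-driven sketch of evaluating a first-order node model.

    demand[i]   : demand of input link i over the timestep
    supply[j]   : supply of output link j
    beta[i, j]  : split ratio of input i towards output j (rows sum to 1)
    priority[i] : merging priority of input link i (sets its sending rate)

    Between events the flow rates are constant, so the time of the next
    event (an input exhausting its demand, an output filling up, or the
    end of the timestep) is available in closed form.
    """
    M, N = beta.shape
    f = np.zeros((M, N))
    rem = np.asarray(demand, dtype=float).copy()
    sup = np.asarray(supply, dtype=float).copy()
    active_in = {i for i in range(M) if rem[i] > eps}
    open_out = {j for j in range(N) if sup[j] > eps}
    t = 0.0
    while t < T - eps and active_in and open_out:
        rate = np.zeros((M, N))
        for i in active_in:
            for j in open_out:
                rate[i, j] = priority[i] * beta[i, j]
        # next event time, in closed form
        dt = T - t
        for i in active_in:
            r = rate[i].sum()
            if r > eps:
                dt = min(dt, rem[i] / r)
        for j in open_out:
            r = rate[:, j].sum()
            if r > eps:
                dt = min(dt, sup[j] / r)
        f += rate * dt                      # integrate the constant dynamics
        rem -= rate.sum(axis=1) * dt
        sup -= rate.sum(axis=0) * dt
        t += dt
        active_in = {i for i in active_in if rem[i] > eps}
        open_out = {j for j in open_out if sup[j] > eps}
        # the full algorithm also updates partial-FIFO restrictions and
        # redistributes leftover supply at this point; both are omitted here
    return f
```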
method computed input output link densities change evaluation node problem see shortly case flow problem due intermediate state interactions downstream link therefore explanation makes use demand supply functions respectively preliminaries paper say vehicle class property value net averaged vehicle classes property link denoted total density link model fundamental diagram link function net density net property defined carries demand supply functions godunov discretization means supply demand defined link level net quantities input link critical density property value capacity property value demand split among classes movements proportional densities split ratios sic sic oriented priorities computed according computing supply solving output link supply much complicated problem begin discussion review case sections supply output link case input link output link see supply downstream link actually function upstream link vehicles property density speed middle state middle state given otherwise velocity downstream link vehicles velocity function given fundamental diagram intuition behind meaning middle state given follows middle state vehicles actually leaving upstream link entering downstream link leave enter clearly carry property velocity velocity downstream vehicles exit link free space vehicles enter middle density therefore downstream supply determined upstream vehicles characteristics downstream link flow characteristics words number vehicles fit whatever space freed downstream link function drivers willingness pack together defined since meaning supply number vehicles accept means dependent note also equation congestion spills back highly congested low makes large turn leads small reviewed case consider generalize node determine supply several links node model case saw reasoning behind dependence spacing tendencies vehicles determine number vehicles fit therefore generalizing node makes sense define link middle state dependent vehicles actually entering link upstream middle state link say middle state velocity density otherwise supply note defined function recall node model change upstream links exhaust demand downstream links run supply two events correspond discrete state changes hybrid system course carries node model means quantities thus supply change change therefore discrete state transition need determine new supply output link new mixture vehicles entering next discrete state explain done following example suppose time compute time one changes point recompute length recompute middle state variables using critically note recomputation new means also different carry create different takes account vehicles moved difference properties note leads significantly tighter packing smaller spacing conceivable especially much smaller course description assumes isotropic mixing vehicle classes link recall stated assumption input links item gcnm requirements section unlike supply demand need recomputed since assume mixture vehicles demanding movement remains due isotropic mixture assumption summary state generalization gcnm requirements requirements stated section addition constraint enforcing conservation property via modification supply constraint supply computed second pde fundamental diagram using property incoming flows second point supply constraint also dependent flow solution worsens nonconvexity node problem indeed drifting away setting makes sense may helpful understanding consider physical dynamics encoded solution methods case ingredients necessary extend hybrid system node model 
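For reference, the Godunov/CTM boundary flux and the demand and supply functions referred to above take the standard form (notation ours):

\[
q_{k+\frac12} = \min\bigl(D(\rho_k),\, S(\rho_{k+1})\bigr),
\qquad
D(\rho)=\begin{cases}Q(\rho) & \rho\le\rho_c\\ Q_{\max} & \rho>\rho_c\end{cases},
\qquad
S(\rho)=\begin{cases}Q_{\max} & \rho\le\rho_c\\ Q(\rho) & \rho>\rho_c\end{cases},
\]

with rho_c the critical density. In the second-order setting described above, the downstream supply is additionally evaluated with the fundamental diagram parameterized by the property w of the vehicles actually entering the link (the middle state), rather than by the downstream link's own property, which is why the supply must be recomputed whenever the mixture of entering vehicles changes.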
formulation dynamic system definition state node dynamic system extension one presented symbols remain however make changes let set input links set exhausted input links set introduced algorithm section necessary state recalculations supply according steps section link exhausts demand net property changes paralleling let denote exhausted input link input link said exhausted time xci formula time demand exhaustion remains case sic accommodate recomputing supply using add continuous states quantities denote flow movement class movement since last time supplies recalculated densities output links necessary following new supplies new also take account vehicles xci already made movement determining link filled new supply need fresh counter vehicles entered assume initial hybrid system init dom init xci otherwise xci xci dom xci execution complete fijc xcij unsurprisingly dynamic system complicated one reader note discrete dynamics discussed triggered links filling links emptying filling entering remains system emptying input links rather encoded continuous dynamics done system discrete dynamics possible reduce number discrete states system including continuous dynamics second order system change continuous dynamics changes output links continuous dynamics changes must trigger recomputation change thankfully although system seems much complex system secondorder solution algorithm much complicated solution method system see next section solution algorithm note system second order system constant continuous dynamics discrete state means case easily compute time next discrete state transition occurs like section pthis smallest said input link time limits remain sic time output link runs supply filled discrete state time discrete state switched supply recomputed similar xci differs two key ways first term supply recomputed also accounts numerator subtracted quantity subtraction supply accounted recomputed supply second denominator summed rather set definition dynamic system stated section state solution algorithm dynamic system follows logic case identifying next occur finding constant dynamics system evolve time integrating forward time new step recomputing supply repeating algorithm node model solution algorithm initial input sic output algorithm begin main loop compute dynamics xci end algorithm end algorithm account emptied input links end account filled output links end end return algorithm setup initialization case algorithm ipdo end end return algorithm computing time integrate forward case algorithm end compute filling time every output link end end min return algorithm computation supply else end else end return algorithm recomputing downstream links density property xcij return extension gcnm requirements solving node problem fact supply must continually recalculated interpreted indicating use supply demand quantities natural case see demand supply alone including enough solve node problem case link quantities required unnatural node problem riemann problem resolve discontinuities case node problem often stated terms supply demand instead actual conserved quantity intuitive physical meaning since link densities needed beyond use problem beginning simplifies problem removing one step however seen using framework case simplify problem along lines still need make use therefore future may make sense state node problem taking inputs links rather inputs would remove unintuitive nature needing recompute conclusion paper presented generalization generic class node model macroscopic traffic 
junction models general second order model flow model paper results allow extension macroscopic modeling flows based different mixtures driving behavior complex general networks many flows networks able modeled microscopic models consider behavioral variability level macroscopic models capture aggregate features granular model greatly increase scale problems able study stated flow models used represent flows great contemporary interest mixtures autonomous vehicles researchers practitioners need use every tool available understand predict changes arise traffic demand changing size characteristics immediate avenues future refinement macroscopic models presented paper mentioned section address node supply constraints paper node models however immediate application general node model macroscopic simulation traffic complex networks particular concern scheduling problems involving green light timing future work incorporate node supply constraints general node problem may used signal optimization potential connected automated vehicles bring traffic control references rascle resurrection second order models traffc flow siam journal applied mathematics corthout viti flows macroscopic intersection models transportation research part methodological mar daganzo cell transmission model dynamic representation highway traffic consistent hydrodynamic theory transportation research part methodological daganzo cell transmission model part network traffic transportation research part methodological fan sun piccoli seibold work collapsed generalized model model accuracy arxiv preprint rohde operational macroscopic modeling complex urban road intersections transportation research part methodological july gentile meschini papola spillback congestion dynamic traffic assignment macroscopic flow model bottlenecks transportation research part methodological jabari node modeling congested urban road networks transportation research part methodological lebacque khoshyaran macroscopic traffic flow models intersection modeling network modeling international symposium transportation traffic theory isttt pages lebacque mammar zhang model vacuum problems existence regularity solutions riemann problem transportation research part methodological lebacque mammar salem generic second order traffic flow modelling transportation traffic theory papers selected presentation pages lighthill whitham kinematic waves flow movement long rivers theory traffic flow long crowded roads proc royal society london part leonard simplified kinematic wave model merge bottleneck applied mathematical modelling richards shock waves highway operations research smits bliemer pel van arem family macroscopic node models transportation research part methodological apr corthout cattrysse immers generic class first order node models dynamic macroscopic simulation traffic flows transportation research part wang work comparing traffic state estimators mixed human automated traffic flows transportation research part emerging technologies may wright horowitz kurzhanskiy dynamic system characterization road network node models proceedings ifac symposium nonlinear control systems volume pages august wright gomes horowitz kurzhanskiy node route choice models highdimensional road networks submitted transportation research part zhang traffic model devoid behavior transportation research part methodological | 3 |
An equation-based inverse RFEC sensor model

Raphael Falque, Teresa, Gamini Dissanayake, Jaime Valls Miro
University of Technology Sydney, Australia

Abstract: In this paper we tackle the direct and inverse problems for RFEC technology. In the direct problem, a sensor model is given the geometry and the measurements are obtained; conversely, in the inverse problem the geometry needs to be estimated given the field measurements. Both problems are particularly important in the field of non-destructive testing (NDT), as they allow assessing the quality of the structure being monitored. We solve the direct problem in a parametric fashion using the least absolute shrinkage and selection operation (LASSO). The proposed inverse model uses the parameters of the direct model to recover the thickness using least squares, producing the optimal solution given the direct model. The study is restricted to the axisymmetric scenario. The direct and inverse models are validated using a finite element analysis (FEA) environment with realistic pipe profiles.

Keywords: remote field eddy current (RFEC), direct problem, inverse problem, non-destructive evaluation (NDE).

I. Introduction

Remote field eddy current (RFEC) technology allows the inspection of ferromagnetic pipelines. Tools based on this technology are usually composed of an exciter coil and one or several receivers. The exciter coil, driven by a low-frequency alternating current, generates an electromagnetic field which flows outside the pipe near the exciter coil and flows back inward through the pipe in the remote area, as shown in the accompanying figure. The receivers, located in the remote part, record the magnetic field. As shown in the figure, the magnetic field passes twice through the pipe wall, a phenomenon commonly referred to as double-wall penetration in the literature. As the magnetic field flows through the ferromagnetic medium of the pipe, its amplitude is attenuated and its phase is delayed. Due to the double wall penetration, the magnetic field recorded by the receiver is modified by different areas of the pipe: where it flows outward through the pipe near the exciter coil, and where it flows backwards into the pipe in the remote area. Hence, inferring the geometry of the pipe from the signal information is a challenging task, since a single measurement is correlated with different areas of the geometry.

Inferring the pipe geometry from the tool signal corresponds to solving the inverse problem for RFEC. This problem has been studied in the literature for the axisymmetrical case of a perfect pipe with a single crack, where the problem is formulated as recovering the shape and size (thickness and width) of a single defect, or with approaches that solve the problem using techniques which bypass the problem of recovering the full pipe geometry.

(Figure: representation of the RFEC phenomenon. The global phenomenon is split in the proposed parametric direct model into the flow of the magnetic field through the air, considered independently, and the local attenuation due to the magnetic field flowing through the pipe.)

These solutions fit the case of steel material, where pipe bursts are due to cracks. In the case of pipes made of a material sensitive to corrosion, the geometry of the pipe has an organic shape rather than a single isolated crack; therefore, for cast-iron pipes, recovering the full pipe geometry is critical. Other approaches in the literature consist of modifying the tool design to use several receivers located at different axial locations from the exciter coil. This allows using the redundancy of the information provided by each passing location to recover the full pipe geometry. However, this approach leads to longer tools which require more electrical power to operate the multiple sensors and exciter coils. Due to the in-pipe nature of RFEC tools, mobility and battery consumption have to be optimised; this work, in particular, allows a simple hardware design, as we consider the case of an elementary RFEC tool composed of a single exciter coil and a single receiver.

The aim of this paper is to obtain an inverse sensor model of the RFEC phenomenon which, given a set of continuous magnetic field measurements, allows the full pipe geometry to be recovered in an axisymmetric scenario. The remainder of the paper is organised as follows. In Sec. II we give conceptual ideas of the behaviour of the magnetic field and propose a direct model solved using the least absolute shrinkage and selection operation (LASSO); from the direct model we derive an inverse model formulated in closed form. The dataset, generated with a finite element analysis (FEA) environment, and the experimental results are given in Sec. III. Finally, we discuss the performance and limitations of the proposed model in Sec. IV.

II. Modelling the RFEC phenomenon

Direct
problem rfec phenomenon consists mapping pipe geometry sensor measurement sensor model conversely inverse indirect problem consists finding model maps sensor measurements pipe geometry main goal solve inverse problem however solving direct problem provides qualitative quantitative information form inverse model consider direct inverse problem discuss insight rfec technology particular attention dedicated understanding geometry near exciter coil impacts sensor measurements qualitative descriptions overall rfec phenomenon broadly studied depth descriptions available literature background information shown possible consider defect pipe geometry anomalous source model defect replaced independent source magnetic field superposed pipe see fig knowing magnetic field gets attenuated travelling ferromagnetic medium idea replace lack attenuation defect source magnetic field superposed perfect pipe following idea one could consider pipe thickness attenuation signal let consider pipe organic geometry corroded pipe defined piecewise constant profile shown fig piece considered local source attenuation dissociate global rfec phenomenon shown fig two part attenuation due magnetic field flowing air local attenuation due magnetic field flowing pipe former one shown fig latter fig global attenuation magnetic field propagating air mostly due field radiating coil constant term given excitation global geometry definition value however complex since involves many parameters dimensions excitation coil diameter pipe distance exciter receiver local interaction electromagnetic wave pipe described plane wave propagating homogeneous isotropic conductive medium pipe phenomenon described deriving skin depth equation maxwell equations written follow phase contribution amplitude magnetic field initial value magnetic field frequency magnetic permeability medium electrical conductivity distance travelled wave amplitude usually measurements recorded rfec tools since linear relationship thickness conductive medium local paper model direct inverse problem uniquely amplitude however similar study could done direct problem consider direct problem consists finding function sensor measurements set thickness values describe pipe geometry around rfec tool let first consider case single measurement using wave superposition principle add follow constant term described ith thicknesses pipe piece pipe geometry approximated piecewise constant profile shown fig unknown parameters embeds location weight since approach approximation actual phenomenon consider noise contribution contains actual sensor noise unmodelled given enough independent measurements optimal values weights found using least square formulation let consider set measurements measurement associated local average thicknesses regularly spaced length tool seen moving tool within pipe simultaneously gathering pipe thickness information sliding window sliding window approximates geometry piecewiseconstant profile describe fig formulate matrix form combine set measurement thickness values together constant term unknown depends excitation number turn coil electromagnetic properties air distance exciter sensor however possible estimate measurements therefore include vector model parameters defined matrix contains local average thickness information tmk vector sensor measurements order select parameters reflects attenuation magnetic field path need optimisation method sets weights thicknesses zero obtained learning model parameters lasso using parameter selection also allows 
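The skin-effect relation that the garbled equations above refer to is standard. For a plane wave penetrating a homogeneous, isotropic conductive medium,

\[
B(d) = B_0\, e^{-d/\delta},
\qquad
\Delta\phi(d) = \frac{d}{\delta},
\qquad
\delta = \frac{1}{\sqrt{\pi f \mu \sigma}},
\]

with f the excitation frequency, mu the magnetic permeability, sigma the electrical conductivity and d the distance travelled through the wall. After the double through-wall crossing, the log-amplitude (and the phase lag) of the received field is therefore, to first order, an affine function of the total wall thickness traversed, which motivates the linear direct model described above, of the schematic form y_k = w_0 + sum_i w_i t_{k,i} + noise, where y_k is the (log-)amplitude measurement at tool position k and t_{k,i} are the local average thicknesses in a window around the tool; these symbols are ours, introduced only to make the garbled formulation readable.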
avoiding irrelevant parameter would performed closed form solution formally lasso corresponds least square formulation regularisation min regularisation parameter learned iterative process finally direct problem solved estimating proposed model inverse problem estimating parameters direct model consider inverse problem formally want find inverse function due wall phenomenon simply inverted geometry exciter coil receiver convoluted measurements instead direct problem expressed linear model allows formulating inverse problem closed form solution obtained least squares consider solving inverse problem long pipe section one system recovering thickness full pipe time solve optimisation problem least squares degree freedom equal number equations minus number parameters system positive null rule thumb avoid degree freedom superior ten let consider inspection long pipe section using rfec tool inspection set discrete measurements collected regular intervals along pipe approximate pipeline geometry profile steps average thickness chosen ten times smaller formulated global optimisation problem sensor measurements related piecewise thicknesses defined set thickness estimates value piecewiseconstant pipeline profile defined matrix contains relationship thickness values sensor measurements defined parameters learned direct model practice line contains weights local thickness values set others thickness values since multiple measurements ith values spatial weights used define influence piece proximity fig representation axisymmetric simulation air box present around pipe exciter coil rectangular cross copper coil receiver simplified point measurement pipe defined follow distance point measurement centre ith step distance point measurement centre step obtain thickness estimates solving linear least squares closed form simplified point measurement could simulate hall effect sensor pipe geometry defined pipe segments extracted decommissioned pipeline schematic global system shown fig thickness gains corresponding bell spigot joints link pipe segments together medium approximated homogeneous isotropic air copper material properties defined using materials comsol library get realistic axisymmetric modelisation pipe pipe magnetic properties obtained analysing pipe sample superconducting quantum interference device squid geometry material properties come real pipeline material properties used model displayed tab conductivity air set value avoid computational singularities stability simulation validated different meshing sizes air box sizes parameters meshing size defined according wavelength magnetic field material least five times smaller wavelength defined iii esults fea simulations geometry used validate proposed methods controlled environment look performance direct inverse model applied long pipe section known geometry note although validation done scenario proposed models adapted rfec axisymmetric tool fea environment section describes data validation obtained context particular research project motivates paper used data pipeline decommissioned currently dedicated research purposes particular pipeline laid hundred years ago parts pipe significantly corroded pipes section exhumed analysed material properties measured corrosion profile captured laser scanner using process described generated long profile based geometry exhumed pipe segments incorporated fea simulation environment realistic profile provided sufficient data validation fea used done using comsol multiphysics scenario fea geometry composed four 
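A minimal sketch of the two estimation steps described above, using scikit-learn's LASSO for the direct model and a plain least-squares solve for the inverse model. The variable names are placeholders for the FEA-generated data, and the regularization parameter is chosen here by ordinary cross-validation, whereas the paper selects it with a one-standard-error rule on the MSE curve.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def fit_direct_model(T, y):
    """Direct problem: y ~ w0 + T @ w, with T[k, i] the local average
    thickness of slice i under the tool at measurement k.  The LASSO keeps
    only the slices that actually influence the signal (near the exciter
    and near the receiver) and zeroes the other weights."""
    model = LassoCV(cv=5).fit(T, y)
    return model.intercept_, model.coef_

def recover_profile(A, y, w0):
    """Inverse problem: with the learned weights fixed, A maps the whole
    piecewise-constant thickness profile t to the measurement vector y,
    so t is recovered by ordinary least squares."""
    t_hat, *_ = np.linalg.lstsq(A, y - w0, rcond=None)
    return t_hat
```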
different components air box defining limits fea scenario exciter coil modelled rectangular cross copper coil receiver using magnetic properties material define minimum size meshing part scenario minimum size element meshing given tab pipeline inspection simulated using parameter sweep position rfec tool within pipe length amplitude electromagnetic phase recorded position parameter sweep application direct model consider direct problem applied dataset generated fea environment described previously aim learn parameters defined shown fig note make realistic simulated thickness profile contains joints thickness joints much larger parts pipe hence due linear nature proposed model data relate expected perform poorly solve direct model three datasets first dataset include complete set data data joint located near receiver removed second dataset data near exciter receiver removed third dataset table properties material material air copper coil fig due presence joints inducing sort data model described longer valid therefore remove data joint impact exciter coil impact receivers model learned filtered data shown compared estimated actual sensor measurement fig dedicated dataset fig set colour information reflect impact joints located near receiver yellow points influenced fig blue points represent estimation located top receiver third dataset shown fig shows better regression since simulations done controlled environment locations joints known thus removing particular data trivial task case unknown environment one could classify construction features pipeline done using support vector machine svm classifier alternative would consist automating data selection methods peirce chauvenet criterion parameter chosen using crossvalidation estimated parameters measurements fig evolution mean square error mse versus value parameter using indicated blue corresponds sparsest solution within one standard error mse chosen one goodness fitting mean square error mse coefficient determination available tab expected constant positive term attenuation coefficients negative terms moreover see geometry near receiver near exciter coil important role reflected higher weights application inverse problem solving direct problem parameters required inverse problem known consider recovering metres pipe thickness global problem full geometry recovered set measurement using formulation established inverse problem relies parameters learnt direct problem case parameters rfec tool magnetic properties pipe specimen known possible obtain direct inverse models fea simulation otherwise multiple thickness measurements collected studied pipe thickness measurements specific locations needed learn parameters practice collecting measurements feasible task considering parameters present proposed model pipe profile reconstructed shown fig estimation shown blue ground truth shown orange spikes correspond joints predicted proposed inverse model recover thicknesses due nonlinear behaviour magnetic field regions estimation error thickness pipe mse rmse average thickness pipe around remove areas joints rmse falls table output least absolute shrinkage selection operation localised increase thickness joints lead spread weights visible comparing lines table coef dataset dataset dataset cst exciter iscussion paper tackle direct inverse problems rfec tool composed single exciter coil single receiver shown using fea direct inverse model accurate recovering pipe sections organic geometry often case corroded pipes fea model used generate dataset based 
realistic geometry material properties obtained old pipes proposed direct model solved using lasso allows selecting automatically important thickness areas model reducing number parameters result simplistic model important thicknesses located next exciter coil receiver inverse problem relies parameters direct problem solved using least squares training proposed inverse model thickness measurements collected pipe practice collecting measurements feasible task considering parameters proposed model main limitation proposed method lies form proposed model linear model allows solving inverse problem model gives accurate results apart joints extremely thick thicknesses magnetic field would flow path least resistance captured linear model furthermore outstanding thicknesses magnetic properties considered constant full pipeline practice pipes variation magnetic properties case studied future work planning apply method tool sensor array case shown thickness estimation receiver distance fig thickness estimated estimation shown blue orange goodness fit mse attenuation exciter behaves circumferential offset therefore possible deconvolute signal similar fashion acknowledgment publication outcome critical pipes project funded sydney water corporation water research foundation usa melbourne water water corporation water industry research ltd south australia water corporation south east water hunter water corporation city west water monash university university technology sydney university newcastle research partners monash university lead university technology sydney university newcastle eferences atherton remote field eddy current inspection ieee transactions magnetics vol davoust brusquet fleury robust estimation flaw dimensions using remote field eddy current inspection measurement science technology vol nov davoust brusquet fleury robust estimation hidden corrosion parameters using eddy current technique journal nondestructive evaluation vol jun tao zhang wang luo design forward modeling rfec inspection cracks proceedings international conference information science electronics electrical engineering iseee vol cardelli esposito raugi electromagnetic analysis rfec differential probes ieee transactions magnetics vol skarlatos pichenot lesselier lambert electromagnetic modeling damaged ferromagnetic metal tube volume integral equation formulation ieee transactions magnetics vol lord sun udpa nath finite element study remote field eddy current phenomen ieee transactions magnetics vol sun cooley han udpa lord efforts towards gaining better understanding remote field eddy current phenomenon expanding applications ieee transactions magnetics vol may tibshirani regression selection shrinkage via lasso journal royal statistical society vol skinner valls miro bruijn falque point cloud upsampling accurate reconstruction dense thickness maps point cloud acquisition australasian conference robotics automation acra miro mart automatic detection verification pipeline construction elements data iros peirce criterion rejection doubtful observations astronomical journal vol william manual spherical practical astronomy philadelphia lippincott london trubner falque valls miro lingnau russell background segmentation enhance remote field eddy current signals australasian conference robotics automation acra | 3 |
sep reduction local uniformization case rank one valuations rings zero divisors josnei novacoski mark spivakovsky abstract continuation previous paper authors former paper proved order obtain local uniformization valuations centered local domains enough prove rank one valuations paper extend result case valuations centered rings necessarily integral domains may even contain nilpotents introduction algebraic variety field problem resolution singularities whether exists proper birational morphism regular problem local uniformization seen local version resolution singularities algebraic variety valuation center local uniformization problem asks whether exists proper birational morphism center regular problem introduced zariski important step prove resolution singularities zariski approach consists proving first every valuation center given algebraic variety admits local uniformization one glue local solutions obtain global resolution singularities zariski succeeded proving local uniformization valuations centered algebraic varieties field characteristic zero see used prove resolution singularities algebraic surfaces threefolds field characteristic zero see abhyankar proved see local uniformization obtained valuations centered algebraic surfaces characteristic used fact prove resolution singularities surfaces see also proved local uniformization resolution singularities threefolds fields characteristic see recently cossart piltant proved resolution singularities particular local uniformization threefolds field positive characteristic well arithmetic case see proved using approach zariski however mathematics subject classification primary secondary key words phrases local uniformization resolution singularities reduced varieties realization project first author supported grant program sem fronteiras brazilian government josnei novacoski mark spivakovsky problem local uniformization remains open valuations centered algebraic varieties dimension greater three fields positive characteristic since local uniformization local problem work local rings instead algebraic varieties valuation centered local integral domain said admit local unifomization exists local local ring dominated dominating regular let category noetherian local domains subcategory closed taking homomorphic images localizing finitely generated birational extension prime ideal want know subcategories properties valuations centered objects admit local uniformization section grothendieck proved category schemes closed passing subschemes finite radical extensions resolution singularities holds subcategory schemes known category schemes closed operations mentioned conjectured see remark resolution singularities holds general possible context schemes translated local situation conjecture says subcategory optimizes local uniformization category local rings subcategory properties discussion excellent local rings see section however conjecture widely open successful cases including mentioned local uniformization first proved rank one valuations general case reduce priori weaker one prove reduction works general assumptions namely consider subcategory category noetherian local integral domains closed taking homomorphic images localizing finitely generated birational extension prime ideal main result every rank one valuation centered object admits local uniformization valuations centered objects admit local uniformization main goal paper extend result rings necessarily integral domains particular may contain nilpotent elements importance 
schemes modern algebraic geometry well known even one interested reduced schemes start one led consider ones produced natural constructions example deformation theory therefore appears desirable study problem local uniformization schemes particular extend earlier results reducing problem rank one case general context reduced expect general make regular blowings natural extension case require red regular red module every denotes nilradical precise definitions see section let category noetherian local rings subcategory local uniformization closed taking homomorphic images localizing finitely generated birational extension prime ideal main result following theorem assume every noetherian local ring every rank one valuation centered admits local uniformization valuations centered objects admit local uniformization proof theorem consists three main steps first step prove every local ring every valuation centered exists local blowing see definition one associated prime ideal consider decomposition using induction assume admit local uniformization second main step consists using prove exists local blowing red regular third final step prove exists local blowing red regular red module every denotes nilradical paper divided follows section present basic definitions results used sequel sections dedicated prove results related first second third steps respectively last section present proof main theorem preliminaries let noetherian commutative ring unity ordered abelian group set extend addition order usual definition valuation mapping following properties every min every support defined supp minimal prime ideal take multiplicative system supp extension call given valuation indeed three first axioms easily checked minimality supp prime ideal follows fact prime ideals bijective correspondence prime ideals contained freely make extensions without mentioning explicitly valuation said center every case center defined moreover local ring unique maximal ideal case say local ring josnei novacoski mark spivakovsky valuation said centered every every observe valuation center centered value group denoted defined subgroup generated rank number proper convex subgroups element supp consider canonical map given let ker annr natural embedding take consider subring restriction center set definition canonical map called local blowing respect along ideal valuation center say every lemma composition finitely many local blowings local blowing moreover local blowings composition proof enough prove two local blowings respect exists local blowing respect write exist consider local blowing given straightforward prove view lemma freely use fact composition finitely many local blowings local blowing without mentioning explicitly simplicity notation denote nilradical nil local uniformization definition say spec normally flat along spec rred rred module every since noetherian exists every hence condition definition equivalent freeness finitely many modules definition local ring valuation centered said admit local uniformization exists local blowing respect red regular spec normally flat along spec red let fixed decomposition simplicity notation set local blowing set need guarantee main structure preserved local blowings precisely prove following proposition let local blowing canonical maps induced isomorphisms order prove proposition need following basic lemma lemma let multiplicative system contained canonical map given isomorphism proof element consequently suppose means exists thus exists sac moreover since also fact imply 
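Restoring standard notation for the definition of a valuation sketched in the preliminaries above (Gamma is an ordered abelian group, extended by infinity in the usual way, and this is our rendering of the four conditions the text lists, including the requirement on the support):

\[
\nu : R \longrightarrow \Gamma \cup \{\infty\},
\qquad
\begin{aligned}
&\nu(ab) = \nu(a) + \nu(b) && \text{for all } a, b \in R,\\
&\nu(a+b) \ge \min\{\nu(a), \nu(b)\} && \text{for all } a, b \in R,\\
&\nu(1) = 0 \ \text{and}\ \nu(0) = \infty, &&\\
&\operatorname{supp}(\nu) := \nu^{-1}(\infty) \ \text{is a minimal prime ideal of } R. &&
\end{aligned}
\]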
hence wanted prove proof proposition applying lemma valuation obtain canonical maps respectively isomorphisms hence order prove first assertion enough show canonical map isomorphism since injective hand element written abm cbn image abm hence map josnei novacoski mark spivakovsky surjective consequently isomorphism set consider induced map since canonical map surjective order prove surjectivity enough show surjective element write set since obtain implies therefore remains prove since also hence consequently unit therefore finish proof enough show kernel follows immediately definition centers respectively lemmas generalizations lemma corollary respectively proofs presented adapted general case present sketches proofs convenience reader respect exists lemma local blowing local blowing respect given proof consider local blowing choose canonical map local uniformization choose minimize value words set hence suitable permutation set may assume every consider local blowing respect straightforward prove lemma local blowing exists local blowing respect respect proof element denote image canonical map since every consider local blowing respect straightforward prove associated prime ideals let local ring valuation centered main result section following proposition exists local blowing respect nil associated prime order prove proposition need following result lemma let every every ideal written moreover prime annr prime ideal josnei novacoski mark spivakovsky proof choose fix write acbn assume prime set also prime moreover annr canonical epimorphism indeed annr since noetherian annr annr annr annr annr conclude annr prime ideal corollary local blowing nil associated prime ideal nil associated prime ideal proof let lemma theorem gives ass ass spec lemma guarantee consequently one associated prime ideal say primary decomposition theorem gives nil wanted prove use corollary throughout paper without always mentioning explicitly proof proposition since supp minimal prime ideal exists one associated prime ideal contained hence equal supp prove exists local blowing take associated prime ideal supp write local uniformization every blowing respect along gives local ring observe indeed local blowing every supp implies supp since ass ass spec see theorem remains show lemma obtain many associated prime ideals moreover chosen associated prime ideal annr every ideal prime indeed since annr means consequently prime therefore remark associated prime ideal every particular case eliminate ideal definition local blowing use throughout paper without mentioning explicitly making rred regular let local ring valuation centered assume denote center usual denote nilradical local blowing denote nilradical assume associated prime ideal main goal section prove following proposition proposition assume red regular exists local blowing red regular moreover every local blowing along ideal regular red order prove proposition need lemmas lemma assume red regular exists local blowing elements whose images free moreover form basis images form regular system parameters red red josnei novacoski mark spivakovsky lemma let local blowing along ideal free free lemma take whose images form regular system parameters red respectively module basis rred regular proof proposition assuming lemmas apply lemma obtain blowing images form red basis images form regular system parameters moreover proposition red regular also lemma proposition every local blowing along ideal hypotheses lemma satisfied hence obtain red red regular proceed proofs lemmas 
lemma take generators let local blowing along ideal set generated proof obviously every take element implies see remark set implies hence exist thus bai concludes proof proof lemma since red regular elements images red form regular system parameters first step reduce case generate local uniformization assume generate choose generate find brk brk consider local blowing along follows brk since prime obtain consequently proceed inductively obtain local blowing lemma lemma form regular system parameters images red means generate thus reduced problem case generate make assumption fact remains checked images independent take since images red form regular system parameters images prp form prp implies prp consequently completes proof lemma proof lemma take images form claim images form basis take element set write josnei novacoski mark spivakovsky implies assumption exist consequently since fact imply images generate assume exists exists implies every since implies therefore concludes proof proof lemma set rred since images form basis conclude applying nakayama lemma corollary theorem conclude consequently generate since images rred generate rred conclude dim rred also since dim red dim dim rred dim rred therefore dim rred hence rred regular making free let local ring valuation centered assume denote center usual set nil nil also local blowing set nil nil assume associated prime ideal main goal section prove following proposition local uniformization proposition assume ipn red module every exists local blowing respect along ideal red free every order prove proposition need preliminary results lemma take elements images generate rred consider local blowing along ideal set images form set generators module proof take element proof lemma write means exists consequently bai concludes proof lemma assumptions previous lemma images rred independent images red independent proof take elements show write equation implies exists cyr since rred independent implies every since prime consequence fact associated prime ideal obtain consequently concludes proof josnei novacoski mark spivakovsky proof proposition assumption ipn red every hence proposition local blowing every therefore red enough show fixed exists local blowing along ideal red take elements ipn form basis ipn observe first since prime ipn claim generate rred module free indeed exists rred implies consequently since prime conclude wanted prove generate rred take generate since exist brk brk consider local blowing along ideal set equation obtain brk generated every consequently moreover module obtain generated rred red using lemma images local uniformization also lemma images red independent proceed inductively obtain local blowing generated images imr red red independent ages proof main theorem section present proof main theorem proof theorem prove assertion induction rank since rank one valuations admit local uniformization assumption fix prove valuations rank smaller admit local uniformization also valuations rank admit local uniformization let valuation centered local ring lemma exists local blowing respect nil associated prime ideal hence replacing may assume associated prime ideal nil decompose valuations rank smaller assumption know admit local uniformization since admits local uniformization use lemma exists local blowing respect regular red free every replacing may assume red regular ipn red every since admits local uniformization use lemma obtain exists local blowing respect red every replacing regular ipn red assume red regular ipn red 
every since red regular apply proposition obtain compatible local blowing red regular using every proposition ipn free red proposition exists local blowing red module every moreover since local blowing along ideal conclude using proposition regular concludes red proof references abhyankar local uniformization algebraic surfaces ground fields characteristic ann math abhyankar valuations centered local domain amer math josnei novacoski mark spivakovsky abhyankar resolution singularities embedded algebraic surfaces pure applied mathematics academic press new york london abhyankar simultaneous resolution algebraic surfaces amer math cossart piltant resolution singularities threefolds positive characteristic reduction local uniformization purely inseparable coverings algebra cossart piltant resolution singularities threefolds positive characteristic algebra grothendieck locale des des morphismes inst hautes sci publ math matsumura commutative ring theory cambridge university press novacoski spivakovsky reduction local uniformization rank one case proceedings second international conference valuation theory ems series congress reports zariski samuel commutative algebra vol new zariski local uniformization theorem algebraic varieties ann math zariski reduction singularities algebraic three dimensional varieties ann math josnei novacoski capes foundation ministry education brazil brazil address mark spivakovsky institut toulouse cnrs paul sabatier route narbonne toulouse cedex france address | 0 |
feb efficient batchwise dropout training using submatrices ben graham jeremy reizenstein leigh robinson february abstract dropout popular technique regularizing artificial neural networks dropout networks generally trained minibatch gradient descent dropout mask turning different pattern dropout applied every sample minibatch explore simple alternative dropout mask instead masking dropped units setting zero perform matrix multiplication using submatrix weight hidden units never calculated performing dropout batchwise one pattern dropout used sample minibatch substantially reduce training times batchwise dropout used convolutional neural networks independent versus batchwise dropout dropout technique regularize artificial neural prevents overfitting fully connected network two hidden layers units learn classify mnist training set perfectly training test error quite high increasing number hidden units factor using dropout results lower test error dropout network takes longer train two senses training epoch takes several times longer number training epochs needed increases consider technique speeding training substantially reduce time needed per epoch consider simple fully connected neural network dropout train minibatch samples forward pass described equations matrix units matrix independent bernoulli random variables denotes probability dropping units level matrix weights connecting level level using hadamard multiplication matrix multiplication forgotten include functions rectifier function hidden units softmax output units introduction keep network simple possible network trained using backpropagation algorithm calculate gradients cost function negative respect dropout training trying minimize cost function averaged ensemble closely related networks however networks typically contain thousands hidden units size ensemble much larger number training samples possibly seen training suggests independence rows dropout mask matrices might terribly important success dropout simply depend exploring large fraction available dropout masks machine learning libraries allow dropout applied batchwise instead done replacing row matrix independent bernoulli random variables copying vertically times get right shape practical important training minibatch processed quickly crude way estimating processing time count number floating point multiplication operations needed naively evaluate matrix multiplications specified forwards backwards however take account effect dropout mask see many multiplications unnecessary element weight matrix effectively calculations unit dropped level unit dropped level applying dropout levels renders multiplications unnecessary apply dropout independently parts disappear different sample makes effectively impossible take advantage slower check multiplication necessary multiplication however apply dropout batchwise becomes easy take advantage redundancy literally redundant parts calculations see function apply dropout time saving epoch training time seconds dropout batchwise minibatch size minibatch size figure left mnist training time three layer networks log scales nvidia geforce gtx graphics card right percentage reduction training times moving dropout batchwise dropout time saving network minibatches size increases instead compare batchwise dropout independent dropout binary batchwise dropout matrices naturally define submatrices weight matrices let xdropout denote submatrix consisting hidden units survive dropout let wkdropout denote submatrix consisting weights connect active 
units level active units level network trained using equations xdropout xdropout wkdropout xdropout dropout dropout redundant multiplications eliminated additional benefit terms memory needed store hidden units xdropout needs less space section look performance improvement achieved using code running gpu roughly speaking processing minibatch batchwise dropout takes long training smaller network data explains nearly overlapping pairs lines figure emphasize batchwise dropout improves performance training testing full matrix used normal scaled factor however machine learning research often constrained long training times high costs equipment section show things equal batchwise dropout similar independent dropout faster moreover increase speed things equal resources batchwise dropout used increase number training epochs increase number hidden units increase number validation runs used optimize train number independent copies network form committee possibilities often useful ways improving test error section look batchwise dropout convolutional networks dropout convolutional networks complicated weights shared across spatial locations minibatch passing convolutional network might represented intermediate hidden layer array size samples output convolutional filters spatial locations conventional use dropout mask shape call independent dropout contrast want apply batchwise dropout efficiently adapting submatrix trick effectively using dropout mask shape looks like significant change modifying ensemble average cost optimized training error rates higher however testing networks gives similar error rates fast dropout might called batchwise dropout fast dropout name already taken fast dropout different approach solving problem training large neural network quickly without overfitting discuss differences two techniques appendix implementation theory matrices addition operation multiplication algorithm suggests bulk processing time spent matrix multiplication performance improvement possible compared networks using independent dropout dropout practice sgemm functions use strassen algorithm naive matrix multiplication performance improvement possible implemented batchwise dropout convolutional neural networks using found using highly optimized cublassgemm function bulk work cuda kernels used form submatrices wkdropout update using worked well better software available http performance may well obtained writing matrix multiplication function understands submatrices large networks minibatches found batchwise dropout substantially faster see figure approximate overlap lines left indicates batchwise dropout reduces training time similar manner halving number hidden units graph right show time saving obtained using submatrices implement dropout note consistency left hand side graph compares batchwise dropout networks networks using independent dropout need implement dropout masks independent dropout means figure slightly undersells performance benefits batchwise dropout alternative independent dropout smaller networks performance improvement issues result gpu utilized implementing batchwise dropout cpus would expect see greater performance gains smaller networks cpus lower bandwidth ratio efficiency tweaks hidden units drop number dropped units approximately small variation standard deviation really dealing binomial random sizes submatrices wkdropout xdropout therefore slightly random interests efficiency simplicity convenient remove randomness alternative dropping unit independently probability subset 
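A minimal NumPy sketch of the two forward passes described above, assuming a single fully connected layer with rectifier units and drop probability d. The function names and toy sizes are illustrative and not taken from the authors' code; only the submatrix product x[:, keep] @ W[keep, :] is evaluated in the batchwise case, which is where the per-epoch savings come from.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_independent(x, W, d):
    """Independent dropout: a fresh Bernoulli mask for every sample in the minibatch.
    The full matrix product is computed even though many of its terms are masked out."""
    mask = (rng.random(x.shape) >= d).astype(x.dtype)
    return np.maximum(0.0, (x * mask) @ W)              # rectifier units

def forward_batchwise(x, W, d):
    """Batchwise dropout: one dropout pattern shared by the whole minibatch,
    implemented by slicing the surviving input units of x and rows of W."""
    keep = np.flatnonzero(rng.random(x.shape[1]) >= d)  # surviving hidden units
    return np.maximum(0.0, x[:, keep] @ W[keep, :]), keep

# toy usage: minibatch of 100 samples, 1000 -> 500 units, 50% dropout
x = rng.standard_normal((100, 1000))
W = 0.01 * rng.standard_normal((1000, 500))
y_independent = forward_independent(x, W, 0.5)
y_batchwise, keep = forward_batchwise(x, W, 0.5)
```

At test time the full weight matrix is used with the usual rescaling by 1 - d; dropping output-side units works the same way by restricting to a subset of the columns of W.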
exactly hidden units uniformly random set subsets still case unit dropped probability however within hidden layer longer strict independence regarding units dropped probability dropping first two hidden units changes slightly also used modified form minibatch gradient descent minibatch updated elements wkdropout element vkdropout denoting momentum corresponding wkdropout update vkdropout wkdropout wkdropout vkdropout momentum still functions autoregressive process smoothing gradients reducing rate decay factor test train errors epochs number dropout patterns used figure dropout networks trained using restricted number dropout patterns independent experiment blue line marks test error network half many hidden units trained without dropout results networks fact batchwise dropout takes less time per training epoch would count nothing much larger number epochs needed train network large number validation runs needed optimize training process carried number simple experiment compare independent batchwise dropout many cases could produced better results increasing training time annealing learning rate using validation adjust learning process etc choose primary motivation batchwise dropout efficiency excessive use efficient datasets used set pixel handwritten digits dataset pixel color pictures artificial dataset designed easy overfit following mnist trained networks dropout input layer dropout hidden layers artificial dataset increased dropout reduced test error cases used relatively small networks would time train number independent copies networks useful order see apparent differences batchwise independent dropout significant noise http mnist first experiment explores effect dramatically restricting number dropout patterns seen training consider network three hidden layers size trained epochs using minibatches size number distinct dropout patterns large assume never generate dropout mask twice independent dropout training see million different dropout patterns batchwise dropout training see times fewer dropout patterns types dropout trained independent networks epochs batches size batchwise dropout got mean test error range independent dropout got mean test errors range difference mean test errors statistically significant explore reduction number dropout patterns seen changed code pseudo randomly generating batchwise dropout patterns restrict number distinct dropout patterns used modified period minibatches see figure corresponds ever using one dropout mask network hidden weights never actually trained input features ignored training corresponds training network half many hidden test error network marked blue line figure error testing higher blue line untrained weights add noise network less thirteen likely networks hidden units dropped every time receive training range thirteen fifty likely every hidden unit receives training pairs hidden units adjacent layers get chance interact training corresponding connection weight untrained number dropout masks increases hundreds see quickly case diminishing returns artificial dataset test effect changing network size created artificial dataset classes containing training samples test samples class defined using independent random walk length discrete cube class generated random walk used produce training test samples randomly picking points along length walk giving binary sequences length randomly flipping bits trained three layer networks hidden units per layer minibatches size see figure looking training error training epochs independent dropout 
seems learn slightly faster however looking test errors time seem much difference two forms dropout note number training epochs training time batchwise dropout networks learning much faster terms real time independent batchwise test error train error independent batchwise epoch epoch figure artificial dataset classes corresponding noisy observations one dimensional manifold learning using fully connected network rather difficult trained three layer networks hidden units per layer minibatches size augmented training data horizontal flips see figure convolutional networks dropout convolutional networks complicated weights shared across spatial locations suppose layer spatial size features per spatial location operation convolution filters minibatch size convolution involves arrays sizes layer weights dropout normally applied using dropout masks size layers call independent decisions mode every spatial location contrast define batchwise dropout mean using dropout mask shape minibatch convolutional filter either across spatial locations two forms regularization seem quite different things consider filter detects color red picture red truck dropout applied independently law averages message red transmitted high probability loss spatial information contrast independent batchwise independent batchwise test error train error epoch epoch figure results using networks different sizes batchwise dropout chance delete entire filter output experimentally substantial difference could detect batchwise dropout resulted larger errors training implement batchwise dropout efficiently notice dropout masks corresponds forming subarrays wkdropout weight arrays size simply regular convolutional operation using wkdropout makes possible example take advantage highly optimized cudnnconvolutionforward function nvidia cudnn package mnist mnist trained type cnn two layers filters two layers fully connected layer three places applying dropout test errors two dropout methods similar see figure varying dropout intensity first experiment used small convolutional network small filters network scaled version network four places apply dropout test errror independent batchwise epochs figure mnist test errors training repeated three times dropout methods input layer trained network epochs using randomly chosen subsets training images reflected image horizontally probability one half testing used centers images figure show effect varying dropout probability training errors increasing training errors higher batchwise dropout curves seem local minima around batchwise test error curve seems shifted slightly left independent one suggesting given value batchwise dropout slightly stronger form regularization many convolutional layers trained deep convolutional network without data augmentation using notation network form output consists convolutions filters layer layers followed two fully connected layers network million parameters used increasing amount dropout per layer rising linearly dropout third layer dropout even though amount dropout used middle layers small batchwise dropout took less half long per epoch independent dropout applying small amounts independent dropout large creates bandwidth network operation stochastic test errors reduced repetition batchwise dropout resulted average test error testing independent dropout resulted average test error reduced testing independent testing batchwise training error batchwise testing independent training figure results using convolutional network dropout probability batchwise dropout 
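For the convolutional case described above, batchwise dropout amounts to keeping or dropping entire filters for the whole minibatch and all spatial locations, so it can again be implemented by slicing. A small NumPy sketch of the two mask shapes, assuming activations stored as (samples, filters, locations); names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv_dropout_independent(a, d):
    """Independent dropout: an i.i.d. decision at every entry of the
    (samples, filters, locations) activation array."""
    return a * (rng.random(a.shape) >= d)

def conv_dropout_batchwise(a, d):
    """Batchwise dropout: a single (1, filters, 1)-shaped decision, i.e. each
    convolutional filter is either kept or dropped for the whole minibatch,
    so the corresponding slice of the weight array is never touched."""
    keep = np.flatnonzero(rng.random(a.shape[1]) >= d)   # surviving filters
    return a[:, keep, :], keep

a = rng.standard_normal((32, 64, 28 * 28))   # 32 samples, 64 filters, 28x28 locations
a_ind = conv_dropout_independent(a, 0.25)
a_bat, keep = conv_dropout_batchwise(a, 0.25)
```

The same (1 - d) rescaling applies at test time in both cases.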
produces slightly lower minimum test error conclusions future work implemented efficient form batchwise dropout things equal seems learn roughly speed independent dropout epoch faster given fixed computational budget often allow train better networks potential uses batchwise dropout explored yet restricted boltzmann machines trained contrastive divergence dropout batchwise dropout could used increase speed training fully connected network sits top convolutional network training top bottom network separated different computational nodes fully connected network typically contains nodes synchronized difficult due large size matrices batchwise dropout nodes could communicate instead reducing bandwidth needed using independent dropout recurrent neural networks disruptive allow effective learning one solution apply dropout parts network batchwise dropout may provide less damaging form dropout unit either whole time period dropout normally used training generally accurate use whole network testing purposes equivalent averaging ensemble dropout patterns however setting analyzing successive frames video camera may efficient use dropout testing average output network time nested dropout variant regular dropout extends properties pca deep networks batchwise nested dropout particularly easy implement submatrices regular enough qualify matrices context sgemm function using lda argument dropconnect alternative form regularization dropout instead dropping hidden units individual elements weight matrix dropped using modification similar one section opportunities speeding dropconnect training approximately factor two references ciresan meier schmidhuber deep neural networks image classification computer vision pattern recognition cvpr ieee conference pages ben graham fractional http hinton salakhutdinov reducing dimensionality data neural networks science science alex krizhevsky learning multiple layers features tiny images technical report alex krizhevsky one weird trick parallelizing convolutional neural networks http cun bottou bengio haffner learning applied document recognition proceedings ieee november oren rippel michael gelbart ryan adams learning ordered representations nested dropout http nitish srivastava geoffrey hinton alex krizhevsky ilya sutskever ruslan salakhutdinov dropout simple way prevent neural networks overfitting journal machine learning research ilya sutskever james martens george dahl geoffrey hinton importance initialization momentum deep learning icml volume jmlr proceedings pages wan matthew zeiler sixin zhang yann lecun rob fergus regularization neural networks using dropconnect jmlr sida wang christopher manning fast dropout training jmlr wojciech zaremba ilya sutskever oriol vinyals recurrent neural network regularization http fast dropout might called batchwise dropout fast dropout name already taken fast dropout alternative form regularization uses probabilistic modeling technique imitate effect dropout hidden unit replaced gaussian probability distribution fast relates reducing number training epochs needed compared regular dropout reference results training network mnist dataset input dropout dropout fast dropout converges test error epochs appears substantially better test error obtained preprint epochs regular dropout training however dangerous comparison make authors used scheme designed produce optimal accuracy eventually one hundred epochs tried using batchwise dropout minibatches size annealed learning rate trained network two hidden layers rectified linear units 
training epochs resulted test error epochs test error reduced moreover per epoch faster regular dropout slower assuming make comparisons across different epochs batchwise dropout training take less time epoch fast dropout training http using software implement network batchwise dropout training epoch take times long independent dropout figures given ratio independentdropout using minibatch sgd using train networks training time per epoch presumably even times longer use requiring additional forward passes neural network | 9 |
matching learning ramesh vijay yash june jun abstract consider problem faced service platform needs match supply demand also learn attributes new arrivals order match better future introduce benchmark model heterogeneous workers jobs arrive time job types known platform worker types unknown must learned observing match outcomes workers depart performing certain number jobs payoff match depends pair types goal maximize rate accumulation payoff main contribution complete characterization structure optimal policy limit worker performs many jobs platform faces worker myopically maximizing payoffs exploitation learning type worker exploration creates multitude bandit problems one worker coupled together constraint availability jobs different types capacity constraints find platform estimate shadow price job type use payoffs adjusted prices first determine learning goals worker balance learning payoffs exploration phase myopically match achieved learning goals exploitation phase keywords matching learning platform bandit capacity constraints introduction paper considers central operational challenge faced platforms serve matchmakers supply demand platforms face fundamental one hand efficient operation involves making matches generate value exploitation hand platform must continuously learn newly arriving participants efficiently matched exploration paper develop structurally simple nearly optimal approach resolving model consider two groups participants workers jobs terminology inspired online labor markets upwork remote work handy housecleaning thumbtack taskrabbit local tasks etc however model viewed stylized abstraction many matching platforms well time discrete new workers jobs arrive beginning every time period workers depart performing stanford university rjohari stanford university vjkamble columbia business school ykanoria specified number jobs time worker job matched random payoff generated observed platform payoff distribution depends worker type job type emphasis interaction matching learning model several features focus analysis paper first assume platform centrally controls matching beginning time period platform matches worker system available job second strategic considerations modeled remains interesting direction future work finally focus goal maximizing rate payoff describe learning challenge faced platform platforms known one side platform accordingly assume job types known type new worker unknown platform learns workers types payoffs obtained matched jobs however supply jobs limited using jobs learn reduce immediate payoffs well deplete supply jobs available rest marketplace thus presence capacity constraints forces carefully design exploration exploitation matching algorithm order optimize rate payoff generation main contribution paper development matching policy nearly payoff optimal algorithm divided two phases worker lifetime exploration identification worker type exploitation optimal matching given worker identified type refer policy deem decentralized matching develop intuition solution consider simple example two types jobs easy hard two types workers expert novice experts types tasks well novices easy tasks well suppose limited supply easy jobs mass novices available less total mass novices experts particular maximize payoff platform must learn enough match experts hard jobs deem several key features understood context example first deem natural decentralization property determines choice job type worker based worker history decentralization arguably essential online 
platforms matching typically carried individual basis rather centrally order accomplish decentralization essential algorithm account externality rest market worker matched given job example easy jobs relatively scarce matching worker job makes unavailable rest market approach price externality find shadow prices capacity constraints adjust payoffs downward using prices second algorithm design specifies learning goals ensure efficient balance exploration exploitation particular example note two kinds errors possible exploring misclassifying novice expert vice versa occasionally mislabeling experts novices catastrophic experts need easy jobs anyway algorithm account errors exploitation phase thus relatively less effort invested minimizing error type however mistakenly labeling novices experts catastrophic case novices matched hard jobs exploitation reasonable proxy goal platform say takes fraction total surplus generated matches generally believe benchmark problem whose solution informs algorithmic design settings related objectives revenue maximization phase causing substantial loss payoff thus probability errors must kept small major contribution work precisely identify correct learning goals exploration phase design deem meet learning goals maximizing payoff generation third deem involves carefully constructed exploitation phase ensure capacity constraints met maximizing payoffs naive approach exploitation phase would match worker job type yields maximum payoff corresponding type label turns approach leads significant violations capacity constraints hence poor performance reason generic capacitated problem instance one worker types indifferent multiple job types suitable necessary achieve good performance theoretical development achieve modifying solution static optimization problem known worker types whereas practical implementation deem achieves appropriate via simple dynamically updated shadow prices main result theorem shows deem achieves essentially optimal regret number jobs performed worker lifetime grows regret loss payoff accumulation rate relative maximum achievable known worker types setting lower bound regret log function system parameters deem achieves level regret leading order achieves regret log log situations inherent tension goals learning payoff maximization develop intuition consider expanded version example worker either expert novice programmer well expert novice graphic designer suppose supply jobs worker types known expert graphic designers also novice programmers would matched graphic design learning worker types expert graphic designers must matched approximately log programming jobs learn whether novice expert programmers turn whether matched graphic design programming jobs respectively thus log average regret per period incurred relative optimal solution known types deem precisely minimizes regret incurred distinctions made thus achieving lower bound regret theory complemented practical heuristic call optimizes performance small values implementation simulation demonstrates natural way translating work practice particular simulations reveal substantial benefit jointly managing capacity constraints learning deem remainder paper organized follows discussing related work section present model outline optimization problem interest platform section section discuss three key ideas design deem present formal definition section present main theorem discuss optimal regret scaling section present sketch proof main result section discuss practical implementation deem 
present heuristic section use simulations compare performance deem bandit algorithms conclude section proofs appendices would case programming jobs high demand valuable conditional successful completion graphic design jobs related literature foundational model investigating tradeoff stochastic bandit mab problem goal find adaptive policy choosing among arms unknown payoff distributions regret measured expected payoff best arm closest work literature paper agrawal model assume joint vector arm distributions take one finitely many values introduces correlation across different arms depending certain identifiability conditions optimal regret either log model analog job types arms worker solve mab problem identify true type worker among finite set possible worker types work also related recent literature mab problems capacity constraints refer broadly bandits knapsacks formulation classical mab problem modification every pull arm depletes vector resources limited supply formulation subsumes several related problems revenue management demand uncertainty budgeted dynamic procurement variety extensions recently significant generalization problem contextual bandit setting concave rewards convex constraints considerable difference model bandits knapsacks bandits knapsacks consider single mab problem fixed time horizon setting hand seen system ongoing arriving stream mab problems one per worker mab problems coupled together capacity constraints arriving jobs indeed noted introduction significant structural point solve problems decentralized manner ease implementation online platforms conclude discussing directions work related paper number recent pieces work consider efficient matching dynamic twosided matching markets related class dynamic resource allocation problems online bipartite matching also well studied computer science community see survey similar current paper fershtman pavan also study matching learning mediated central platform relative model work constraints number matches per agent consider agent incentives finally recent work studies pure learning problem setting similar capacity constraints type similarities style analysis paper focuses exclusively learning exact type rather balancing exploration exploitation paper model optimization problem section first describe model particular describe primitives platform workers jobs givea formal specification matching process study conclude precisely defining optimization problem interest solve paper preliminaries workers jobs convenience adopt terminology workers jobs describe two sides market assume fixed set job types fixed set worker types key point model consider continuum model evolution system described masses workers particular time step mass workers type mass jobs type arrive follows model scenario type uncertainty exists workers platform know types arriving jobs exactly need learn types arriving workers also assume arrival rates jobs workers known platform later section discuss platform might account possibility parameters unknown matching payoff matrix mass workers type matched mass jobs type assume fraction mass matches generates reward per unit mass fraction generates reward zero per unit mass formal specification meant capture model setting matches type workers type jobs generate bernoulli payoff concern division payoffs workers employers paper instead assume platform goal maximize total rate payoff call matrix payoff matrix throughout assume two rows key assumption work platform knows matrix particular considering platform 
enough aggregate information understand compatibility different worker job types however given worker newly arriving platform platform know worker type thus perspective platform uncertainty payoffs period although platform knows given mass workers type exist platform identity workers type known define empty job type worker types matched generate zero reward view representing possibility worker goes unmatched thus assume unbounded capacity job type available worker lifetimes imagine arriving worker lives system time opportunity matched job time step job takes one unit time complete assume platform knows note total mass workers type system time step theoretical analysis later consider scaling regime remains fixed regime worker lifetimes grow infinity arrival rates scale total mass workers type available time period remains fixed generalized imbalance throughout technical development make mild structural assumption problem instance defined tuple captured following definition say arrival rates satisfy generalized imbalance condition pair nonempty subsets worker types job formally seen continuum scaling discrete system see would case platform operator takes fixed percentage total payoff generated match mild requirement simply ensures possible principle distinguish pair worker types analysis results generalize random worker lifetimes across workers different types mean distribution lifetime exceeds high probability types total worker arrival rate exactly matches total job arrival rate formally generalized imbalance condition holds note condition depend matrix worker history define state system resulting matching dynamics need notion worker history worker history tuple job type worker matched time step system corresponding reward obtained note since workers live jobs histories let denote empty history system dynamics goal model following process operator observes point time distribution histories workers platform also knows job arrival rate matching policy platform amounts determining mass workers type history matched type jobs ultimately process generate high payoffs time platform must choose jobs learn worker types order optimize payoffs intuition mind give formal specification system dynamics system profile system profile joint measure worker histories worker types mass workers system history type evolution system dynamical system system matching policy describe dynamics assume platform uses matching policy match entire mass workers jobs time step think unmatched workers matched empty job type assume mass jobs left unmatched given period disappears end period results depend assumption suppose system starts time workers system matching policy system specifies time given system profile mass workers history matched jobs type particular let denote fraction workers history matched jobs type time given system profile thus note matching policy acts worker history true type worker platform assumed know worker types except learned history dynamics features completely determine evolution system profile observe total mass workers type history set condition holds open dense strictly positive real numbers platform directly observepthe system profile infer platform observes mass workers possible history infer individually using knowledge arrival rates matrix allows calculate likelihood seeing sequence outcomes worker type together bayes rule follows ultimately consider analysis dynamical system initial conditions irrelevant long initial mass workers bounded matched jobs type time given policy system profile 
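Since the payoff of a (worker type, job type) match is Bernoulli and the platform knows the payoff matrix and the arrival rates, the posterior over a worker's type given its history follows from Bayes' rule exactly as described above. A short sketch, with my own variable names (A for the payoff matrix, rho for the arrival masses used as the prior):

```python
import numpy as np

def type_posterior(history, A, rho):
    """Posterior distribution over worker types given an observed history.

    history : list of (job_type, reward) pairs, reward in {0, 1}
    A[i, j] : probability that a type-i worker succeeds on a type-j job
    rho[i]  : arrival mass of type-i workers, used as the prior
    Assumes Bernoulli payoffs that are conditionally independent across matches.
    """
    log_post = np.log(np.asarray(rho, dtype=float))
    for j, r in history:
        p = np.clip(A[:, j], 1e-12, 1.0 - 1e-12)
        log_post += np.log(p) if r == 1 else np.log(1.0 - p)
    log_post -= log_post.max()            # normalize in log space for stability
    post = np.exp(log_post)
    return post / post.sum()
```

The same log-likelihoods drive the MAP estimate and the posterior odds used by the exploration phase of the policy defined later.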
decentralization policies note general policies may may complex dependence system profile consider much simpler class policies call policies policies exists words policy fraction workers history matched jobs type depend either time full system profile thus policies decentralized obvious concern point policy allocate jobs type formalize capacity constraint particular policy exceed capacity job type period satisfies let denote class policies given section appendix establish suffices restrict attention policies satisfy remark feasible policy exists policy satisfying capacity constraints achieves payoff accumulation rate arbitrarily close former policy particular policies satisfying capacity constraints suffice achieve highest possible payoff accumulation rate steady state policy first suppose capacity constraints consider system dynamics assuming system initially starts empty dynamics yields unique steady state inductively computed refer measure steady state induced policy routing matrix policy system steady state time period induces fraction mass workers type assigned type jobs call routing matrix achieved policy row stochastic matrix row sums observe mass demand jobs type workers type time period total mass demand jobs type time period let set routing matrices achievable worker jobs policies note capacity constraints ignored definition appendix show convex polytope see proposition optimization problem paper focuses maximization rate payoff accumulation subject capacity constraints leads following optimization problem maximize subject objective rate payoff accumulation per time period expressed terms routing matrix induced policy constraint capacity constraint system stable total demand jobs type greater arrival rate jobs type since convex polytope linear program albeit complex one complexity problem hidden complexity set includes possible routing matrices obtained using policies remainder paper devoted solving problem characterizing value considering asymptotic regime benchmark known worker types evaluate performance relative natural benchmark maximal rate payoff accumulation possible worker types perfectly known upon arrival case stochastic matrix feasible routing matrix let denote set stochastic matrices note routing matrix implementable simple policy known worker types given desired routing matrix route fraction workers type jobs type thus known worker types maximal rate payoff accumulation given solution following optimization problem maximize subject let denote maximal value preceding optimization problem let denote solution linear program special case static planning problem arises frequently operations literature see problem also viewed version assignment problem due shapley shubik resources divisible regret evaluate performance given policy terms regret relative particular given policy satisfying define regret focus asymptotic regime try find policies small regret regime asymptotic regime allows identify structural aspects policies perform well appendix see proposition show relatively easy design policies achieve vanishing regret even regret within constant factor smallest possible idea straightforward informally large policies explore vanishing fraction worker lifetimes able learn worker true type sufficiently well yield rate payoff accumulation regret converges zero limit reason analysis focuses refined notion asymptotic optimality particular focus developing policies achieve nearly optimal rate regret approaches zero formalized theorem note terminology note intuitively policies 
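The benchmark with known worker types described above is a small linear program over row-stochastic routing matrices subject to the job capacity constraints, and its dual variables give the shadow prices used later for the externality adjustment. A sketch with scipy.optimize.linprog; the notation (rho for worker masses, phi for job arrival rates, A for payoffs, a final zero-payoff column for the unconstrained empty job) and the toy numbers are mine, chosen only to resemble the easy/hard example, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# toy instance: rows = worker types (expert, novice), columns = (easy, hard, empty)
A   = np.array([[0.9, 0.9, 0.0],
                [0.9, 0.1, 0.0]])
rho = np.array([0.3, 0.7])        # worker mass per period, by type
phi = np.array([0.5, 1.0])        # arrival rates of the two real job types

I, J = A.shape
c = -(rho[:, None] * A).ravel()   # linprog minimizes, so negate the payoff rate

# capacity: sum_i rho_i x_{ij} <= phi_j for every real (non-empty) job type j
A_ub = np.zeros((len(phi), I * J))
for j in range(len(phi)):
    for i in range(I):
        A_ub[j, i * J + j] = rho[i]

# the routing matrix is row-stochastic: sum_j x_{ij} = 1 for every worker type i
A_eq = np.zeros((I, I * J))
for i in range(I):
    A_eq[i, i * J:(i + 1) * J] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=phi, A_eq=A_eq, b_eq=np.ones(I),
              bounds=(0, 1), method="highs")
x_star = res.x.reshape(I, J)      # benchmark routing matrix
W_star = -res.fun                 # optimal rate of payoff accumulation
# in recent SciPy the dual values of the capacity constraints are exposed as
# res.ineqlin.marginals; their magnitudes are the benchmark shadow prices
```

Regret of a policy is then measured against W_star, as in the definition above.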
feature decisions taken basis history given worker basis system profile whole sequel typically refer probability worker history matched job type use terminology make presentation intuitive since intention algorithms implemented level individual worker history however formalize arguments emphasize proofs translate fraction workers history matched job type correspondence applies throughout technical development decentralized matching deem policy section present design sequence policies achieves nearly optimal rate convergence refer policy design deem gret decentralized matching main result stated next section theorem exactly quantify regret performance deem upper bound regret characterize nearly optimal lower bound regret feasible policy begin understand challenges involved consider example figure example two types workers novice expert mass present steady state two types jobs easy hard arriving rate jobs workers easy hard expert novice figure example make several observations regarding example inform subsequent work benchmark example optimal solution benchmark problem known types routes novices easy jobs mass experts easy jobs mass experts hard jobs course problem know worker types arrival capacity constraints affect optimal policy need learn easy hard jobs infinite supply policy matches workers easy jobs optimal however finite supply available easy jobs workers must hard jobs workers clearly payoff optimality optimal policy aim match experts hard jobs possible first learns worker expert structure type worker learnt matching hard jobs perform well jobs experts fail novices minimizing regret requires learning front assigning workers unknown type hard jobs necessarily incurs regret relative benchmark indeed novices unknowingly matched hard jobs lead regret per unit mass workers period minimizing regret therefore requires algorithm learn worker types also relatively early lifetime workers identified experts assigned many hard jobs work leads structure separate policy exploration exploitation phases policy first tries learn worker type exploits assigning worker jobs assuming learned type correct exploration phase length log short relative worker lifetime mistakes exploration phase worse others two kinds mistakes policy make learning mistakenly identify novices experts mistakenly identify experts novices mistakes differ impact regret suppose end exploration phase algorithm misclassifies novice expert dire impact regret novice assigned hard jobs exploitation phase noted incurs regret per unit mass workers misclassified way per unit time thus must work hard exploration phase avoid errors hand suppose end exploration phase algorithm misclassifies expert novice mistake far less consequential workers misclassified way assigned easy jobs mass experts must assigned easy jobs even benchmark solution known types therefore long misclassified mass large adjust exploitation phase discussion highlights need precisely identify learning goals algorithm minimize regret strongly worker type need distinguished others major contribution work demonstrate optimal construction learning goals regret minimization noted capacity constraints fundamentally influence learning goals algorithm remainder section describe key ideas behind construction policy highlighted issues raised preceding example formally describe deem section state main theorem section key idea use shadow prices externality adjustment payoffs begin first noticing immediate difficulty arises using policies presence capacity constraints policies 
decentralized act history worker use aggregate state information system conveys whether capacity constraints met order solve therefore need find way adjust capacity constraints despite fact policy acts level worker histories key insight use shadow prices capacity constraints adjust payoffs measure regret respect adjusted payoffs recall linear program let optimal shadow prices dual variables capacity constraints standard duality results follows policy optimal also optimal following unconstrained optimization problem thus one may attempt account capacity constraints using shadow challenge set quite complex thus characterizing optimal shadow prices reasonable path forward instead use optimal shadow prices benchmark linear program known types adjust payoffs measure regret respect adjusted payoffs practical heuristic implement uses different approach estimate shadow prices see section let denote vector optimal shadow prices capacity constraint problem known types using generalized imbalance condition show prices uniquely determined see proposition appendix although large platform able learn type worker type early lifetime leading small motivates analog develop algorithm problem constraints job capacities violated complementary slackness conditions satisfied job type fully utilized show leads upper bound main result key idea meet required learning goals minimizing regret noted discussion example figure must carefully define learning goals algorithm worker types need distinguished others level confidence key contribution work formalize learning goals algorithm section define learning goals algorithm outline exploration phase meets goals let set optimal job types worker type defined arg standard duality argument demonstrates optimal solution benchmark worker type assigned jobs effort needed ensure policy violate capacity constraints complementary slackness holds recall example figure far important misclassify novice expert misclassify expert novice formalize distinction following definition definition say type needs strongly distinguished type worker type let str set types needs strongly distinguished str words means needs strongly distinguished least one optimal job type optimal whereas needs weakly distinguished optimal job types also optimal definition easily understood example figure subsequent discussion particular note example benchmark shadow prices easy hard thus novice easy expert easy hard thus experts need strongly distinguished novices since hard jobs optimal experts novices hand novices need weakly distinguished experts since easy jobs optimal experts well exploration phase algorithm goal classify worker type quickly possible preceding definition use formalize learning goals phase particular consider making error true type misclassify str probability error misclassification error tolerable grows large example figure choose log target error probability kind error hand str optimal target error probability much smaller particular optimal target error probability shown approximately choose larger target incur relatively large expected regret exploitation due misclassification choose smaller target exploration phase unnecessarily long thus incur relatively large regret exploration phase learning goals defined exploration phase deem operates one two subphases either guessing confirmation follows every job allocation opportunity check whether posterior probability maximum posteriori map estimate worker type sufficiently high probability low say policy guessing subphase exploration phase job 
type chosen random next match hand high particular greater log times posterior probability worker type say policy confirmation subphase exploration phase regime policy works confirm map estimate specifically confirmation subphase policy focuses strongly distinguishing map types str must done minimum regret frame optimization problem see essentially goal find distribution job types minimizes expected regret confirmation goals met confirmation subphase policy allocates worker jobs according distribution type confirmed conclude briefly explaining role guessing phase minimizing regret informally guessing necessary confirmation minimizes regret correct worker type high probability particular suppose two worker types optimal job types case payoff maximization require distinguishing nevertheless possible confirmation policies differ without necessarily distinguishing case first needs distinguished probability error achieve optimal regret leading order concretely guessing phase map early worker lifetime policy never discover mistake ultimately confirm using wrong policy incurring additional leading order regret log key idea optimally allocate exploitation phase meeting capacity constraints algorithm completes exploration phase enters exploitation phase phase algorithm aims match worker jobs maximize rate payoff generation given confirmed type label naive approach would match worker labeled type job type since optimal job types worker type externality adjustment approach turns fail spectacularly generically leads regret occurs set fixed shadow prices see need following fact fact generalized imbalance long least one capacity constraint binding optimal solution benchmark problem known types least one worker supported multiple job types fact implies appropriate multiple optimal job types necessary exploitation one worker types order achieve vanishing regret order implement appropriate suppose assign jobs exploitation phase using routing matrix solves benchmark problem case worker confirmed type matched job type probability however naive approach needs modification overcome two issues first capacity used exploration phase effective routing matrix exploration phase match second exploration phase end incorrectly classified worker type policy exploitation phase chooses routing matrix resembles addresses two concerns raised preceding paragraph crucially chosen ensure job types assigned positive probability satisfy complementary slackness conditions show proposition using fact indeed exists large enough generalized imbalance condition show compute note fixed routing matrix implemented decentralized manner comment largely theoretical device used obtain provable regret optimality policy implementation deem see section propose far simpler solution use dynamically updated shadow prices automatically achieve appropriate shadow prices respond manner based currently available supply different job types price job type rises available supply falls particular fluctuations shadow prices naturally lead necessary tiebreaking efficient exploitation formal definition deem based discussion section provide formal definition policy first define maximal utility choose arg min distributions bernoulli bernoulli set distributions idea sampling job types allows policy distinguish simultaneously str incurring smallest possible regret appendix show written small linear program optimization problem multiple solutions pick one largest denominator hence largest numerator well thus maximizing learning rate subject optimality choose arg 
max min discuss details appendix let job type chosen opportunity outcome let define let denote likelihood observed history job worker type let mapk arg map estimate based history define ratio posterior probabilities type convenience refer prior odds relative posterior odds relative jobs deem defined follows phase exploration suppose mapk guessing subphase log choose next job type uniformly random confirmation subphase strongly distinguish types str log draw next job type distribution exit condition exploration phase log worker labeled type policy moves exploitation phase worker never returned exploration phase phase exploitation every job opportunity worker confirmed type choose job probability routing matrix specified proposition appendix system capacity constraints violated steady state main result main result following theorem particular prove lower bound regret constructed preceding section essentially policy show sequence policies achieves lower bound divergence bernoulli bernoulli distribution defined log log theorem fix two rows identical generalized imbalance condition holds constant lower bound policy feasible log feasible upper bound sequence policies log log log constant appears theorem depends primitives problem defined follows min note captures regret per unit mass service opportunities workers type informally instances conflict exploration learning worker type exploitation maximizing payoffs larger values case corresponds instances goals learning regret minimization aligned learning require regret log case result establishes chosen policies nearly asymptotically optimal within log log hand instances instances tension learning payoffs instances result establishes chosen policies achieve asymptotically optimal regret upto leading order constant best understood terms definition exploration phase note jobs fixed workers true type smallest easy workers hard value log posterior odds log expected rate expert confirmation thus large time taken confirm worker types str app proximately log hence novice regret incurred confirmation complete per unit mass workers type approximately figure example log optimizing log regret unavoidable results expected regret nearly log must incurred strong distinguishing goals met unit mass workers type translates expected regret nearly log log owing workers type per time unit reasoning forms basis lower bound formalized proposition appendix regret log unavoidable develop intuition case consider example modified payoff matrix shown figure shown case regret log unavoidable event true type worker novice problem following distinguish novices experts policy must allocate workers hard jobs hard jobs strictly suboptimal novices true type worker novice regret unavoidable particular develop intuition magnitude regret imagine policy assigns workers hard jobs first steps leading absolute regret per unit mass workers based realized payoffs estimates worker type confidence exp worker estimated novice policy choose assign easy jobs worker however means learning worker type expected contribution absolute regret times probability worker truly expert exp per unit mass workers combining see total absolute regret least exp log lifetime unit mass workers log needed achieve log absolute regret divide obtain regret per unit mass service opportunities workers discussion motivates following definition definition consider worker type suppose exists another type say ordered pair difficult type pair similar definition also appears modification sets defined respect payoffs account 
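To make the exploration phase defined above concrete, the sketch below computes, for one worker, the price-adjusted optimal job sets and the strong-distinguishing sets, and then takes one guessing / confirmation / exit decision from the log-likelihoods of the observed history. The thresholds used here (log N on the posterior odds to leave guessing, N log N on the odds against every type in S^str to exit) and the variable names are illustrative stand-ins for the constants and notation fixed in the formal definition; the confirmation distribution beta is taken as given rather than recomputed from its linear program.

```python
import numpy as np

rng = np.random.default_rng(2)

def optimal_job_sets(A, p, tol=1e-9):
    """J*(i): job types maximizing the price-adjusted payoff A[i, j] - p[j]."""
    adj = A - p[None, :]
    return [set(np.flatnonzero(adj[i] >= adj[i].max() - tol)) for i in range(A.shape[0])]

def strong_distinguish_sets(J_star):
    """S^str(i): types i' such that some job type optimal for i is not optimal for i'."""
    n = len(J_star)
    return [{ip for ip in range(n) if ip != i and not (J_star[i] <= J_star[ip])}
            for i in range(n)]

def deem_explore_step(log_lik, S_str, beta, N):
    """One exploration-phase decision for a single worker.
    log_lik[i] is the log-likelihood of the worker's history under type i;
    beta[i] is the confirmation sampling distribution over job types."""
    log_lik = np.asarray(log_lik, dtype=float)
    i_hat = int(np.argmax(log_lik))                   # MAP type estimate
    log_odds = log_lik[i_hat] - log_lik               # log posterior odds vs. each type
    others = [i for i in range(len(log_lik)) if i != i_hat]

    if min(log_odds[i] for i in others) < np.log(np.log(N)):
        # guessing subphase: the MAP estimate is not yet trusted
        return "guess", int(rng.integers(len(beta[i_hat])))
    if all(log_odds[i] >= np.log(N * np.log(N)) for i in S_str[i_hat]):
        # learning goals met: label the worker and move to exploitation
        return "exploit", i_hat
    # confirmation subphase: sample from the regret-minimizing distribution for i_hat
    return "confirm", int(rng.choice(len(beta[i_hat]), p=beta[i_hat]))
```

In the exploitation phase the labelled worker is then assigned using the routing matrix, or, in the practical implementation, the price-adjusted greedy rule sketched in the simulation section below.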
capacity constraints constant difficult type pair general none job types allow distinguish jobs strictly suboptimal policy achieves small regret must distinguish must assign worker jobs outside make distinction leads regret log per unit mass workers type lifetime workers hand difficult type pair conflict learning regret minimization one show value attained distribution supported see note fully supported numerator however type difficult type pair denominator strictly positive thus case main result says algorithm achieves regret log log regret basically results uniform sampling job types guessing phase accounts log log fraction lifetime worker proof sketch proof theorem found appendix present sketch critical ingredient proof following relaxed optimization problem capacity constraints capacity violations charged prices optimization problem known worker types max fact proof demonstrates regret brought choosing different threshold guessing phase lower bound regret least one difficult pair worker types section upper bound performance policy problem expressed relative result follows directly log precisely constant appearing standard duality argument know hence bound holds well see proposition yielding lower bound regret original problem feasible problem upper bound regret two key steps proving log arbitrary routing matrix first show policy supported achieves near optimal performance single bandit problem formally abuse notation let denote value attained policy problem log log log log shown proposition thus problem next part proof show design routing matrix following conditions depends exploitation phase policy satisfied complementary slackness feasibility choice exploitation shown proposition deduce phase feasible problem complementarity slackness property implies yielding upper bound regret correct label worker construction end exploration phase learned confidence least fact coupled generalized imbalance condition leading flexibility modifying fact sufficient ensure appropriate feasible choice correct deviations terms capacity utilizations job types arising short exploration phase infrequent cases exploitation based incorrect worker label coming exploration phase proves result similar policy practical considerations heuristic theoretical analysis deem focused asymptotic regime section focus number practical considerations arise considering implementation policy like deem first discuss practical approach managing capacity constraints via dynamic shadow prices second discuss two modifications algorithm improve performance finite suggest modified heuristic call next section simulate deem evaluate performance dynamic shadow prices key step making deem practical use dynamic shadow prices based imbalances market mathematical model assumed masses new workers jobs arrive instantaneously beginning period instantaneously matched job either gets matched immediately arrival disappears end period however real platforms arrivals departures matchings workers jobs occur sequentially continuous time settings common platforms maintain queue jobs type grows new jobs arrive continuous time shrinks existing jobs matched scenario queue length time leveraged compute instantaneous shadow price job type utilized externality adjustment payoffs reasonable approach set shadow price job type via decreasing function corresponding queue length one natural way follows assume practice arriving jobs accumulate queues different types finite capacity capacity exceeded jobs lost queue length job type instant set price instant thus 
price lies note changes every time job assigned worker new job arrives queue length changes implement analog approach simulated marketplace next section computing prices fashion obviates need explicitly compute exploitation phase policy instead exploitation phase implemented allocating optimally worker given current prices still fully decentralized solution natural fluctuation prices ensures appropriate allocation fact prices incorporated implementation deem following way modifying exploration exploitation phases computing confirmation phase deem replace instantaneous shadow prices similarly exploitation phase instead explicitly computing routing matrix use prices decide assignments following manner define sets arg max assignment made exploitation phase worker already labeled type job type chosen note typically singleton learning goals platform also determine strong distinction requirements see definition set str based sets induced instantaneous prices defined approach suffers drawback random fluctuations shadow prices around mean values result changes sets hence learning goals could detrimental performance policy hand fluctuations essential appropriate across multiple optimal job types exploitation phase thus propose following modification utilize average recent prices within fixed recent window time modify definition incorporate small tolerance set str remains unaffected fluctuations prices precise window size let unweighted average queue length based prices seen past epochs changes price note changes every time job assigned worker also new jobs arrive next tolerance define max set str see definition defined based improving performance finite regime propose two changes improve performance finite regime first recall worker type optimal job type optimal worker type deem tries achieve probability misclassifying worker type type small however better desired probability error explicitly depend much regret type incurs performing job regret small worth trying make distinction high precision particular str define max highest regret incurred type matched suboptimal job optimal type reasonable approach aim probability misclassification instead thus accounting fact small tolerate higher probability error second change propose explicitly incorporate posterior exploration phase recall deem guess confirm exploration phase guessing optimized rather involves exploration uniformly random finite gain instead leveraging posterior round appropriately allocate confirmation effort across different types learning goals met type principle approach subsume guessing confirmation phases uniformly defined exploration phase challenge precisely describe posterior used guide exploration phase practice continue benefit learning exploitation phase instead optimizing payoff confirmed worker label optimize current map estimate thus accounting possibility may confirmed incorrectly clearly improve performance practical heuristic finite subsection incorporate two suggestions preceding subsection formal heuristic refer convenient define str define follows str str log otherwise next matches type define set types remains distinguished opportunities case true worker type effort ideally directed towards distinctions order speed confirmation next define posterior probability worker type opportunity defined follows phase exploration phase long allocations choose job distribution satisfies arg min log log computed solution linear program shown appendix exit exploration opportunity worker type label worker type enter exploitation 
phase phase exploitation exploitation defined deem shares structure deem changes exploration phase optimize learning finite see simulations optimizations allow substantially outperform deem small although exact analysis beyond scope work conjecture inherits asymptotic performance bounds hold deem informally consider periods log map estimate type periods analogous guessing phase deem similar deem one argue phase accounts log log fraction worker lifetime stages log similarly analogous confirmation subphase deem informally argue event true type posterior distribution quickly concentrates sufficiently policy defined adjustment denominator precisely accounts differences regret incurred making distinction asymptotically achieves regret confirmation leading order term stationary randomized policy defined deem see observe objective function converges objective function computing modulo log log factors simply capture fact learning goals adjusted small like deem analogously implemented using shadow prices instead sets str computed using smoothed prices simulations section simulate deem market environment shadow prices compare performance policies greedy policy well benchmark mab approaches consider instances types workers types jobs assume generated instances instance independently sampled uniform distribution entry expected payoff matrix sampled uniform distribution given instance simulated marketplace described follows arrival process time discrete assume beginning time period number workers type jobs type arrive sequences scaling constant assumed recall instances deterministic generated binomial distribution mean worker stays system periods leaves job requires one period perform queues assume arriving jobs accumulate queues different types finite buffer capacity choose buffer capacity exceeded job type remaining jobs lost matching process beginning period new workers jobs arrived platform sequentially considers worker generates assignment based history worker chosen policy job required type unavailable worker remains unmatched match random payoff realized drawn distribution specified tuple added history worker prices platform maintains prices jobs following way queue length job type instant price instant set prices thus change either new jobs arrive beginning period job gets matched worker remark choice instances test instances entries expected payoff matrix distinct conjecture would typically case many settings practice exact indistinguishability different worker types using particular binomial distribution two parameters number trials probability success trial chose generating note since consists new workers workers arrived past periods beginning time policy deem deem performance ratio avg perf ratio std error table average performance ratios different policies across instances along standard errors figure empirical cdf performance ratios different policies job type commonly encountered instances discussed section conflict learning regret minimization confirmation subphase deem incurs regret log log leading order term results entirely regret incurred due uniform sampling job types guessing fact case show greedy policy maximizes payoff map estimate throughout exploration phase enters exploitation strong distinction requirements met worker type incurs regret would thus appear greedy policy attractive simplicity would reasonable solution cases however simulations show small lead significant gains greedy approach discuss result results implemented five policies deem versions ucb thompson sampling 
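A compact skeleton of the simulated marketplace described above may help fix ideas; the arrival mechanics, the `policy(history, prices)` interface, and the reuse of the `ShadowPrices` sketch from the previous snippet are illustrative assumptions rather than the source's exact experimental code.

```python
"""Minimal skeleton of the simulated marketplace: binomial worker arrivals,
N-period worker lifetimes, finite-buffer job queues, sequential matching,
Bernoulli payoffs with mean A[type, job], and queue-driven prices."""
import numpy as np

def simulate(A, rho, mu, N, periods, policy, prices, seed=0):
    rng = np.random.default_rng(seed)
    workers = []                       # each worker: type, remaining lifetime, history
    total_payoff = 0.0
    for _ in range(periods):
        for i in range(A.shape[0]):    # worker arrivals this period
            for _ in range(rng.binomial(N, rho[i])):
                workers.append({"type": i, "life": N, "history": []})
        for j in range(A.shape[1]):    # job arrivals into the queues
            for _ in range(int(round(N * mu[j]))):
                prices.job_arrives(j)
        current_prices = prices.smoothed()
        for w in workers:              # sequential matching within the period
            j = policy(w["history"], current_prices)
            if j is not None and prices.queues[j] > 0:
                prices.job_matched(j)
                reward = rng.binomial(1, A[w["type"], j])
                w["history"].append((j, reward))
                total_payoff += reward
            w["life"] -= 1
        workers = [w for w in workers if w["life"] > 0]
    return total_payoff
```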
greedy compared performance algorithms measure payoffs adjusted shadow prices way effectively account capacity constraints already described implementation deem variants earlier greedy simply chooses job type maximizes instantaneous shadow price adjusted payoff map estimate throughout lifetime ucb well known algorithms standard stochastic bandit problem details implementation presence shadow prices denote found appendix figure shows cumulative distribution function instances ratio payoff generation rate attained policy optimal payoff generation rate worker types known five candidate policies average ratios sample space policy given table one observe significantly outperforms deem deem perform considerably better ucb average presumably benefiting knowledge informally since every job make every possible distinction worker types probability true worker type identified map estimate opportunity decays exp instance dependent constant thus total expected regret lifetime worker bounded expectation shadow prices exploitation rendered unnecessary moreover allow algorithm continue benefit learning exploitation optimizing current map estimate rather confirmed type thus distinction exploration exploitation disappears pected payoff matrix contrast deem actively experiment order learn quickly deem experiments guessing phase uniformly samples job types experiments due sampling posterior see appendix details especially early stages posterior sufficiently concentrated experimentation desirable neither efficiently trade payoff maximization learning resulting degraded performance comparison hand suffers excessive exploitation resulting performance although better deem still significantly worse focus latter difference discussed remark earlier instances without exactly indistinguishable type pairs expected perform reasonably well however simulations see average across instances results reduction regret compared order gain intuition gain observe although exactly indistinguishable type pairs rarely encountered could frequently case two type pairs expected payoffs job type close enough practically would take long distinguish reasonably small probability error results approximately difficult type pairs two types different optimal job types none optimal job types able distinguish reasonably quickly situations point map estimate greedy policy true type exploiting may allow policy recover bad estimate within reasonable number jobs thus incurring high regret high probability encountering situation early stages algorithm confidence map estimate sufficiently high approach appropriately allocating confirmation efforts depending posterior results significant gains performance greedy approach particular situations appropriately prioritize learning actively explore instead simply choosing optimal job type map estimate example suppose two types close difficult pair learning rate offered optimal job type towards distinction close case even current map estimate high confidence although still exploration phase instead choosing optimal job type may choose job type quickly distinguish thus expect outperform significantly situations approximately difficult type pairs order verify indeed case first formally define simple notion approximate indistinguishability difficulty say type pair using job type otherwise say say type pair picked instances least one pair instances least one type pair exists job type pair call instances instances precisely instances measured exploration cases map estimate worker type forms type pair type lead 
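The notion of approximately difficult type pairs discussed above can be made concrete with a short check over the payoff matrix. The reading used here — a pair is flagged when the two types have different optimal jobs and neither optimal job separates their expected payoffs by more than `eps` — is one plausible interpretation of the definition, stated as an assumption.

```python
"""Sketch: flag approximately difficult worker-type pairs (assumed reading)."""
from itertools import combinations
import numpy as np

def difficult_pairs(A, eps):
    best = A.argmax(axis=1)                       # optimal job of each worker type
    pairs = []
    for i, k in combinations(range(A.shape[0]), 2):
        if best[i] == best[k]:
            continue                              # same optimal job: no conflict
        optimal_jobs = {int(best[i]), int(best[k])}
        if all(abs(A[i, j] - A[k, j]) < eps for j in optimal_jobs):
            pairs.append((i, k))                  # neither optimal job separates them
    return pairs

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.uniform(size=(10, 6))
    print(difficult_pairs(A, eps=0.05))
```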
significant gains note increases set instances satisfy note job distinguish misclassification error sample number instances avg regret reduction table percentage reduction regret relative greedy average sample consisting instances sample consisting instances conditions grows progressively larger instance also instance next considered two sets samples sample set instances sample set instances based discussion expect substantial reduction regret instances relative instances indeed consistent intuition one tailed two sample showed mean percentage reduction regret sample larger sample sample average percentage reduction regret two samples given table conclusion work suggests novel practical algorithm learning matching applicable across range online matching platforms several directions generalization remain open future work first consider model richer model types would admit wider range applications workers jobs may characterized features space compatibility determined inner product feature vectors second model includes uncertainty general market include uncertainty supply demand exhibit type uncertainty expect similar approach using externality prices first set learning objectives achieve incurring minimum regret applicable even general settings third recall assumed expected surplus match worker type job type matrix known platform reflects first order concern many platforms aggregate knowledge available learning individual user types quickly challenging nevertheless may also interest study efficiently learned platform direction may related issues addressed literature single bandit capacity constraints conclude noting model ignores strategic behavior participants simple extension might presume workers less likely return several bad experiences would dramatically alter model forcing policy become conservative modeling analysis strategic behaviors remain important challenges references rajeev agrawal demosthenis teneketzis venkatachalam anantharam asymptotically efficient adaptive allocation schemes controlled iid processes finite parameter space strong distinction jobs average reasonably quick automatic control ieee transactions shipra agrawal nikhil devanur bandits concave rewards convex knapsacks proceedings fifteenth acm conference economics computation pages acm shipra agrawal nikhil devanur linear contextual bandits global constraints objective arxiv preprint shipra agrawal navin goyal analysis thompson sampling bandit problem arxiv preprint shipra agrawal nikhil devanur lihong contextual bandits global constraints objective arxiv preprint mohammad akbarpour shengwu shayan oveis gharan dynamic matching market design available ssrn ross anderson itai ashlagi david gamarnik yash kanoria dynamic model barter exchange proceedings annual symposium discrete algorithms pages siam baris ata sunil kumar heavy traffic analysis open processing networks complete resource pooling asymptotic optimality discrete review policies annals applied probability audibert munos introduction bandits algorithms theory icml peter auer nicolo paul fischer analysis multiarmed bandit problem machine learning moshe babaioff shaddin dughmi robert kleinberg aleksandrs slivkins dynamic pricing limited supply acm transactions economics computation mariagiovanna baccara sangmok lee leeat yariv optimal dynamic matching available ssrn ashwinkumar badanidiyuru robert kleinberg yaron singer learning budget posted price mechanisms online procurement proceedings acm conference electronic commerce pages acm ashwinkumar badanidiyuru 
robert kleinberg aleksandrs slivkins bandits knapsacks foundations computer science focs ieee annual symposium pages ieee ashwinkumar badanidiyuru john langford aleksandrs slivkins resourceful contextual bandits proceedings conference learning theory pages omar besbes assaf zeevi dynamic pricing without knowing demand function risk bounds algorithms operations research omar besbes assaf zeevi blind network revenue management operations research bubeck nicolo regret analysis stochastic nonstochastic bandit problems machine learning jim dai positive harris recurrence multiclass queueing networks unified approach via fluid limit models annals applied probability pages ettore damiano ricky lam stability dynamic matching markets games economic behavior sanmay das emir kamenica bandits dating market proceedings international joint conference artificial intelligence pages morgan kaufmann publishers daniel fershtman alessandro pavan dynamic matching experimentation cross subsidization technical report citeseer john gittins kevin glazebrook richard weber bandit allocation indices john wiley sons ming yun zhou dynamic matching market available ssrn sangram kadam maciej kotowski matching technical report harvard university john kennedy school government emilie kaufmann nathaniel korda munos thompson sampling asymptotically optimal analysis algorithmic learning theory pages springer morimitsu kurino credibility efficiency stability theory dynamic matching markets tze leung lai herbert robbins asymptotically efficient adaptive allocation rules advances applied mathematics constantinos maglaras assaf zeevi pricing capacity sizing systems shared resources approximate solutions scaling relations management science constantinos maglaras assaf zeevi pricing design differentiated services approximate analysis structural insights operations research laurent massoulie kuang capacity information processing systems unpublished aranyak mehta online matching allocation theoretical computer science daniel russo benjamin van roy learning optimize via posterior sampling mathematics operations research denis assaf zeevi optimal dynamic assortment planning demand learning manufacturing service operations management lloyd shapley martin shubik assignment game core international journal game theory adish singla andreas krause truthful incentives crowdsourcing tasks using regret minimization mechanisms proceedings international conference world wide web pages international world wide web conferences steering committee zizhuo wang shiming deng yinyu close gaps algorithm revenue management problems operations research appendices proof theorem rest section let quantity defined present convenience reader min recall problem first show following lower bound difference follows directly agrawal proposition lim sup log proof consider following relaxed problem max standard duality argument know optimal policy problem solution theorem agrawal know lim sup log result follows fact let value attained deem optimization problem assuming routing matrix exploitation phase supported prove upper bound difference note difference values two problems difference following result proposition consider sequence policies routing matrix used exploitation phase satisfies lim sup log suppose difficult type pairs lim sup log log constant order prove proposition need following result follows theorem lemma let random variables outcome choosing job type according distribution suppose let lim sup inf log next also need following result lemma let random 
variables xij let let snj xij let let event snj snj let inf snj depend proof define define snj inf snj thus xij exp nmj exp exp second inequality results hoeffding bound taking proves result proof proposition let denote type worker let denote expected total regret lifetime worker event defined max expected total number times job type allotted worker type policy refer quantity regret rest proof expectations event proof utilize fact log ratio posteriors log random walk probability distribution job types chosen opportunity log log log random variables log independent random variables finite support since take finite values mean note since must case must thus log drift random walk random walk stopped recall log log goal compute upper bound first compute expected regret incurred till end exploration phase algorithm denote find upper bound regret assuming worker performs unbounded number jobs clearly bound holds expected regret end exploration phase worker leaves jobs strategy follows decompose regret till end exploration regret incurred till first time one following two events occurs event log log log log event log log log log followed residual regret depend event occurred first note one two events occur probability compute two different upper bounds depending two different regimes initial posterior distributions different types note posterior probabilities different types observed history sufficient statistic opportunity policy first suppose highest expected regret incurred possible starting posteriors satisfy conditions log log let set starting posteriors satisfy conditions next suppose highest expected regret incurred supremum taken possible starting posteriors satisfy conditions log log let set posteriors satisfy conditions clearly let denote maximum expected regret incurred algorithm till one occurs maximum taken possible starting posteriors satisfy conditions convenience denote event occurs vice versa similarly two events thus sup residual sup residual sup residual sup residual first let find bound easy inf log log log lemma since neither condition satisfied policy guessing phase thus job types utilized positive probability hence condition lemma requirement positive learning rate distinction satisfied also second statement lemma since posteriors ever occurs log finally thus log log sup residual sup residual log log log sup residual sup residual next consider suplr residual depends following two events happens next event log log log log event gets confirmed log log conditional one two events occur probability sup residual sup residual residual lemma follows residual residual regret constant depend see note event starting values log log log random walk crosses lower threshold log log random walks str cross upper threshold log two thresholds job distribution equals hence drift random walks str strictly positive finite argued earlier drift random walks random walk stopped random walks ignored thus conditions lemma satisfied hence time till since regret per unit time bounded deduction follows moving residual inf min thus sup residual sup inf min thus sup residual inf min since suplr next consider suplr residual depends following two events occurs next event log log log log event gets confirmed log log conditional one two events occur probability let maximum expected regret incurred till either occurs given occurred starting likelihoods note exploration phase ends hence residual regret although note str second statement lemma sup residual sup show type str log type first show let maximum 
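The proof argument above tracks the log ratio of posteriors as a random walk with positive drift that is stopped at thresholds of order log(1/delta). A schematic simulation of that walk (an SPRT-style stopping rule, not the exact policy) is sketched below; the job-sampling distribution and threshold handling are assumptions for illustration.

```python
"""Schematic log-likelihood-ratio random walk between two candidate worker
types, stopped at +/- log(1/delta).  Assumes all entries of A lie strictly
inside (0, 1) so the log terms are finite."""
import numpy as np

def log_ratio_walk(A, true_type, rival_type, job_dist, delta, seed=0, max_steps=100000):
    rng = np.random.default_rng(seed)
    threshold = np.log(1.0 / delta)
    walk, steps = 0.0, 0
    while abs(walk) < threshold and steps < max_steps:
        j = rng.choice(len(job_dist), p=job_dist)          # sample a job type
        x = rng.binomial(1, A[true_type, j])               # Bernoulli payoff
        p_true = A[true_type, j] if x else 1.0 - A[true_type, j]
        p_rival = A[rival_type, j] if x else 1.0 - A[rival_type, j]
        walk += np.log(p_true) - np.log(p_rival)           # positive drift on average
        steps += 1
    return walk, steps          # walk >= threshold: true type confirmed against rival
```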
expected time taken till either occurs given occurred starting likelihoods clearly since price adjusted payoffs lie let time spent occurred occurs either algorithm guessing phase algorithm confirmation phase guessed type case say algorithm state let event algorithm state time next let time spent occurred occurs algorithm confirmation phase guessed type clearly happen str thus exist case say algorithm state let event algorithm state time clearly supr supr let log observe depends primitives problem algorithm state drift strictly positive algorithm state change consider let opportunity occurred first time clearly log log bounded constant depending problem instance thus algorithm state times opportunity thus observation implies exp standard application concentration inequality thus next consider consider successive returns algorithm state conditional algorithm entered state expected time spent state bounded expected time till guessed type confirmed log lemma conditional probability gets confirmed thus total expected number returns state bounded thus log well thus log sup residual log thus finally log log inf min log log log log log inf min combining two equations deduce log log log inf min log log inf min observed earlier gets confirmed str thus regret exploitation phase worst case order probability otherwise thus total expected regret exploitation phase thus log log inf min thus lemma implies result note difficult type pairs next prove large enough one choose routing matrix exploitation phase deem ensure matches optimize payoffs capacity complementary slackness conditions satisfied proposition suppose generalized imbalance condition satisfied consider optimal routing matrix optimal solution problem policy problem large enough one choose routing matrix satisfies remark construct satisfies order prove proposition need following lemma lemma suppose generalized imbalance condition satisfied consider feasible routing matrix consider job path complete bipartite graph worker types job types following properties one end point job end point job type whose capacity permitted every job type path operating jobs served worker types fully utilized definition since formally consider unassigned worker assigned job type every undirected edge path positive rate jobs routed edge proof consider graph jobs representing nodes one side workers edge job worker consider connected component job type graph suppose includes job type underutilized arrival rate jobs set workers connected component exactly matches total effective service rate sellers connected component contradiction since generalized imbalance holds hence exists underutilized job type reached take path traverse starting terminate first time hits underutilized job type proof proposition recall given routing matrix resulting fraction jobs type directed worker type course proof suppress subscript clearly exist depend guessing confirmation phases particular arises overall routing contribution guessing confirmation phases arise small likelihood worker confirmed type actually type key fact use uniformly bounded let want find call permissible edge bipartite graph workers jobs also note two bullets together willpimply proposition since leads large enough requirement first bullet written set linear equations using write later also column vector elements matrix written columns corresponding dimensions everywhere else expressing left following equation using fact definitions look solution underdetermined set equations specific structure want linear combination flows 
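The path construction in the flexibility lemma above — walk through the bipartite graph whose edges are the worker/job pairs with positive routing until an underutilized job type is reached — is essentially a breadth-first search. The sketch below assumes underutilization is detected by comparing a utilization vector against the capacities `mu`.

```python
"""Sketch of the lemma's path construction via BFS on the bipartite
worker-type / job-type graph with edges where x[i, j] > 0."""
from collections import deque
import numpy as np

def path_to_underutilized(x, utilization, mu, start_job, tol=1e-9):
    num_workers, num_jobs = x.shape
    parent = {('job', start_job): None}
    queue = deque([('job', start_job)])
    while queue:
        kind, idx = queue.popleft()
        if kind == 'job' and utilization[idx] < mu[idx] - tol:
            path, node = [], (kind, idx)          # reconstruct the alternating path
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        if kind == 'job':
            neighbours = [('worker', i) for i in range(num_workers) if x[i, idx] > tol]
        else:
            neighbours = [('job', j) for j in range(num_jobs) if x[idx, j] > tol]
        for nb in neighbours:
            if nb not in parent:
                parent[nb] = (kind, idx)
                queue.append(nb)
    return None   # under the generalized imbalance condition this should not occur
```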
along paths coming lemma one path written column vector odd edges including edge incident even edges let path matrix desired structure expressed vector flows along paths note deduced fact path one end point worker else job end point system equations reduces since coefficient matrix extremely well behaved different identity deduce system equations unique solution satisfies yields also size supported permissible edges since paths supported permissible edges lemma thus finally obtain possessing desired properties notice permissible edges differs strictly positive values lemma hence also case large enough finally show choice constructed proposition exploitation phase sequence policies asymptotically achieve required upper bound regret proposition suppose generalized imbalance condition satisfied consider sequence policies routing matrix proposed proposition let value attained policy optimization problem lim sup log suppose difficult type pairs lim sup log log constant proof proposition follows policy feasible problem second equality follows fact complementary slackness hence proposition obtain well thus policy feasible gives rate accumulation payoff problem thus result follows proposition computation policy confirmation subphase denoting lem optimization str min redefine obtain linear program min str optimal solution thus note feasible solution exists linear program long multiple solutions choosepthe solution largest learning rate choose solution smallest one way accomplish modify objective minimize small small simply evaluate finite extreme points constrained problems str extreme points set sufficient show always exists finite solution linear program see note feasible finite solution reduced without loss objective maintaining feasibility practical heuristic computed solution following optimization problem min log log redefine obtain linear program practical implementation policies upper confidence bound ucb algorithm popular bandit algorithm embodies well known approach optimism face uncertainty solving problems classical implementation one keeps track highprobability confidence intervals expected payoffs arm step chooses arm highest upper confidence bound highest upper boundary confidence interval precise average reward seen arm pulled times time upper confidence bound mean reward arm given log algorithm chooses arm arg maxj context arms job types jobs already allotted worker average payoff obtained past assignments job number assignments define log current queue length based price job algorithm chooses job type arg assigned worker next note algorithm require knowledge instance primitives thompson sampling thompson sampling another popular bandit algorithm employing bayesian approach problem arm selection description algorithm simple starting prior every step select arm probability equal posterior probability arm optimal posterior probabilities updated based observations made step one incorporate information correlation rewards different arms computing posteriors makes versatile algorithm exploits reward structure multiple settings known give asymptotically tight regret guarantees many bandit problems interest simulations version implemented follows prior probability worker type prior depending worker history posterior distribution type worker computed using knowledge expected payoff matrix worker type sampled distribution suppose type job type arg maxj assigned worker contrast algorithm thompson sampling utilize knowledge expected payoff matrix well arrival rates latter construct starting 
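The price-adjusted UCB and Thompson-sampling selection rules described above are standard and easy to state in code. The sketch below follows the descriptions (UCB1 bonus of the form sqrt(2 log t / n_j), prices subtracted from the index; Thompson sampling over the finite set of worker types using the known payoff matrix), with variable names and the prior `pi0` as illustrative assumptions.

```python
"""Price-adjusted index policies sketched from the descriptions above."""
import numpy as np

def ucb_choice(means, counts, t, prices):
    """UCB1-style index on price-adjusted payoffs; pull any unpulled arm first."""
    untried = np.flatnonzero(counts == 0)
    if untried.size:
        return int(untried[0])
    bonus = np.sqrt(2.0 * np.log(t) / counts)
    return int(np.argmax(means + bonus - prices))

def type_posterior(A, pi0, history):
    """Posterior over worker types from (job, Bernoulli reward) history,
    using the known payoff matrix A (entries assumed strictly inside (0, 1))."""
    log_post = np.log(pi0)
    for j, r in history:
        log_post += np.log(A[:, j]) if r else np.log(1.0 - A[:, j])
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / post.sum()

def thompson_choice(A, pi0, history, prices, seed=0):
    rng = np.random.default_rng(seed)
    post = type_posterior(A, pi0, history)
    sampled_type = rng.choice(len(pi0), p=post)          # sample a worker type
    return int(np.argmax(A[sampled_type] - prices))      # best job for the sample
```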
prior former posterior updates proofs sufficiency policies show policy achieves rate payoff accumulation arbitrarily close maximum possible think fixed throughout section suppose system starts time workers already present arrivals thereafter occur described section consider arbitrary time varying policy let denote derived quantity representing fraction workers type assigned jobs type period largest possible rate payoff accumulation policy long horizons limsupt note ignored effect less workers type present first periods change limiting value also note randomization increase achievable value since one always well picking favorable sample path claim fix policy policy achieves steady state rate payoff accumulation exceeding proof suppress dependence definition know exists increasing sequence times vti construct suitable policy using sufficiently large time sequence let measure workers system history start time abusing notation let measure workers assigned job type time since policy assign jobs arrived period fix think large member sequence average measure workers history present average measure workers assigned job similarly defined denoted immediately averaging times consider worker history assigned job type using known matrix arrival rates infer posterior distribution worker type based hence likelihood job type successfully completed let denote probability success distribution worker simply given bernoulli analysis would similar produce results starting state arbitrary one bounded mass workers already present barring edge effect time caused workers whose history time allows uniquely determine based particular represents bound note ready define policy every policy attempt assign fraction workers history jobs type ignore capacity constraints present find capacity constraints almost satisfied leave choice later choose small choose achieve desired value workers rare histories histories assigned jobs note definition rare histories refers frequency occurrence uniquely specifies well steady state mix workers time particular steady state mass workers history rare bounded using fact subhistories also rare follows max exp histories including rare histories using violation constraint given using fact possible histories follows sum capacity constraint violations across bounded pick arbitrary set workers unmatched get rid capacity violations done remaining within class policies worst case cause payoff loss period remaining worker lifetime thus loss caused need remedy capacity violations bounded per period ignoring capacity violations steady state rate accumulation payoff using fact possible histories let denote true steady state rate accumulation payoff capacity constraints considered combining deduce time chosen member sequence defined beginning proof ensuring hence suffice show hence suffices achieved using log member sequence satisfying yields required bound uniqueness prices generalized imbalance proposition generalized imbalance condition job shadow prices uniquely determined proof proposition dual problem written minimize subject dual variables job prices worker values prove result contradiction suppose multiple dual optima let set dual optima let set jobs prices jobs take multiple values formally takes multiple values similarly let set workers prices workers take multiple values formally takes multiple values immediately deduce exists dual optimum hence capacity constraint job type tight primal deduce worker type assigned job periods assumption suppose left hand side larger right complementary 
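The dual problem mentioned above, with job prices and worker values as dual variables, can be written down and solved directly with a generic LP solver. The assignment-game-style constraint `v_i + p_j >= A[i, j]` and the scaling of the objective by `rho` and `mu` are assumptions about the exact form; the point of the sketch is the structure of the dual, not its precise coefficients.

```python
"""Sketch of the dual LP yielding worker values v and job shadow prices p:
minimize rho . v + mu . p  subject to  v_i + p_j >= A[i, j],  v, p >= 0."""
import numpy as np
from scipy.optimize import linprog

def dual_prices(A, rho, mu):
    n_w, n_j = A.shape
    c = np.concatenate([rho, mu])                 # variables: (v_1..v_nw, p_1..p_nj)
    A_ub = np.zeros((n_w * n_j, n_w + n_j))       # encode -v_i - p_j <= -A[i, j]
    b_ub = np.zeros(n_w * n_j)
    for i in range(n_w):
        for j in range(n_j):
            row = i * n_j + j
            A_ub[row, i] = -1.0
            A_ub[row, n_w + j] = -1.0
            b_ub[row] = -A[i, j]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n_w + n_j))
    return res.x[:n_w], res.x[n_w:]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    A = rng.uniform(size=(3, 4))
    v, p = dual_prices(A, rho=np.ones(3) / 3, mu=np.full(4, 0.3))
    print("worker values", v)
    print("job prices   ", p)
```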
case dealt similarly take primal optimum jobs enough capacity serve workers hence must worker job since must unique optimum value call value let largest smallest values max min complementary slackness know max min max min since must max min thus obtained contradiction proof next proposition shows simple learn exploit strategy achieves regret log follows fact identifiability condition sequence sets converges set appropriately defined distance proposition suppose two rows identical inf lognn proof clear find inner approximation converges appropriate sense goes infinity define approximation suppose learning problem corresponding fixed one starts exploration phase fixed length log job presented worker number times log fixed priori phase type worker becomes known probability error allow relate problem problem user type known suppose phase probability worker type correctly identified probability type note since two rows identical let denote expected number times worker identified type correctly incorrectly directed towards job exploration phase job till job let see one attain following set since express set since log turn log note construction see converges sense log sup inf hence log sup inf well proposition set convex polytope proof purpose proof let show polytope result follow prove using induction argument represent point matrix let worker types labeled let job types labeled clearly convex polytope show convex polytope one well hence result follow decompose assignment problem jobs first job remaining jobs policy jobs problem choice randomization jobs first job depending whether reward obtained chosen job choice point achieved remaining jobs policy gives point suppose randomization chosen job let points chosen achieved job onwards depending job chosen whether reward obtained mapping set policy achieves following point jobs problem diag diag diag diag thus diag diag let matrix ones along column corresponding job type entries set diag diag convex polytope linear combination two convex polytopes followed affine shift easy see convex combination polytopes convex polytope well hence | 8 |
using matching detect infeasibility integer programs mar abstract novel matching based heuristic algorithm designed detect specially formulated infeasible ips presented algorithm input set nested doubly stochastic subsystems set instance defining variables set zero level algorithm deduces additional variables zero level either constraint violated infeasible variables deduced zero undecided feasible ips infeasible ips detected infeasible undecided successfully apply algorithm small set specially formulated infeasible instances hamilton cycle decision problem show model graph subgraph isomorphism decision problems input algorithm increased levels nested doubly stochastic subsystems implemented dynamically algorithm designed parallel processing inclusion techniques addition matching key words integer program matching permutations decision problem msc subject classifications introduction present novel matching based heuristic algorithm deigned detect specially formulated infeasible ips either detects infeasible exits undecided solve call triple overlay matching based closure algorithm algorithm input algorithm whose constraints set nested doubly stochastic boolean subsystems together set instance defining variables set zero level solution set subset set nxn permutation matrices written block permutation matrices block structure algorithm polynomial time search deduces additional variables zero level via matching either constraint violated case infeasible case undecided decided infeasible set variables deduced zero level used test display set violated constraints undecided additional variables deduced zero added nothing concluded infeasible ips may fail detected infeasible yet found feasible ips fall undecided category section present generic required input algorithm view set solutions block permutation matrix whose components variables nxn block nxn permutation matrix block contains position instance modelled setting certain variables zero level sections present algorithm application matching model hamilton cycle decision problem hcp empirical results two conjectures section present generalizations algorithm matching models graph subgraph isomorphism decision problems uses also propose development success effectiveness practicality evaluated comparison algorithms invite researchers collaborate contact corresponding author fortran code ideas presented paper originated polyhedral model cycles graphs time thought recognize birkhoff polytope image solution set compact formulation graphs accomplished part goal paper convex hull excluded permutations infeasible ips birkhoff polytope easy build compact formulation paper graphs ranging vertices correctly decided infeasible ips none failed reported although counterexamples surely exist believe insightful theory discovered explains early successes specially constructed ips terminology imagine integer program modelled solution integer program feasible matching also imagine arbitrary set instance defining constraints form obvious apply matching help solution imagine create compact formulation whose solution set isomorphic equal orthogonal projection convert linear constraint instantiated discrete states via creation set discrete variables becomes easy exploit matching hence algorithm university guelph canada email gismondi corresponding author british columbia canada email ted dedicate paper late pal fischer friend colleague mentor kelowna code instance defining constraints set two distinct components interchangeably playing role variable create instance 
creating instance exclusion set whose elements set exists satisfying solution otherwise satisfies least one excluded solution set view elements coding precisely set permutation matrices excluded solution set excludes union sets set satisfying example modelling technique needed create presented section originally presented exclude permutation matrices setting complement exclusion set respect called available set feasible exists whose set distinct pairs components satisfy define said covered exists subset defines participates cover definition clos closed exclusion set set participating cover note code set permutation matrices clearly clos permutation matrices accounted covered empty definition open open available set complement clos set participating cover least one theorem infeasible open system pall assign visualize system form permutation matrix blocks block contains position remaining entries row column zero rest entries block form assumed variables initialized henceforth present algorithm terms matrix see figure example general form matrix set nxn permutation matrices written matrix block form set integer extrema solution set system see figure example integer solution system matrix form fig general form matrix integer solution system exists nxn permutation matrix block form fig integer solution system matrix form triple overlay matching based closure first present overview algorithm followed formal algorithm let given encode create overview triple overlay matching based closure algorithm rather search existence covered attempt shrink participates cover least one algorithm deduces participate cover removes adds success depends upon whether true infeasible ips initialize via sufficient deduce open impossible feasible yield open infeasible ips cause algorithm either deduce infeasibility exit undecided say undecided although deduce participate cover known deduce brief details algorithm deduces variables zero level every solution follow algorithm systematically tests set necessary conditions assuming feasible time set unit level blocks assumed cover match necessary condition existence block permutation matrix solution rather test match covered two blocks exhaust choices third variable common blocks set unit level test existence match covered three blocks exhausting possible choices variable match exists given variable deduced zero otherwise conclude nothing cases continue next variable yet deduced zero eventually variables deduced zero none constraints appear violated undecided enough variables deduced zero constraint violated infeasible triple overlay matching based closure algorithm interchangeably associate matrix matrix entries zero level matrix entries zero level unit entries matrix entries reference variables unit entry uth row ith column block represents variable remaining unit entries row column block regarded representing variables really represent case solution also regarded representing variables think associated matrix terms patterns cover block permutation matrices exploit matching definition match logical function input nxn matrix row labels viewed vertices set column labels viewed vertices set match earlier work create equivalence class set possible none cover whose class representative hence term triple overlay every variable deduced zero participates match overlay three blocks exists quadrupal quintuple overlay exhaustion algorithm tests factorial numbers sufficient overlays match returns true exists match otherwise match returns false definition overlay binary function 
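One way to make the block form of an integer solution concrete is the sketch below, which assumes (as one reading of the figure description) that the block in row u, column i of the big matrix equals p_{ui} times P, i.e. the Kronecker product of the permutation matrix with itself. Every nonzero block is then a permutation matrix carrying a 1 at position (u, i), consistent with the described block structure.

```python
"""Sketch of the block form of an integer solution (assumed reading:
Q = kron(P, P), so block (u, i) equals P[u, i] * P)."""
import numpy as np

def permutation_matrix(perm):
    P = np.zeros((len(perm), len(perm)), dtype=int)
    P[np.arange(len(perm)), perm] = 1
    return P

def block_solution(P):
    return np.kron(P, P)              # block (u, i) equals P[u, i] * P

def check_block_structure(Q, n):
    ok = True
    for u in range(n):
        for i in range(n):
            block = Q[u * n:(u + 1) * n, i * n:(i + 1) * n]
            if block.any():
                # a nonzero block is a permutation matrix with a 1 at (u, i)
                ok &= bool(block[u, i] == 1)
                ok &= (block.sum(axis=0) == 1).all() and (block.sum(axis=1) == 1).all()
    return ok

if __name__ == "__main__":
    P = permutation_matrix([2, 0, 3, 1])
    print(check_block_structure(block_solution(P), 4))   # True
```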
applied two nxn matrices output matrix loosely use terms double triple overlay place overlay overlay overlay etc definition check rowscolumns routine returns true row column matrix case algorithm terminates graph deduced infeasible otherwise check rowscolumns returns false fortran implementation algorithm testing termination also implement boolean closure within blocks efficiently deduces components zero level note significant speed increases note boolean closure check rowscolumns replaced temporarily set nonzero component matrix unit level check infeasibility subject doubly stochastic constraints matrix infeasibility implies component set zero level whenever algorithm exits undecided every exists match triple overlay blocks least one block deduced infeasible call corresponding matrix triple overlay closure otherwise algorithm exits deduced infeasible open deduced empty input open output open decision check rowscolumns exit open infeasible continue triple closure oldq open open end check rowscolumns exit open infeasible next end overlay open open next end doubleoverlay overlay triple closure doubleoverlayw overlay doubleoverlay doubleoverlay triple closure end end doubleoverlay open open end end end oldq continue triple closure exit open undecided algorithm triple overlay matching based closure algorithm application hcp let vertex graph also referenced adjacency matrix model hcp simple connected graphs others called background information classification graphs well known decision problem edge hamiltonian since graphs either follows graphs initially studied peter tait named snarks martin gardner tait conjectured every planar graph hamilton cycle later disproved tutte via construction vertex counterexample significant conjecture true implied famous theorem ideas summarized figure simple connected graphs hamiltonian graphs graphs tutte counterexample snarks fig classification simple connected graphs matching model hcp regard paths length start stop vertex pass every vertex directed graphs vertices undirected graphs every cycle accompanied companion cycle matter hamiltonian nonhamiltonian assign vertex origin terminal vertex cycles assign directed hamilton cycle correspondence nxn permutation matrix ith arc cycle enters vertex encode cycle permutation vertex labels example path sequence code first arc enters vertex second arc enters vertex since cycles definition sufficient code cycles nxn permutation matrices note arc directed edge undirected pair arcs edge unless otherwise stated graphs simple connected next encode graph instance examining adjacency matrix adding pairs components encode paths length vertex vertex cycles encodes precisely set cycles every cycle uses least one arc see algorithm initialize exclusion set recall connected arc assign also compute additional whenever possible account paths length vertex vertex implementing dijkstras algorithm equally weighted arcs find minimal length paths pairs vertices coded return path exists account paths length one arcs paths length two temporarily deleting arc adjacent vertices begin follows adjacent temporarily delete arc apply dijkstras algorithm discover minimal path length simple paths length exist discovered adjacent correspond arcs cycles correspond paths length cycles accounting arcs sufficient model precisely cycles account paths cycles bolster two special cases arise case last arc cycle recall every arc cycle enters vertex definition therefore observe arcs temporarily deleted otherwise noting corresponding sets cycles encoded 
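The two primitives of the closure algorithm above — `match`, which tests for a perfect matching between the row and column labels of a 0/1 matrix, and `overlay`, applied to two blocks — can be sketched as follows. The matching test is exactly as defined; taking `overlay` to be the elementwise AND of 0/1 blocks is an assumption, as is the compact form of the triple-overlay witness search.

```python
"""Core primitives of the triple overlay matching based closure (sketch)."""
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

def match(M):
    """True iff the 0/1 matrix M admits a perfect matching."""
    assignment = maximum_bipartite_matching(csr_matrix(M), perm_type='column')
    return not (assignment == -1).any()

def overlay(A, B):
    return np.logical_and(A, B).astype(int)      # assumed: elementwise AND

def triple_overlay_supported(blocks, ui, vj):
    """blocks: dict mapping block index (u, i) to its current 0/1 matrix.
    Returns True if some third block (w, k), still at unit level in the double
    overlay, keeps a perfect matching alive in the triple overlay; otherwise the
    variable coupling blocks ui and vj can be deduced to zero."""
    double = overlay(blocks[ui], blocks[vj])
    if not match(double):
        return False
    for wk, block in blocks.items():
        if wk in (ui, vj):
            continue
        if double[wk] and match(overlay(double, block)):
            return True
    return False
```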
permutation matrices nth arc cycle enters vertex case adjacent dijkstras algorithm returns dijkstras algorithm returns adjacent set paths length two exist sets cycles encoded permutation matrices arc cycle enters vertex continuing way encode possible arcs cycles paths length enter vertex case first arc cycle recall every first arc every cycle exits vertex observe code arcs cycles paths length coding possible arcs enter vertex general case exclusion set constructed noting cycle uses least one arc complete set permutation matrices corresponding cycles characterized added indexing arc play role sequence positions disjoint sets cycles considering arcs playing role possible sequence positions possible construct set permutation matrices corresponding set cycles accounted union added generalize idea via dijkstras algorithm account sets paths length recall strongly connected arc temporarily deleted possible path exist given pair vertices useful information indicates arc essential assumption existence hamilton cycle uses arc case implies particular necessary integrality must unit level every assignment variables assuming graph hamiltonian deduced otherwise ever thus row column set zero level accounted initialize recall case dijkstras algorithm returns minimal path loop appends necessary set effectively setting variables blocks zero level implemented algorithm must attain unit level via double stochastity implies column deduced zero level similarily case general case also possible path exist given pair vertices arc temporarily deleted assumption existence hamilton cycle arc essential play role sequence position case complementary row column assigned implemented single variable remains row therefore equated block variable via scaled double stochastity within block rows columns block sum complementary variables corresponding column therefore set block thus essential arcs also contribute new information adding complementary row column finally encode matrix assign create input arc adjacency matrix output case arc dijkstrasalgorithm arc arc end arc end case arc dijkstrasalgorithm arc arc end arc end general case arc dijkstrasalgorithm arc arc end arc end end exit algorithm initialize exclusion set empirical results two conjectures table lists details applications graphs algorithm table lists details applications mostly graphs earlier version matching based closure algorithm called subset applications algorithms graphs decided application either algorithm graphs failed reported empirical results tables heading count variables size initial available set number components initializing implementing algorithm note count distinct table heading refers upper bound selected graphs modified include cycle simply observe open two graphs also hypohamiltonian count parentheses upper bound removing vertex wca two conjectures distinct exists conjecture polynomial sized proof membership simple connected graphs conjecture triple overlay matching based closure deduces open simple connected graphs wca closure exhausting middle loop returning label continue triple closure followed triple closure also applied exhausting interior loop returning label triple closure many applications boolean closure across many intermediate steps also implemented unlike triple overlay matching based closure presented although checks also included block overlays also restricted form way solve problems vertex range wca designed parallelized fortran code written distributed computing table applications triple overlay matching based closure 
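The graph-theoretic step above — temporarily delete an arc and run Dijkstra on equally weighted arcs, which reduces to breadth-first search — is sketched below. Only the path computation is shown: how the resulting detour lengths are turned into excluded sequence positions is the paper's bookkeeping and is not reproduced here; the None-means-essential reading for arcs whose endpoints become disconnected follows the passage above.

```python
"""Sketch: shortest detours around a temporarily deleted arc (unit weights)."""
from collections import deque
import numpy as np

def bfs_distance(adj, source, target, banned_arc=None):
    """Shortest directed path length, skipping one arc; None if unreachable."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        x = queue.popleft()
        if x == target:
            return dist[x]
        for y in map(int, np.flatnonzero(adj[x])):
            if (x, y) == banned_arc or y in dist:
                continue
            dist[y] = dist[x] + 1
            queue.append(y)
    return None

def arc_detour_lengths(adj):
    """For every arc (u, v): shortest u -> v path once the arc is deleted.
    None flags the arc as essential (every Hamilton cycle must use it)."""
    out = {}
    for u in range(adj.shape[0]):
        for v in map(int, np.flatnonzero(adj[u])):
            out[(u, v)] = bfs_distance(adj, u, v, banned_arc=(u, v))
    return out
```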
algorithm name graph vertices graphs petersen snark flower snarks tietzs snark blanusa snarks house graphs loupekine snark goldberg snark house graphs jan goedgebeur snark snarks house graphs double star snark table applications matching based closure algorithm wca name graph petersen snark herschel graph kleetope matteo coxeter house graphs snark zamfirescu snark hypohamiltonian grinberg graph szekeres snark watkins snark thomassen meredith flower snark goldberg snark vertices edges run yet run yet run yet run yet run yet run yet run yet run yet run yet simple connected hypohamiltonian confirmed existence open removing vertex wca historical note ignoring planarity condition tait conjecture matteo graph smallest counterexample graph smallest planar counterexample tait conjecture tutte graph larger counterexample also note georges graph smallest counterexample tutte conjecture horton graph first counterexample tutte conjecture discussion practical generalizations algorithm algorithm designed invoke arbitrary levels overlay adaptive strategies change level overlay depth desired needed deduce variables zero level order make use increased overlay necessary add variables retain information tests matching example create quadrupal overlay version algorithm introduce variables redefine system matrix terms triply nested birkhoff polyhedra see discussion description polyhedra feasible regions formulations relaxed ips exists sequence feasible regions correspondence increasing levels nested birkhoff polyhedra whose end feasible region convex hull set integer extrema system see discussion inequalities term closure far reserved deducing variables added invoking algorithm polynomial time techniques used deduce variables zero level example prior matching could implement maximize variable system maximum less unit level variable set zero implementation use boolean closure see details also note exist entire conferences devoted matching preferences perhaps many innovative heuristics exist included algorithm algorithm designed parallel processing variable yet deduced zero tested independent others making copy matrix implementing algorithm independent process deduces variable zero level simply update corresponding variable across processes applications exist model specific dependencies variables undirected hcp implies way account companion cycles study algorithm exclusion set focus study propose classify different pattern remains matrix exit algorithm isomorphism covers set possible solutions would useful know kinds cause algorithm generate minimal cover since follows algorithm would decide feasibility even exist classes infeasible ips provably exit algorithm infeasible matter minimal cover still follows algorithm decides feasibility plan investigate counterexamples via matching model hcp graph fails earlier version algorithm convert study instance two matching model applications input algorithm present two matching models applications components longer interpretation sequenced arcs cycle instead let block permutation matrix whose blocks mxm permutation matrices note subgraph exists permutation matrix covers add covers column vectors adjacency matrices formatted model graph subgraph isomorphism decision problems matching models single difference case graph isomorphism information appears added first note covers means required place ones positions equations subset row components sum one implying complement row components must therefore set zero level add completes subgraph isomorphism matching model part graph 
isomorphism model graph isomorphism cover means equality remaining equations satisfied required place zeroes positions equations subset row components sum zero implying row components must therefore set zero level add completes graph isomorphism matching model applications algorithm originally intended algorithm decide feasibility matching model decides infeasibility algorithm served purpose otherwise known model feasible infeasible note open refined cover possible solutions believe useful propose algorithm developed see information modelling techniques part search based algorithms either provide refined information prior search incorporated updated alongside search based algorithm provide information search one last thought academic use algorithm suppose given correctly guessed infeasible algorithm exits undecided attribute failure lacking necessary right kind could induce closure could theoretically augment additional deduce infeasibility discover extra information needed generate open application algorithm gets stuck open simply augment additional open test open becomes empty might difficult guess minimal sized sets additional guessed articulated critical information needed solve problem course known additional efficiently computed validated members see conjecture acknowledgements dedication thank adrian lee preparing running examples presented tables nicholas swart testing implementing graphs catherine bell suggestions contributions early project dedicate paper late pal fischer ted pal colleague friend gismondi pal taught analysis understanding convex polyhedra later became colleague ted already miss much references brinkmann coolsaet goedgebeur melot house graphs database interesting graphs discrete applied mathematics available http demers gismondi enumerating facets util ejov haythorpe rossomakhine conversion hcp australasian journal combinatorics mathematics stack exchange string sspp retrieved may http filar haythorpe rossomakhine new heuristic detecting cubic graphs computers operations research gary johnson tarjan planar hamilton circuit problem siam gismondi subgraph isomorphism hamilton tour decision problem using linearized form util modelling decision problems via birkhoff polyhedra journal algorithms computation gismondi swart model tour decision problem math prog ser haythorpe fhcp challenge set retrieved july http microsoft research lab new england cambridge usa https swart gismondi swart bell lee deciding graph via closure algorithm journal algorithms computation wolfram math world graph retrieved may http | 8 |
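The two matching models above rest on the usual permutation-matrix characterizations of graph and subgraph isomorphism; the brute-force check below makes the "cover" conditions concrete for very small graphs. Padding the pattern's adjacency matrix to the host's size so that P stays square is an assumption made to mirror the block model; it is a standard formulation, not the paper's exact system of equations.

```python
"""Permutation-matrix characterizations behind the isomorphism matching models."""
from itertools import permutations
import numpy as np

def _perm_matrix(perm):
    P = np.zeros((len(perm), len(perm)), dtype=int)
    P[np.arange(len(perm)), perm] = 1
    return P

def isomorphic(A1, A2):
    """Graph isomorphism: some P with P A1 P^T == A2 (exponential, tiny n only)."""
    n = A1.shape[0]
    return any(np.array_equal(P @ A1 @ P.T, A2)
               for P in map(_perm_matrix, permutations(range(n))))

def has_subgraph(A_big, A_small_padded):
    """Subgraph isomorphism: some P with P A_sub P^T <= A_big entrywise,
    where the pattern is zero-padded to the host's size (assumption)."""
    n = A_big.shape[0]
    return any((P @ A_small_padded @ P.T <= A_big).all()
               for P in map(_perm_matrix, permutations(range(n))))
```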
nov learning hierarchical information flow recurrent neural modules danijar hafner google brain mail alex irpan google brain alexirpan james davidson google brain jcdavidson nicolas heess google deepmind heess abstract propose thalnet deep learning model inspired neocortical communication via thalamus model consists recurrent neural modules send features routing center endowing modules flexibility share features multiple time steps show model learns route information hierarchically processing input data chain modules observe common architectures feed forward neural networks skip connections emerging special cases architecture novel connectivity patterns learned compression task model outperforms standard recurrent neural networks several sequential benchmarks introduction deep learning models make use modular building blocks fully connected layers convolutional layers recurrent layers researchers often combine strictly layered ways instead prescribing connectivity priori method learns route information part learning solve task achieve using recurrent modules communicate via routing center inspired thalamus warren mcculloch walter pitts invented perceptron first mathematical model neural information processing laying groundwork modern research artificial neural networks since researchers continued looking inspiration neuroscience identify new deep learning architectures efforts directed learning biologically plausible mechanisms attempt explain brain behavior interest achieve flexible learning model neocortex communication areas broadly classified two pathways direct communication communication via thalamus model borrow latter notion centralized routing system connect specializing neural modules experiments presented model learns form connection patterns process input hierarchically including skip connections known resnet highway networks densenet feedback connections known play important role neocortex improve deep learning learned connectivity structure adapted task allowing model computational width depth paper study properties goal building understanding interactions recurrent neural modules work done internship google brain conference neural information processing systems nips long beach usa module receives task input used side computation trained auxiliary task produces output main task computation modules unrolled time one possible path hierarchical information flow highlighted green show model learns hierarchical information flow skip connections feedback connections section figure several modules share learned features via routing center dashed lines used dynamic reading define static dynamic reading mechanisms section section defines computational model point two critical design axes explore experimentally supplementary material section compare performance model three sequential tasks show consistently outperforms recurrent networks section apply best performing design language modeling task observe model automatically learns hierarchical connectivity patterns thalamus gated recurrent modules find inspiration work neurological structure neocortex areas neocortex communicate via two principal pathways comprises direct connections nuclei comprises connections relayed via thalamus inspired second pathway develop sequential deep learning model modules communicate via routing center name proposed model thalnet model definition system comprises tuple computation modules route respective features shared center vector example instance thalnet model shown figure every time step module 
reads center vector via context input cit optional task input xit features cit xit module produces directed center output modules additionally produce task output feature vector function modules send features routing center merged single feature vector experiments simply implement concatenation next time step center vector read selectively module using reading mechanism obtain context input reading mechanism allows modules read individual features allowing complex selective reuse information modules initial center vector zero vector practice experiment feed forward recurrent implementations modules simplicity omit hidden state used recurrent modules notation reading mechanism conditioned separately merging preserve general case figure thalnet model perspective single module example module receives input produces features center output context input determined linear mapping center features previous time step practice apply weight normalization encourage interpretable weight matrices analyzed section summary thalnet governed following equations module features cit xit module output yti center features read context input choice input output modules depends task hand simple scenario single task exactly one input module receiving task input number side modules exactly one output module producing predictions output modules get trained using appropriate loss functions gradients flowing backwards fully differentiable routing center modules modules operate parallel reads target center vector previous time step unrolling process seen figure figure illustrates ability arbitrarily route modules time steps suggest sequential nature model even though application static input possible allowing observing input multiple time steps hypothesize modules use center route information chain modules producing final output see section tasks require producing output every time step repeat input frames allow model process multiple modules first producing output communication modules always spans time reading mechanisms discuss implementations reading mechanism modules defined section draw distinction static dynamic reading mechanisms thalnet static reading conditioned independent parameters dynamic reading conditioned current corresponding module state allowing model adapt connectivity within single sequence investigate following reading mechanisms linear mapping simplest form static reading consists fully connected layer weights illustrated figure approach performs reasonably well exhibit unstable learning dynamics learns noisy weight matrices hard interpret regularizing weights using penalties help since cause side modules get read anymore weight normalization found linear mappings weight normalization paw rameterization effective context input computed scaling factor weights euclidean matrix norm please refer graves study similar approach normalization results interpretable weights since increasing one weight pushes less important weights closer zero demonstrated section fast softmax achieve dynamic routing condition reading weight matrix current module features seen form fast weights providing biologically plausible method attention apply softmax normalization computed weights element context computed weighted average center elements rather weighted sum specifically weights biases allows different connectivity pattern time step introduces learned parameters per module fast gaussian compact parameterization dynamic routing consider choosing context element gaussian weighted average mean variance vectors learned 
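The per-step computation just described — each module reads a context vector from the previous center, computes features, emits a center contribution and optionally a task output, and the center is the concatenation of all contributions — can be sketched in a few lines of NumPy with the simple linear reading mechanism. Layer sizes, initialization, and the convention that the first module is the input module and the last is the output module are illustrative assumptions, not the paper's configuration.

```python
"""Minimal NumPy sketch of one ThalNet step with linear reading."""
import numpy as np

rng = np.random.default_rng(0)

def linear(shape):
    return rng.normal(scale=0.1, size=shape)

class Module:
    def __init__(self, in_size, ctx_size, feat_size, center_size, out_size=None):
        self.W_f = linear((in_size + ctx_size, feat_size))   # features f^i(c, x)
        self.W_phi = linear((feat_size, center_size))        # center output psi^i(f)
        self.W_y = linear((feat_size, out_size)) if out_size else None
        self.W_r = None                                       # reading weights, set later

    def step(self, context, task_input=None):
        inputs = context if task_input is None else np.concatenate([task_input, context])
        features = np.tanh(inputs @ self.W_f)
        center_out = features @ self.W_phi
        task_out = None if self.W_y is None else features @ self.W_y
        return features, center_out, task_out

def thalnet_step(modules, center, task_input):
    contexts = [center @ m.W_r for m in modules]              # linear reading
    outputs = [m.step(c, task_input if i == 0 else None)
               for i, (m, c) in enumerate(zip(modules, contexts))]
    new_center = np.concatenate([phi for _, phi, _ in outputs])   # merge by concatenation
    prediction = outputs[-1][2]                                   # last module is the output module
    return new_center, prediction

if __name__ == "__main__":
    I, ctx, feat, per_center = 4, 32, 64, 32
    total_center = I * per_center
    modules = [Module(in_size=28 if i == 0 else 0, ctx_size=ctx, feat_size=feat,
                      center_size=per_center, out_size=10 if i == I - 1 else None)
               for i in range(I)]
    for m in modules:
        m.W_r = linear((total_center, ctx))
    center, x = np.zeros(total_center), rng.normal(size=28)      # one input row per step
    for _ in range(3):
        center, y = thalnet_step(modules, center, x)
    print(y.shape)
```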
conditioned context input computed weights biases gaussian density function density evaluated index based distance mean reading mechanism requires parameters per module thus makes dynamic reading practical reading mechanisms could also select modules high level instead individual feature elements explore direction since seems less biologically plausible moreover demonstrate knowledge feature boundaries necessary hierarchical information flow emerges using routing see figure theoretically also allows model perform wider class computations performance comparison investigate properties performance model several benchmark tasks first compare reading mechanisms module designs simple sequential task obtain good configuration later experiments please refer supplementary material precise experiment description results find weight normalized reading mechanism provides best performance stability training use thalnet models four modules configuration experiments section explore performance thalnet conduct experiments three sequential tasks increasing difficulty sequential permuted mnist use images mnist data set pixels every image fixed random permutation show model sequence rows model outputs prediction handwritten digit last time step must integrate remember observed information previous rows delayed prediction combined permutation pixels makes task harder static image classification task recurrent neural network achieving test error use standard split training images testing images sequential similar spirit use data set feed images model row row flatten color channels every row model observes vector elements every time step classification given observing last row image task difficult mnist task image show complex often ambiguous objects data set contains training images testing images language modeling text corpus consisting first bytes english wikipedia commonly used language modeling benchmark sequential models every time step model observes one byte usually corresponding character encoded vector length task predict distribution next character sequence performance measured bits per character bpc computed following cooijmans train first evaluate performance following corpus two image classification tasks compare variations model stacked gated recurrent unit gru network layers baseline variations compare different sequential testing sequential permuted mnist testing epochs thalnet thalnet thalnet gru gru baseline thalnet thalnet sequential permuted mnist training bits per character bpc thalnet thalnet thalnet thalnet thalnet gru gru baseline epochs sequential training thalnet thalnet thalnet thalnet thalnet gru gru baseline epochs accuracy gru baseline thalnet gru thalnet thalnet thalnet thalnet epochs gru step gru steps thalnet steps epochs language modeling training bits per character bpc accuracy language modeling evaluation accuracy accuracy thalnet steps gru step gru steps epochs figure performance permuted sequential mnist sequential cifar language modeling tasks stacked gru baseline reaches higher training accuracy cifar fails generalize well tasks thalnet clearly outperforms baseline testing accuracy cifar see recurrency within modules speeds training pattern shows experiment thalnet using parameters matches performance baseline parameters step number refers repeated inputs discussed section smooth graphs using running average since models evaluated testing batches rolling basis choices layers gru layers implementing modules test two fully connected layers gru layer gru fully connected 
followed gru gru followed fully connected gru sandwiched fully connected layers models pick largest layer sizes number parameters exceed training performed epochs batches size using rmsprop learning rate language modeling simulate thalnet steps per token described section allow output module read information current input making prediction note task model uses half capacity directly since side modules integrate dependencies previous time steps run baseline without extra steps steps per token allowing apply full capacity twice token respectively makes comparison bit difficult favouring baseline suggests architectural modifications explicit modules could improve performance task requires larger models train thalnet modules size feed forward layer size gru layer totaling million model parameters compare standard baseline language modeling single gru units totaling million parameters train batches sequences containing bytes using adam optimizer default learning rate scale gradients exceeding norm results epochs training shown figure training took days thalnet steps per token days baseline steps per token days baseline without extra steps figure shows training testing training curves three tasks described section thalnet outperforms standard gru networks three tasks interestingly thalnet experiences note modules require amount local structure allow specialize implementing modules single fully connected layer recovers standard recurrent neural network one large layer much smaller gap training testing performance baseline trend observed across experimental results task thalnet scores bpc using parameters gru baseline scores bpc using parameters lower better model thus slightly improves baseline using fewer parameters result places thalnet baseline regularization methods designed language modeling also applied model baseline performance consistent published results lstms similar number parameters hypothesize information bottleneck reading mechanism acting implicit regularizer encourages generalization compared using one large rnn lot freedom modeling mapping thalnet imposes local structure mapping implemented particular encourages model decompose several modules stronger thus extend every module needs learn computation hierarchical connectivity patterns using routing center model able learn structure part learning solve task section explore emergent connectivity patterns show model learns route features hierarchical ways hypothesized including skip connections feedback connections purpose choose corpus language modeling benchmark consisting first bytes wikipedia preprocessed hutter prize model observes one encoded byte per time step trained predict future input next time step use comparably small models able run experiments quickly comparing thalnet models modules layer sizes experiments use weight normalized reading focus exploring learned connectivity patterns show competitive results task using larger models section simulate two sub time steps allow output module receive information current input frame discussed section models trained epochs batches size containing sequences length using rmsprop learning rate general observe different random seeds converging similar connectivity patterns recurring elements trained reading weights figure shows trained reading weights various reading mechanisms along connectivity graphs manually image represents reading weight matrix modules top bottom pixel row shows weight factors get multiplied produce single element context vector module weight matrices thus 
dimensions white pixels represent large magnitudes suggesting focus features positions weight matrices weight normalized reading clearly resemble boundaries four concatenated module features center vector even though model notion origin ordering elements center vector similar structure emerges fast softmax reading weight matrices sparser weights weight normalization course sequence observe weights staying constant others change magnitudes time step suggests optimal connectivity might include static dynamic elements however reading mechanism leads less stable training problem could potentially alleviated normalizing fast weight matrix fast gaussian reading see distributions occasionally tighten specific features first last modules modules receive input emit output modules learn large variance parameters effectively spanning center features could potentially addressed reading using mixtures gaussians context element instead generally find weight normalized fast softmax reading select features targeted way developing formal measurements deduction process seems beneficial future skip connection skip connection feedback connection skip connection weight normalization feedback connection fast softmax fast gaussian figure reading weights learned different reading mechanisms modules language modeling task alongside manually deducted connectivity graphs plot weight matrices produce context inputs four modules top bottom top images show focus input modules followed side modules output modules bottom pixel row gets multiplied center vector produce one scalar element context input visualize magnitude weights percentile include connectivity graph fast gaussian reading reading weights clearly structured commonly learned structures top row figure shows manually deducted connectivity graphs modules arrows represent main direction information flow model example two incoming arrows module figure indicate module mainly attends features produced modules infer connections larger weight magnitudes first third quarters reading weights module bottom row typical pattern emerges experiments seen connectivity graphs weight normalized fast softmax reading figures namely output module reads features directly input module direction connection established early training likely direct gradient path output input later side modules develop useful features support input output modules another pattern one module reads modules combines information figure module takes role reading modules distributing features via input module additional experiments four modules observed pattern emerge predominantly connection pattern provides efficient way information sharing modules connectivity graphs figure include hierarchical computation paths modules include learn skip connections known improve gradient flow popular models resnet highway networks densenet furthermore connectivity graphs contain backward connections creating feedback loops two modules feedback connections known play critical role neocortex inspired work related work describe recurrent mixture experts model learns dynamically pass information modules related approaches found various recurrent methods outlined section modular neural networks thalnet consists several recurrent modules interact exploit modularity common property existing neural models learn matrix tasks robot bodies improve multitask transfer learning learn modules modules specific objects present scene selected object classifier approaches specify modules corresponding specific task variable manually 
contrast model automatically discovers exploits inherent modularity task require correspondence modules task variables column bundle model consists central column several around applied temporal data observe structural similarity modules case weights shared among layers authors mention possibility learned computation paths learn connectivity modules alongside task various methods context also connectivity modules fernando learn paths multiple layers experts using evolutionary approach rusu learn adapter connections connect fixed previously trained experts exploit information approaches focus architectures recurrency approach allows complex flexible computational paths moreover learn interpretable weight matrices examined directly without performing costly sensitivity analysis neural programmer interpreted presented reed freitas related dynamic gating mechanisms work network recursively calls parameterized way perform computations comparison model allows parallel computation modules unrestricted connectivity patterns modules memory augmented rnns center vector model interpreted external memory multiple recurrent controllers operating preceding work proposes recurrent neural networks operating external memory structures neural turing machine proposed graves work investigate differentiable ways address memory reading writing thalnet model use multiple recurrent controllers accessing center vector moreover center vector recomputed time step thus confused persistent memory typical model external memory conclusion presented thalnet recurrent modular framework learns pass information neural modules hierarchical way experiments sequential permuted variants mnist promising sign viability approach experiments thalnet learns novel connectivity patterns include hierarchical paths skip connections feedback connections current implementation assume center features vector introducing matrix shape center features would open ways integrate convolutional modules similaritybased attention mechanisms reading center matrix shaped features easily interpretable visual input less clear structure leveraged modalities direction future work apply paradigm tasks multiple modalities inputs outputs seems natural either separate input module modality multiple output modules share information center believe could used hint specialization specific patterns create controllable connectivity patterns modules similarly interesting direction explore proposed model leveraged learn remember sequence tasks believe modular computation neural networks become important researchers approach complex tasks employ deep learning rich domains work provides step direction automatically organizing neural modules leverage order solve wide range tasks complex world references andreas rohrbach darrell klein neural module networks ieee conference computer vision pattern recognition pages hinton mnih leibo ionescu using fast weights attend recent past advances neural information processing systems pages cho van bahdanau bengio properties neural machine translation approaches syntax semantics structure statistical translation page cooijmans ballas laurent courville recurrent batch normalization arxiv preprint devin gupta darrell abbeel levine learning modular neural network policies transfer arxiv preprint fernando banarse blundell zwols rusu pritzel wierstra pathnet evolution channels gradient descent super neural networks arxiv preprint gilbert sigman brain states influences sensory processing neuron graves adaptive computation time recurrent 
neural networks arxiv preprint graves wayne danihelka neural turing machines arxiv preprint graves wayne reynolds harley danihelka colmenarejo grefenstette ramalho agapiou hybrid computing using neural network dynamic external memory nature hawkins george hierarchical temporal memory concepts theory terminology technical report numenta zhang ren sun deep residual learning image recognition ieee conference computer vision pattern recognition pages hinton krizhevsky wang transforming artificial neural networks machine learning icann pages hochreiter schmidhuber long memory neural computation huang liu weinberger van der maaten densely connected convolutional networks arxiv preprint jacobs jordan barto task decomposition competition modular connectionist architecture vision tasks cognitive science kingma adam method stochastic optimization international conference learning representations kirkpatrick pascanu rabinowitz veness desjardins rusu milan quan ramalho overcoming catastrophic forgetting neural networks proceedings national academy sciences page krizhevsky learning multiple layers features tiny images krueger maharaj pezeshki ballas goyal bengio larochelle courville zoneout regularizing rnns randomly preserving hidden activations arxiv preprint lecun cortes mnist database handwritten digits lillicrap cownden tweed akerman random synaptic feedback weights support error backpropagation deep learning nature communications mahoney test data http mcculloch pitts logical calculus ideas immanent nervous activity bulletin mathematical biophysics pham tran venkatesh one size fits many column bundle learning arxiv preprint reed freitas neural international conference learning representations rusu rabinowitz desjardins soyer kirkpatrick kavukcuoglu pascanu hadsell progressive neural networks arxiv preprint salimans kingma weight normalization simple reparameterization accelerate training deep neural networks advances neural information processing systems pages schmidhuber learning control memories alternative dynamic recurrent networks neural computation shazeer mirhoseini maziarz davis hinton dean outrageously large neural networks layer arxiv preprint sherman thalamus plays central role ongoing cortical functioning nature neuroscience srivastava greff schmidhuber highway networks arxiv preprint tieleman hinton lecture divide gradient running average recent magnitude coursera neural networks machine learning zenke poole ganguli improved multitask learning synaptic intelligence arxiv preprint supplementary material learning hierarchical information flow recurrent neural modules module designs reading mechanisms sequential mnist testing thalnet thalnet thalnet gru baseline thalnet gru thalnet epochs accuracy accuracy sequential mnist testing module designs thalnet weight norm thalnet linear gru baseline thalnet fast softmax epochs reading mechanisms figure test performance sequential mnist task grouped module design left reading mechanism right plots show top median bottom accuracy design choices recurrent modules train faster pure fully connected modules weight normalized reading stable performs best modules perform similarly limiting size center use sequential variant mnist compare reading mechanisms described section along implementations module function sequential mnist model observes handwritten digits pixels top bottom one row per time step prediction given last time step model integrate remember observed information sequence makes task challenging static setting recurrent network achieving 
error task implement modules test various combinations fully connected recurrent layers gated recurrent units gru modules require amount local structure allow test two fully connected layers gru layer gru fully connected followed gru gru followed fully connected gru sandwiched fully connected layers addition compare performance stacked gru baseline layers models pick largest layer sizes number parameters exceed train epochs batches size using rmsprop learning rate figure shows test accuracy module designs reading mechanisms thalnet outperforms stacked gru baseline configurations assume structure imposed model acts regularizer perform performance comparison section results module designs shown figure appendix observe benefit recurrent modules exhibit faster stable training fully connected modules could explained fact pure fully connected modules learn use routing center store information time long feedback loop fully connected layer recurrent layer also significantly improves performance fully connected layer gru let produce compact feature vectors scale better large modules although find beneficial later experiments section implementing modules single fully connected layer recovers standard recurrent neural network one large layer results reading mechanisms area shown figure reading mechanism small impact model performance find weight normalized reading yield stable performance linear fast softmax reading experiments use weight normalized reading due stability predictive performance include results fast gaussian reading performed performance range methods interpretation recurrent mixture experts thalnet route information input output multiple time steps enables trade shallow deep computation paths understand view thalnet smooth mixture experts model modules recurrent experts module outputs features center vector linear combination read next time step effectively performs mixing expert outputs compared recurrent mixture experts model presented shazeer model recurrently route information mixture multiple times increasing number mixture compounds highlight two extreme cases modules could read identical locations center case model wide shallow computation time step analogous graves extreme module reads different module recovering hierarchy recurrent layers gives deep narrow computation stretched multiple time steps exist spectrum complex patterns information flow differing dynamic computation depths comparable densenet also blends information paths different computational depth although purely model using modules model could still leverage recurrence modules center store information time however bounds number distinct computation steps thalnet could apply input using recurrent modules computation steps change time increasing flexibility model recurrent modules give stronger prior using feedback shows improved performance experiments comparison long memory viewing equations model definition section one might think model compares long memory lstm however exists limited similarity two models empirically observed lstms performed similarly gru baselines given parameter budget lstm context vector processed thalnet routing center modules lstm hidden output better candidate comparison thalnet center features allows relate recurrent weight matrix lstm layer linear version reading mechanism could relate thalnet module set multiple lstm units however lstm units perform separate scalar computations modules learn complex interactions multiple features time step alternatively could see lstm units small 
thalnet modules that read exactly four context elements, namely the input and the three gates; however, the computational capacity and local structure of individual lstm units are not comparable to the thalnet modules used in this work. | 2
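The routing equations summarized above are compact enough to sketch in a few lines of code. The following NumPy sketch is illustrative only — the module sizes, the feed-forward module body, the zero placeholder input for side modules, and the simplified weight normalization (row-wise unit norm without a learned scale) are assumptions rather than the configuration used in the paper — but it shows one full ThalNet step with static, weight-normalized reading: each module reads a context vector from the previous center, computes features, and the per-module center outputs are concatenated to form the next center vector.

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    class ThalNetSketch:
        """One ThalNet step with weight-normalized static reading (illustrative sizes)."""

        def __init__(self, num_modules=4, context_size=32, feature_size=32,
                     input_size=28, seed=0):
            rng = np.random.default_rng(seed)
            self.center_size = num_modules * feature_size
            self.modules = []
            for _ in range(num_modules):
                self.modules.append({
                    # unnormalized reading weights: one row per context element
                    "read": rng.normal(size=(context_size, self.center_size)),
                    # feed-forward module body mapping (input, context) -> features;
                    # the paper also evaluates recurrent (GRU-based) module bodies
                    "w": rng.normal(scale=0.1,
                                    size=(feature_size, input_size + context_size)),
                    "b": np.zeros(feature_size),
                })

        def read(self, module, center):
            # simplified weight normalization: each row is rescaled to unit norm,
            # so strengthening one connection pushes the others toward zero
            w = module["read"]
            w_hat = w / (np.linalg.norm(w, axis=1, keepdims=True) + 1e-8)
            return w_hat @ center                      # context input of this module

        def step(self, center, task_input):
            outputs = []
            for i, module in enumerate(self.modules):
                context = self.read(module, center)
                # only module 0 is treated as the input module; side modules get a
                # zero placeholder here purely to keep one weight shape per module
                x = task_input if i == 0 else np.zeros_like(task_input)
                feats = relu(module["w"] @ np.concatenate([x, context]) + module["b"])
                outputs.append(feats)                  # center output of module i
            return np.concatenate(outputs)             # next center vector

    # Unrolling over a sequence, e.g. the rows of a 28x28 digit:
    #   net = ThalNetSketch()
    #   center = np.zeros(net.center_size)
    #   for row in image:            # image: array of shape (28, 28)
    #       center = net.step(center, row)
    # The features of a designated output module (a slice of `center`) would then
    # feed a classifier; training and the dynamic reading variants are omitted here.

Dynamic mechanisms such as fast softmax or fast Gaussian reading would replace the fixed reading matrix with one computed from the module's current features; the static, weight-normalized variant sketched here is the configuration the paper reports as most stable.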
towards efficient abstractions concurrent may carlo vasileios trinity college dublin ireland spaccasc abstract consensus often occurring problem concurrent distributed programming present programming language simple semantics support consensus form communicating transactions motivate need construct characteristic example generalized consensus naturally encoded language focus challenges achieving implementation efficiently run programs setup architecture evaluate different implementation alternatives use experimentally evaluate runtime heuristics basis research project realistic programming language support consensus keywords concurrent programming consensus communicating transactions introduction achieving consensus concurrent processes ubiquitous problem multicore distributed programming among classic instances consensus leader election synchronous communication programming language support consensus however limited example cml communication primitives provide programming language abstraction implement consensus however used abstractly implement consensus three processes thm needs implemented basis let consider hypothetical scenario generalized consensus call saturday night sno problem scenario number friends seeking partners various activities saturday night list desired activities attend certain order agree night partner activity alice example looking company dinner movie necessarily person find partners events order may attempt synchronize handshake channels dinner movie student project paper primarily work first author supported msr mrl supported sfi project sfi def alice sync dinner sync movie sync synchronization operator similar csp synchronization bob hand wants dinner dancing def bob sync dinner sync dancing alice bob agree dinner need partners movie dancing respectively commit night agreement tentative let carol another friend group interested dancing def carol sync dancing bob carol agree dancing happy commit going however alice movie partner still cancel agreement bob happens bob carol need notified cancel agreement everyone starts search partners implementation sno scenario concurrent processes would need specialized way reversing effect synchronization suppose david also participant set friends def david sync dancing sync movie partial agreement alice bob carol canceled david together first two synchronize dinner dancing movie agree leaving carol home notice alice raised objection agreement forming bob carol three participants forced restart however carol taken agreement even bob happy commit plans david would able take carol place work alice bob point carol joined would need repeated programming sno arbitrary number processes form multiple agreement groups cml complicated especially consider participants allowed perform arbitrary computations synchronizations affecting control flow communicate parties directly involved sno example bob may want dancing agree babysitter stay late def bob sync dinner babysitter sync dancing case bob computation outside sno group processes implement would require code dealing sno protocol written babysitter process breaking potential modular implementation paper shows communicating transactions recently proposed mechanism automatic error recovery ccs processes useful mechanism modularly implementing sno generalized consensus scenarios previous work communicating transactions focused behavioral theory unit bool int chan true false fun let else send recv newchana spawn atomic commit fst snd add sub mul leq let else send send recv spawn var chan 
fig tcml syntax respect safety liveness however effectiveness construct pragmatic programming language yet proven one main milestones achieve direction invention efficient runtime implementations communicating transactions describe challenges first results recently started project investigate research direction particular equip simple concurrent functional language communicating transactions use discuss challenges making efficient implementation languages sect also use language give modular implementation consensus scenarios sno example simple operational semantics language allows communication sno processes arbitrary processes babysitter process without need add code sno protocol processes moreover efficient partially aborting strategy discussed captured semantics semantics language allowing different runtime scheduling strategies processes efficient others study relative efficiency developed skeleton implementation language allows plug evaluate runtime strategies sect describe several strategies sect report results evaluations sect finally summarize related work area future directions project sect tcml language study tcml language combining communicating transactions language use abstract syntax shown fig usual abbreviations values tcml either constants base type unit bool int pairs values type recursive functions channels carrying values type chan simple type system appropriate progress true else false else let fun fun step spawn spawn newchan newchana atomic atomic commit commit let app fig sequential reductions preservation theorems found accompanying technical report omitted source tcml programs expressions functional core language ranged whereas running programs processes derived syntax besides standard lambda calculus expressions functional core contains constructs send recv synchronously send receive value channel respectively newchana create new channel type chan constructs spawn atomic executed respectively spawn new process transaction commit commits transaction shortly describe constructs detail simple running process expression also constructed parallel composition treat free channels considering global thus channel free used communication processes construct encodes restriction scope process use barendregt convention bound variables channels identify terms alpha conversion moreover write free channels process write process encoding communicating transaction thought process default transaction runs transaction commits however transaction aborts discarded entire transaction replaced alternative process intuitively continuation transaction case abort explain commits asynchronous requiring addition process language name transaction bound thus default transaction potentially spawn ftn gives free transaction names processes free variables reduce using transitions form transitions functional part language shown fig defined terms reductions redex eager evaluation contexts whose grammar given fig due unique decomposition lemma expression decomposed evaluation context redex expression one way use standard substitution returning result operator defined rule step lifts functional reductions process reductions rest reduction rules fig deal concurrent transactional expressions rule spawn reduces spawn expression evaluation position unit value creating new process running application type system language guarantees value thunk rule derive reductions spawn send recv send recv send recv resulting processes reductions communicate channel previously mentioned free channel also used communicate 
parallel process rule newchan gives processes ability create new locally scoped channels thus following expression result input output process communicate let newchanint spawn send recv spawn send recv send recv rule atomic starts new transaction current process engulfing entire process storing abort continuation alternative transaction rule commit spawns asynchronous commit transactions arbitrarily nested thus write atomic spawn recv commit atomic recv commit reduces recv commit recv commit atomic recv commit process commit input channel inner input see transaction aborts inner discarded even performed input resulting process alternative restart atomic recv commit effect abort rollback communication reverting program consistent state process transactional reductions handled rules fig first four rules sync par chan direct adaptations reduction rules allow parallel processes communicate propagate reductions parallel restriction rules use omitted recv send par chan emb step sync abort fig concurrent transactional reductions omitting symmetric rules structural equivalence identify terms reordering parallel processes extrusion scope restricted channels spirit semantics rule step propagates reductions default processes respective transactions remaining rules taken transccs rule emb encodes embedding process parallel transaction enables communication default also keeps current continuation alternative case aborts illustrate mechanics embed rule let consider nested transaction running parallel process send send recv commit recv commit atomic recv commit two embedding transitions recv commit recv commit communicate inner transaction recv commit send commit next least two options either commit spawns process causes commit input embedded let assume latter occurs recv commit send commit recv commit transactions ready commit using rule commit commits necessary guarantee transactions communicated reached agreement commit also important consequence making following three processes behaviorally indistinguishable therefore implementation tcml dealing first three processes pick alternative mutual embeddings transactions without affecting observable outcomes program fact one transactions possibility committing two transactions never communicate implementation decide never embed two transactions crucial creating implementations embed processes transactions necessary communication pick efficient available embeddings development implementations efficient embedding strategies one main challenges project scaling communicating transactions pragmatic programming languages similarly aborts entirely abort left discretion underlying implementation thus example transaction abort stage discarding part computation examples usually multitude transactions aborted cases forward reduction possible due deadlock aborts necessary making tcml programmer charge aborts commits desirable since purpose communicating transactions lift burden manual error prediction handling minimizing aborts automatically picking aborts undo fewer computation steps still rewinding program back enough reach successful outcome another major challenge project sno scenario simply implemented tcml using restarting transactions restarting transaction uses recursion identical transaction case abort atomicrec def fun atomic transactional implementation sno participants discussed introduction simply wraps code restating transactions let alice atomicrec sync dinner sync movie commit let bob atomicrec sync dinner sync dancing commit let carol atomicrec sync 
dancing commit let david atomicrec sync dancing sync movie commit spawn alice spawn bob spawn carol spawn david dinner dancing movie implementations csp synchronization channels sync function synchronize channels compared transaction trie sched gath abort embed commit notif ack fig tcml runtime architecture potential implementation sno cml simplicity code evident version bob communicating babysitter simple however discuss sect simplicity comes severe performance penalty least straightforward implementations tcml essence code asks underlying transactional implementation solve satisfiability problem leveraging existing useful heuristics problems something intend pursue future work following sections describe implementation transactional scheduling decisions plugged number heuristic transactional schedulers developed evaluated work shows although advanced heuristics bring measurable performance benefits exponential number runtime choices require development innovative compilation execution techniques make communicating transactions realistic solution programmers extensible implementation architecture developed interpreter tcml reduction semantics concurrent haskell different decisions transitions semantics briefly explain runtime architecture interpreter shown fig main haskell threads shown round nodes figure concurrent functional expression interpreted thread according sequential reduction rules fig previous section expression generally handled interpreting thread creating new channels spawning new threads starting new transactions except new channel creation evaluation expression cause notification shown dashed arrows fig sent gatherer process process responsible maintaining global view state running program trie essentially represents transactional structure program logical nesting transactions processes inside running transactions data ttrie ttrie threads children set threadid map transactionid ttrie ttrie node represents transaction program main information stored node set threads threads transactions children running transactional level child transaction associated ttrie node invariant thread transaction identifier appears example complex program saw page recv commit recv commit atomic recv commit tidp associated trie ttrie threads tidp children ttrie threads children ttrie threads children last ingredient runtime implementation scheduler thread sched fig makes decisions commit embed abort transitions performed expression threads based information trie decision made scheduler appropriate signals implemented using haskell asynchronous exceptions sent running threads shown dotted lines fig implementation parametric precise algorithm makes scheduler decisions following section describe number algorithms tried evaluated scheduler signal received thread cause update local transactional state thread affecting future execution thread local state thread object tprocess data tprocess expr expression ctx context alternative data alternative tname transactionid tprocess local state maintains expression expr evaluation context ctx currently interpreted thread list alternative processes represented objects alternative list contains continuations stored thread embedded transactions nesting transactions list mirrors transactional nesting global trie thus compatible transactional nesting expression threads let back example page recv commit recv commit atomic recv commit tidp send send embedded thread evaluating local state object expr tname tname recording fact thread running part turn inside either 
transactions aborts thread rollback list alternatives appropriately updated aborted transaction removed transactional reconfiguration performed thread acknowledgment sent back gatherer discussed responsible updating global transactional structure trie closes cycle transactional reconfigurations initiated process starting new transaction thread scheduler issuing commit embed abort described far simple architecture interpreter tcml various improvements possible addressing message bottleneck gatherer beyond scope paper following section discuss various policies scheduler evaluate experimentally transactional scheduling policies goal investigate schedulers make decisions transactional reconfiguration based runtime heuristics currently working advanced schedulers including schedulers take advantage static information extracted program leave future work important consideration designing scheduler adequacy chap sec given program adequate scheduler able produce outcomes operational semantics produce program however mean scheduler able produce traces semantics many traces simply abort restart computations previous work behavioral theory communicating transactions shown program outcomes reached traces never restart computation thus goals schedulers minimize minimizing number aborts moreover discussed end sect many exponential number embeddings avoided without altering observable behavior program done embedding process inside transaction embedding necessary enable communication process transaction take advantage communicationdriven scheduler describe section even reducing number possible choices faced scheduler cases still left multitude alternative transactional reconfiguration options likely lead efficient traces however preserve adequacy exclude options since scheduler way foresee outcomes cases assign different probabilities available choices based heuristics leads measurable performance improvements without violating adequacy course program outcomes might likely appear others approach trading measurable fairness performance improvement however probabilistic approach theoretically fair every finite trace leading program outcome probability diverging traces due sequential reductions also probability occur traces zero probability reduction semantics infinite number reductions intuitively unfair traces abort restart transactions infinitum even options possible random scheduler first scheduler consider random scheduler whose policy simply point select one nondeterministic choices equal probability without excluding choices scheduler abort embed commit actions equally likely happen although naive scheduler particularly efficient one would expect obviously adequate fair scheduler according discussion reduction transition available infinitely often scheduler eventually select scheduler leaves much room improvement suppose transaction ready commit since makes distinction choices committing aborting often unnecessarily abort processes embedded transaction roll back transaction restarts transaction also results considerable performance penalty similarly scheduler might preemptively abort transaction could committed given enough time embeddings purpose communication staged scheduler staged scheduler partially addresses issues prioritizing available choices whenever transaction ready commit scheduler always decide send commit signal transaction aborting embedding another process violate adequacy continuing algorithm let examine adequacy prioritizing commits transactional actions example example consider 
following program ready commit embedding leads program outcome outcome also reached committing residual alternatively program outcome could reachable aborting process however spawned one previous states program current trace state transaction necessarily form commit state abort enabled therefore staged interpreter indeed allows trace leading program state outcome question reachable commit possible transaction staged interpreter prioritizes embeds transaction aborting transaction adequate decision transactions take abort reduction embed step equivalent abort reduction step commit embed options available transaction staged interpreter lets transaction run probability giving chances make progress current trace probability aborts numbers number experiments benefit heuristic implemented scheduler minimizes unnecessary aborts improving performance drawback abort transactions often thus program outcomes reachable transactional alternatives less likely appear moreover scheduler avoid unnecessary embeddings scheduler avoid spurious embeddings scheduler improves performing embed transition necessary imminent communication example following program state embedding process never chosen recv send however process reduces output embedding enabled equivalence previously discussed scheduler adequate implementation scheduler augment information stored trie sect channel thread waiting communicate see sect heuristic significantly boosts performance greatly reduces exponential number embedding choices scheduler final scheduler report adds minor improvement upon scheduler scheduler keeps timer running transaction transaction trie timer reset whenever communication transactional operation happens inside transaction considered abort timer expires strategy benefits longrunning transactions perform multiple communications committing scheduler obviously adequate adds time delays evaluation interpreters report experimental evaluation interpreters using preceding scheduling policies interpreters compiled ghc experiments performed windows machine intel coretm ghz processor ram run several versions two programs sno example committed rendezvous number concurrent processes fig experimental results rendezvous number processes compete synchronize channel two processes forming groups three exchange values standard example agreement tcml implementation example process nondeterministically chooses leader follower within communicating transaction leader two followers communicate exchange values commit situation leads deadlock eventually abort transactions involved sno example introduction implemented sect multiple instances alice bob carol david processes test scalability schedulers tested number versions programs different number competing parallel processes process programs continuously performs sno cycles interpreters instrumented measure number operations given time compute mean throughput successful sno operations results shown fig graph figure contains mean throughput operations logarithmic scale function number competing concurrent tcml processes graphs contain runs scheduler discussed random staged timed aborts well ideal program ideal program case similar tcml implementation ideal version sno running simpler instance scenario without carol instance deadlocks therefore needs error handling ideal programs give performance upper bound predictable random scheduler performance worst many cases could perform operations window measurements schedulers perform better order magnitude even prioritizing transactional 
reconfiguration choices significantly cuts exponential number inefficient traces however none schedulers scale programs processes performance deteriorates exponentially fact timedaborts scheduler see worst throughput larger process pools many competing processes possibility enter path deadlock cases results suggest better abort early upper bound performance shown throughput one order magnitude best interpreter concurrent processes within range experiments two orders many concurrent processes performance increasing processes due better utilization processor cores clear order achieve pragmatic implementation tcml need address exponential nature consensus scenarios ones tested exploration purely runtime heuristics shows performance improved need turn different approach close gap ideal implementations abstract tcml implementations conclusions future work consensus often occurring problem concurrent distributed programming need developing programming language support consensus already identified previous work transactional events communicating memory transactions cmt transactors cjoin approaches propose forms restarting communicating transactions similar described sect cmt transactors used implement instance saturday night sno example paper extends cml events transactional sequencing operator transactional communication resolved runtime search threads exhaustively explore possibilities synchronization avoiding runtime aborts cmt extends stm asynchronous communication maintaining directed dependency graph mirroring communication transactions stm abort triggers cascading aborts transactions received values aborting transactions transactors extend actor semantics primitives enabling composition systems consistent distributed state via distributed checkpointing cjoin calculus extends join calculus isolated transactions merged merging aborting managed programmer offering manual alternative tcml nondeterministic transactional operations unclear write straightforward implementation sno example cjoin reference implementations developed cmt cjoin discovery efficient implementations communicating transactions could equally beneficial approaches stabilizers add transactional support presence transient faults directly address concensus scenarios sno example paper presented tcml simple functional language support consensus via communicating transactions construct robust behavioral theory supporting use programming language abstraction automatic error recovery tcml simple operational semantics simplify programming advanced consensus scenarios introduced example sno natural encoding tcml usefulness communicating transactions applications however depends invention efficient implementations paper described obstacles need overcome first results recently started project developing implementations gave framework develop evaluate current future runtime schedulers communicating transactions used examine schedulers based solely runtime heuristics found heuristics improve upon performance naive randomized implementation scale programs significant contention exponential number alternative computation paths lead necessary rollbacks clear purely dynamic strategies lead sustainable performance improvements future work intend pursue direction based extraction information source code guide language runtime information include abstract model communication behavior processes used predict high probability future communication pattern promising approach achieve development technology type effect systems static analysis although 
scheduling communicating transactions is theoretically computationally expensive, realistic performance in many programming scenarios could be achievable.
references
bruni, melgratti, montanari. cjoin: join with communicating transactions. to appear in mscs.
de vries, koutavas, hennessy. liveness of communicating transactions. aplas.
donnelly, fluet. transactional events. icfp.
field, varela. transactors: a programming model for maintaining globally consistent distributed state in unreliable environments. popl.
harris, marlow, peyton jones, herlihy. composable memory transactions. commun. acm.
herlihy, shavit. the art of multiprocessor programming. morgan kaufmann.
peyton jones, gordon, finne. concurrent haskell. popl.
kshemkalyani, singhal. distributed computing: principles, algorithms, and systems. cambridge university press.
lesani, palsberg. communicating memory transactions. ppopp.
marlow, peyton jones, moran, reppy. asynchronous exceptions in haskell. pldi.
reppy. concurrent programming in ml. cambridge university press.
spaccasassi. transactional concurrent ml. technical report.
de vries, koutavas, hennessy. communicating transactions. concur.
ziarek, schatz, jagannathan. stabilizers: a modular checkpointing abstraction for concurrent functional programs. icfp. | 6
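The scheduling policies evaluated above differ only in how the scheduler ranks the commit, embed and abort options exposed by the transaction trie. The interpreter itself is written in Concurrent Haskell; the short Python sketch below is therefore only an illustration of the decision logic that the staged, communication-driven and timed-aborts heuristics correspond to — the trie representation, field names, and the probability and timeout constants are assumptions, not values from the paper.

    import random
    from dataclasses import dataclass, field

    @dataclass
    class TxnNode:
        """Simplified stand-in for one ttrie node (a running transaction)."""
        name: str
        ready_to_commit: bool = False
        waiting_channels: set = field(default_factory=set)  # channels its threads block on
        idle_ticks: int = 0                                  # time since last internal action

    def imminent_embeds(txn, outside_waiting):
        """Embed a process only when communication is imminent, i.e. it is blocked
        on a channel that some thread inside the transaction is also waiting on."""
        return [proc for proc, chans in outside_waiting.items()
                if chans & txn.waiting_channels]

    def decide(txn, outside_waiting, abort_prob=0.05, abort_after=100):
        # staged heuristic: a commit-ready transaction is always committed first;
        # the paper argues this prioritization preserves adequacy
        if txn.ready_to_commit:
            return ("commit", None)
        # communication-driven heuristic: never embed speculatively
        candidates = imminent_embeds(txn, outside_waiting)
        if candidates:
            return ("embed", random.choice(candidates))
        # timed-aborts variant: let long-running transactions make progress, but
        # abort after a long idle period, or occasionally with small probability
        if txn.idle_ticks > abort_after or random.random() < abort_prob:
            return ("abort", None)
        return ("run", None)

A purely random policy would instead choose uniformly among all enabled commit, embed and abort transitions; the experiments above report that even these simple priorities improve throughput by roughly an order of magnitude over that baseline, while still falling well short of the ideal, transaction-free programs.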
network recurrent neural networks wang oct school software beijing jiaotong university beijing china oujago abstract describe class systems theory based neural networks called network recurrent neural networks introduces new structure level rnn related models rnns viewed neurons used build layers specifically propose several methodologies design different topologies according theory system evolution carry experiments three different tasks evaluate implementations experimental results show models outperform simple rnn remarkably number parameters sometimes achieve even better results gru lstm introduction recent years recurrent neural networks rnns elman widely used natural language processing nlp traditionally rnns directly used build final models paper propose novel idea called network recurrent neural networks utilizes existing basic rnn layers make structure design layers standpoint systems theory von bertalanffy von bertalanffy recurrent neural network group organization made number interacting parts actually viewed complex system complexity dialectically every system relative system parts also part larger system structures rnn viewed neuron several neurons used build layers rather directly used construct whole models conventionally three levels structure deep neural networks dnns neurons layers whole nets called models perspective systems theory level increasing complexity novel features exist lower levels emerge lehn example neurons level single neuron simple generalization capability poor certain number neurons accumulated certain elaborate structure certain ingenious combinations layers higher level begin get unprecedented ability classification feature learning importantly copyright association advancement artificial intelligence rights reserved new gained capability property deducible reducible constituent neurons lower levels property simple superposition constituent neurons whole greater sum parts systems theory kind phenomenon known whole emergence wierzbicki whole emergence often comes evolution system arthur others system develops lower level higher level simplicity complexity paper motivation structures introduce new structure level networks transferring traditional rnn system agent outer dimension inner dimension fromm brian arthur arthur others identified three mechanisms complexity tends grow systems evolve mechanism increase diversity agent system seem new instance agent class type species result system seems new external agent types capabilities mechanism increase structural sophistication individual system steadily accumulates increasing numbers new systems parts thus newly formed system seems new internal subsystems capabilities mechanism increase capturing software system capture simpler elements learns program software used ends paper guidance first two mechanisms introduce two methodologies structures design named aggregation specialization aggregation specialization natural operations increasing complexity complex systems fromm former related arthur second mechanism traditional rnns aggregated accumulated highlevel layer accordance specific structure latter related arthur first mechanism rnn agent layer specialized rnn agent performs specific function make several implementations carry experiments three different tasks including sentiment classification question type classification named entity recognition experimental results show models outperform constitute simple rnn remarkably ber parameters achieve even better results gru lstm sometimes background systems 
theory systems theory originally proposed biologist ludwig von bertalanffy von bertalanffy von bertalanffy biological phenomena biology systems several different levels begin smallest units life reach largest extensive category molecule cell tissue organ organ system organization etc traditionally system could decomposed individual components component could analyzed independent entity components could added linear fashion describe totality system walonick however von bertalanffy argued fully comprehend phenomenon simply breaking elementary parts reforming instead need apply global systematic perspective underline functionality mele pels polese system characterized interactions components nonlinearity interactions walonick whole emergence systems theory phenomenon whole irreducible parts known emergence whole emergence wierzbicki emergence qualitatively described whole greater sum parts upton janeka ferraro also quantitatively expressed arbitrary sequences inputs formally given sequence vectors equation simple rnn elman parameter matrices denotes nonlinearity function tanh relu simplicity neuron biases omitted equation actually rnns behave chaotically works analysing rnns theoretically experimentally perspective systems theory sontag provided exposition research regarding systemtheoretic aspects rnns sigmoid activation functions bertschinger analyzed computation edge chaos rnns calculated critical boundary parameter space transition ordered chaotic dynamics takes place pascanu mikolov bengio employed dynamical systems perspective understand exploding gradients vanishing gradients problems rnns paper obtain methodologies systems theory conduct structure designs rnn related models network recurrent neural networks overall architecture whole system consists parts part philip anderson highlighted idea emergence article different anderson stated change scale often causes qualitative change behavior system example human brains one examines single neuron nothing suggests conscious collection millions neurons clearly able produce wonderful consciousness mechanisms behind emergence complexity used design neural network structures one widely accepted reasons repeated application combination two complementary forces operations stretching folding physics term thompson stewart splitting merging computer science term hannebauer specialization cooperation sociology term merging aggregating agents means generally number agents aggregated conglomerated single agent splitting specializing means agents clearly separated agent constrained certain class role fromm recurrent neural networks edge chaos recurrent neural networks rnns werbos elman class deep neural networks possess internal memory due recurrent connections units makes able process figure overview structure illustration shown figure architecture structure summarize architecture four components component input output control head tail layer component subnetworks charge spatial extension component memories responsible temporal extension whole structure describe component follows component component controls head architecture data preprocessing tasks distributes processed input data subnetworks form upcoming input data may various one single vector several vectors multigranularity information even feature vectors noise one single vector may simplest situation common solution copying vector duplicates feed one single subnetwork component paper copying method meets needs layer another layer layer layer figure sectional views layers one means component 
means component means rnn neuron formalize xit means copy function xit fed subnetwork component component manages memories whole layer internal also external memories weston chopra bordes paper component considers internal memory apply extra processing individual memory rnn neuron mjt means identity function superscript identifier rnn neuron mjt memory jth rnn neuron transformation output rnn neuron component component made several different subnetworks interaction may exist subnetworks responsibility component manage logic subnetwork handle interaction suppose component component receives inputs produces outputs output generated necessary inputs memories skt skt output needed inputs memories nonlinear function rnn rnn etc component form layer need certain amount neurons one properties multiple rnns natural approach integrate multiple rnn neurons signals collecting outputs first using mlp layer measure weights outputs traditional neuron outputs single real value collection method directly arranging vector rnn neurons different outputs vector value simple method concatenating vectors connecting notation subnetwork different neuron one subnetwork may composed several neurons use superscript identifier subnetwork input data subnetwork denoted xit paper use simple rnn elman applied relu activation basic rnn neuron thus memory output last concatenated vector next mlp another pooling rnn output vector real value arranging real values vector seems traditional neurons paper former solution used formalized concatenated vector weight mlp means relu activation function mlp methodology aggregation operation changing boundary cause emergence complexity natural boundary agent sudden emergence complexity possible boundary complexity transfered agent system vice versa system agent two basic operations aggregation specialization used transfer complexity different dimensions fromm according arthur second mechanism internal complexity increased aggregation composition means number rnn agents conglomerated single big system way aggregation composition transfer traditional rnn outer inner dimension system agent selected rnns accumulated become part larger group concrete layer suppose composed subnetworks subnetwork made rnn neurons given input operation flow follows component copy duplications using equation get xnt component deliver memory rnn neuron last current using equation get memories first subnetwork memories second subnetwork etc component subnetwork take advantage input xit memories get nont linear transformation output sit xit get snt component concatenate outputs equation use mlp function determine much signals subnetwork flow component equation obviously number type interaction aggregated rnns determine internal structure inner complexity newly formed layer system thus propose three kinds topologies aggregation method systems theory natural description complex system system created replication adaptation replication means copy reproduce new rnn agent adaptation means totally changes weights somewhere else variation increase diversity system shown figure layer called manor composed four parallel rnns figure shows layer unrolled full network subnetwork layer rnn thus subnetwork component calculated sit oit xit mit means relu activation function parameters corresponding rnn neuron oit nonlinear transformation output delivered next used sit output subnetwork equal oit figure unfolding three introduce new agent type system learn sequence dependencies different timescales figure shows layer made four 
subnetworks two rnns others rnns two kinds timescale dependencies learned component formalized follows mentioned aggregation composition operation lead big rnn turn also combined form even bigger group repeated aggregation high accumulation makes fractal structure come figure unfolding three nonlinear function equation subnetwork may complex example figure shows layer made three rnns subnetwork component calculated sit xit combination multiple rnns layer makes somewhat like ensemble empirically diversity among members group agents deemed key issue ensemble structure kuncheva whitaker one way increase diversity use topology figure unfolding three shown figure also use three paths path first learns intermediate tation second layers gather intermediate representations three paths learn abstract features way different paths learn train independently connections among helps model easy share informations thus becomes possible whole model learns trains organic rather parallel independent structure formalize cooperation component follows figure gate specialization methodology specialization mentioned emergence complexity usually connected transfer complexity transfer boundary system aggregation composition transfer complexity system agent outer dimension inner dimension another way used cross agent boundary specialization inheritance transfer complexity agent system inner dimension outer dimension fromm specialization related arthur first mechanism increases structural sophistication outside agent adding new agent forms inheritance specialization objects become objects certain class agents become agents certain type agent becomes particular class type needs delegate special tasks handle alone agents fromm effect specialization emergence delegation division labor newly formed groups thus formalization output component rewritten following skt specialized agent function means cooperation specialized agents number specialized agents equation denotes function equation implemented separated operations see gate mechanism one specialization methods shown figure general rnn agent separated two specialized rnn agents one gate duty generalization duty concrete shown original layer rnn agent specialized one generalization specific rnn one gate specific rnn figure sectional views layer one formalize denotes sigmoid activation multiplication denotes relationship lstm gru see long shortterm memory lstm hochreiter schmidhuber gated recurrent unit gru chung two special cases network recurrent neural networks take lstm example given input previous memory cell hidden state transition equations standard lstm expressed following tanh tanh perspective network recurrent neural networks lstm made four rnns three task sentiment classification question classification named entity recognition params irnn gru lstm table number hidden neurons rnn gru lstm network size specified terms number parameters weights four rnns specialized gate tasks control much informations let different parts moreover shared memory accessed rnn cell lstm turn lstm gru also combined form even bigger group experiments order evaluate performance presented model structures design experiments following tasks sentiment classification question type classification named entity recognition compare models comparable parameter numbers validate capacity better utilizing parametric space order verify effectiveness universality experiments conduct three comparative tests total parameters different orders magnitude see table every experiment repeated times 
different random initializations report mean results worthy noting aim compare model performance settings achieve best performance one single model jaitly hinton showed initializing recurrent weight matrix identity matrix biases zero simple rnn composed relu activation function named irnn comparable even outperform lstm experiments basic rnn neurons simple rnns applied relu function also keep number hidden units rnn neurons model obviously baseline model single giant simple rnn elman applied relu activation time two improved rnns gru chung lstm hochreiter schmidhuber widely successfully used nlp recent years also choose baseline models glove google news obtained word embeddings training fix word embeddings learn parameters models embeddings words set zero vectors pad crop input sentences fixed length https https trainings done stochastic gradient optimizer descent shuffled optimizer adam kingma models regularized using dropout srivastava method time order avoid overfitting early stopping applied prevent unnecessary computation training details setting found codes publicly available sentiment classification evaluate models task sentiment classification popular stanford sentiment treebank sst benchmark socher consists movie reviews split train dev test sst provides detailed annotation sentences along phrases annotated labels positive positive neural negative negative experiments use annotation one goals avoid expensive phraselevel annotation like qian huang zhu another practice annotation hard provide models use architecture embedding layer dropout layer layer layer layer dropout layer softmax layer first layer word embedding layer next layers feature transformation layer layer transformed feature vectors selecting max value position get sentence representation finally softmax layer used output layer get final result benefit regularization two dropout layers rate added embedding layer softmax layer initial learning rates models set use public available glove vectors initialize word embeddings three different network sizes tested architecture number parameters roughly see table set minibatch size finally use criterion loss function results experiments shown table obvious models get superior performances compared irnn baseline especially network size big enough models improve network size grows among models gets best results model irnn gru lstm params params params table accuracy comparison different experiments sst corpus however find lstm gru get much better results three comparative tests consists sentences training set sentences validation set sentences test set model irnn gru lstm params params params table comparison different experiments corpus question type classification question classification important step question answering system classifies question specific type task use trec roth benchmark divides questions categories location human entity abbreviation description numeric terc provides labeled questions training set questions test randomly select training data validation set model irnn gru lstm params params params table accuracy comparison different experiments trec corpus network types use architecture embedding layer dropout layer layer layer dropout layer softmax layer dropout rates set three hidden layer sizes chosen total number parameters whole model roughly see table networks use learning rate trained minimize cross entropy error table shows accuracy different networks question type classification task models get better results baseline irnn model among models also gets 
best result dataset find performances lstm gru even comparable irnn proves validity results jaitly hinton named entity recognition named entity recognition ner classic nlp task tries identity proper names persons organizations locations entities given text experiment dataset tjong kim sang meulder recently popular ner models based bidirectional lstm combined conditional random fields crf named lample networks effectively use past future features via layer sentence level tag information via crf layer experiments also adapt architecture replacing lstm nors variation rnns universal architecture tested models embedding layer dropout layer layer crf layer three hidden layer sizes chosen total number parameters whole network roughly see table apply dropout embedding layer initial learning rate set every epoch reduced factor size minibatch train networks epochs early stop training epochs improvement validation set results summarized table surprisingly nors perform much better giant single rnnrelu model see gru performs worst followed irnn compared gru irnn lstm performs well especially network size grows time models get superior performances irnn gru lstm among model get best results conclusion conclusion introduced novel kind systems theory based neural networks called network recurrent neural network views existing rnns example simple rnn gru lstm neurons utilizes rnn neurons design layers proposed several methodologies design different topologies according evolution systems theory arthur others conducted experiments three kinds tasks including sentiment classification question type classification named entity recognition evaluate proposed models experimental results demonstrated models get superior performances compared single giant rnn models sometimes performances even exceed gru lstm references anderson anderson different science arthur others arthur evolution complexity technical report bertschinger bertschinger computation edge chaos recurrent neural networks neural computation chung chung gulcehre cho bengio empirical evaluation gated recurrent neural networks sequence modeling arxiv preprint elman elman finding structure time cognitive science fromm fromm emergence complexity kassel university press kassel hannebauer hannebauer autonomous dynamic reconfiguration systems improving quality efficiency collaborative problem solving hochreiter schmidhuber hochreiter schmidhuber long memory neural computation kingma kingma adam method stochastic optimization arxiv preprint kuncheva whitaker kuncheva whitaker measures diversity classifier ensembles relationship ensemble accuracy machine learning lample lample ballesteros subramanian kawakami dyer neural architectures named entity recognition arxiv preprint jaitly hinton jaitly hinton simple way initialize recurrent networks rectified linear units arxiv preprint lehn lehn toward complex matter supramolecular chemistry proceedings national academy sciences roth roth learning question classifiers proceedings international conference computational association computational linguistics mele pels polese mele pels polese brief review systems theories managerial applications service science pascanu mikolov bengio pascanu mikolov bengio difficulty training recurrent neural networks icml qian huang zhu qian huang zhu linguistically regularized lstms sentiment classification arxiv preprint socher socher perelygin chuang manning potts recursive deep models semantic compositionality sentiment treebank proceedings conference empirical methods natural 
language processing emnlp volume citeseer sontag sontag recurrent neural networks aspects dealing complexity neural network approach citeseer srivastava srivastava hinton krizhevsky sutskever salakhutdinov dropout simple way prevent neural networks overfitting journal machine learning research thompson stewart thompson stewart nonlinear dynamics chaos john wiley sons tjong kim sang meulder tjong kim sang meulder introduction shared task named entity recognition proceedings seventh conference natural language learning association computational linguistics upton janeka ferraro upton janeka ferraro whole sum parts aristotle metaphysical journal craniofacial surgery von bertalanffy von bertalanffy general system theory new york von bertalanffy von bertalanffy history status general systems theory academy management journal walonick walonick general systems theory information http statpac htm werbos werbos generalization backpropagation application recurrent gas market model neural networks weston chopra bordes weston chopra bordes memory networks arxiv preprint wierzbicki wierzbicki systems theory theory chaos emergence technen elements recent history information technologies epistemological conclusions springer | 9 |
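The row above describes the "network of recurrent neural networks" layer: component A copies the input to every sub-network, each sub-network is an Elman RNN with ReLU activation, and component C concatenates the sub-network outputs and re-weights them with a small MLP. The NumPy sketch below only illustrates that aggregation topology under assumptions of my own — the dimensions, the initialization, and identifiers such as `AggregatedLayer` are invented here, not taken from the paper — and it omits training entirely.

```python
import numpy as np

def relu(v):
    return np.maximum(0.0, v)

class ElmanReLUCell:
    """One RNN 'neuron': h_t = relu(W x_t + U h_{t-1}); h plays the role of the internal memory."""
    def __init__(self, input_dim, hidden_dim, rng):
        self.W = rng.normal(0.0, 0.1, (hidden_dim, input_dim))
        self.U = 0.5 * np.eye(hidden_dim)
        self.h = np.zeros(hidden_dim)

    def step(self, x):
        self.h = relu(self.W @ x + self.U @ self.h)
        return self.h

class AggregatedLayer:
    """Component A copies x_t to every sub-network; component C concatenates the
    outputs o^i_t and re-weights them with an MLP: s_t = relu(W_mlp [o^1_t; ...; o^n_t])."""
    def __init__(self, input_dim, hidden_dim, n_subnets, out_dim, rng):
        self.cells = [ElmanReLUCell(input_dim, hidden_dim, rng) for _ in range(n_subnets)]
        self.W_mlp = rng.normal(0.0, 0.1, (out_dim, hidden_dim * n_subnets))

    def step(self, x):
        outs = [cell.step(x) for cell in self.cells]      # one output per sub-network
        return relu(self.W_mlp @ np.concatenate(outs))    # MLP re-weighting of the concatenation

rng = np.random.default_rng(0)
layer = AggregatedLayer(input_dim=8, hidden_dim=16, n_subnets=4, out_dim=16, rng=rng)
for t in range(5):                       # feed a short random input sequence
    s_t = layer.step(rng.normal(size=8))
print(s_t.shape)                         # (16,)
```

The gate-specialization variant mentioned in the row would replace the plain concatenation by an elementwise product of a sigmoid gate RNN output with a generalization RNN output, which is how the text relates LSTM and GRU cells to this picture.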
second cohomology nilpotent orbits exceptional lie algebras nov pralay chatterjee chandan maity abstract second rham cohomology groups nilpotent orbits complex simple lie algebras described paper consider exceptional lie algebras compute dimensions second cohomology groups nilpotent orbits rest cases nilpotent orbits covered computations obtain upper bounds dimensions second cohomology groups introduction let connected real simple lie group lie algebra element called nilpotent nilpotent operator let corresponding nilpotent orbit adjoint action nilpotent orbits form rich class homogeneous spaces studied interface several disciplines mathematics lie theory symplectic geometry representation theory algebraic geometry various topological aspects orbits drawn attention years see references therein account proposition large class semisimple lie groups criterion given exactness two form arbitrary adjoint orbits turn led authors asking natural question describing full second cohomology groups orbits towards second cohomology groups nilpotent orbits complex simple lie algebras adjoint actions corresponding complex groups computed paper continue program studying second cohomology groups nilpotent orbits initiated compute second cohomology groups nilpotent orbits exceptional lie algebras rest nilpotent orbits exceptional lie algebras give upper bounds dimensions second cohomology groups see theorems particular computations yield second cohomologies vanish nilpotent orbits notation background section fix general notation mention basic result used paper specialized notation defined occur later center lie algebra denoted denote lie groups capital letters unless mentioned otherwise denote lie algebras corresponding lower case german letters sometimes convenience lie algebra lie group also denoted lie connected component lie group containing identity element denoted subgroup subset subgroup fixes point wise called centralizer denoted similarly lie subalgebra subset mathematics subject classification key words phrases nilpotent orbits exceptional lie algebras second cohomology chatterjee maity denote subalgebra consisting elements commute every element lie group lie algebra immediate coadjoint action trivial particular one obtains natural action denote space fixed points action real semisimple lie group element called nilpotent nilpotent operator nilpotent orbit orbit nilpotent element adjoint representation nilpotent element corresponding nilpotent orbit denoted lie algebra subset said immediate triple spanr isomorphic lie algebra recall theorem see theorem ensures nilpotent element real semisimple lie algebra exist facilitate computations need following result theorem let algebraic group defined let lie nilpotent element orbit adjoint action identity component lie let lie let maximal compact subgroup maximal compact subgroup containing particular dimr dimr theorem follows lemma description second cohomology groups homogeneous spaces generalizes theorem details proof theorem generalization theorem mentioned appear elsewhere second cohomology groups nilpotent orbits section study second cohomology nilpotent orbits noncomplex exceptional lie algebras results section depend results tables tables tables refer chapter generalities required section begin recalling parametrization nilpotent orbits parametrization nilpotent orbits exceptional lie algebras follow parametrization nilpotent orbits exceptional lie algebras given tables tables consider nilpotent orbits action int real exceptional lie algebra fix 
semisimple algebraic group defined lie denotes associated real semisimple lie group let associated complex semisimple lie group consisting easy see orbits action int orbits action thus nilpotent element set let cartan decomposition corresponding cartan involution let lie algebra identified complexification let respectively let connected subgroup lie nilpotent orbits lie algebras algebra recall different inner type equivalently rank rank inner type nilpotent orbits parametrized finite sequence integers length rank rank inner type either nilpotent orbits parametrized finite sequence integers length let nonzero nilpotent element another set called triple associated parametrization exceptional lie algebras inner type recall column tables parametrization nilpotent orbits exceptional lie algebra inner type let cartan subalgebra cartan subalgebra inner type cartan subalgebra set let root systems respectively let basis let negative highest root exists unique basis say let closed weyl chamber corresponding basis let rank either set case set clearly enumerate table let nonzero nilpotent element triple associated singleton set say element called characteristic orbit determines orbit uniquely consider map set nilpotent orbits set integer sequences length assigns sequence nilpotent orbits view theorem theorem gives bijection set nilpotent orbits set finite sequences form use parametrization dealing nilpotent orbits exceptional lie algebras inner type parametrization recall column tables parametrization nilpotent orbits either need piece notation henceforth lie algebra automorphism autc lie subalgebra consisting fixed points denoted let cartan subalgebra point difference notation denoted respectively let let involution defined keeps invariant subalgebra type cartan subalgebra let connected lie subgroup lie algebra let simple roots defined let nonzero nilpotent element let triple associated may assume finite sequence integers determine orbit uniquely see let let involution defined keeps invariant subalgebra type cartan subalgebra let connected lie subgroup lie algebra let simple roots defined let nonzero nilpotent element let may triple associated chatterjee maity assume follows finite sequence integers determine orbit uniquely see nilpotent orbits three types sake convenience writing proofs appear later part useful divide nilpotent orbits following three types let nonzero nilpotent element let beginning let maximal compact subgroup maximal compact subgroup containing nonzero nilpotent orbit said type type either type iii follows use next result repeatedly corollary let real simple exceptional lie algebra let nonzero nilpotent element orbit type dimr dimr orbit type dimr dimr orbit type iii dimr proof proof corollary follows immediately theorem let proofs results following subsections use description levi factor nilpotent element given last columns tables tables enables compute dimensions dimr easily also use column tables component groups nilpotent orbits nilpotent orbits real form recall conjugation one real form denote five nonzero nilpotent orbits see table note case theorem let parametrization nilpotent orbits let nonzero nilpotent element parametrization orbit given either dimr parametrization orbit given dimr proof column table dimr column table nilpotent orbits thus type refer column table orbits given orbits type iii dimr view corollary conclusions follow nilpotent orbits real forms recall conjugation two real forms denoted nilpotent orbits lie algebras nilpotent orbits nonzero nilpotent orbits 
see table vii note case theorem let parametrization nilpotent orbits let nonzero nilpotent element assume parametrization orbit given sequences dimr assume parametrization orbit given sequences dimr parametrization orbit either dimr given parametrizations orbits dimr proof lie algebra easily compute dimr last column table vii column table orbits dimr hence type orbits dimr hence type orbits dimr hence also type rest orbits given parametrizations type iii theorem follows corollary nilpotent orbits two nonzero nilpotent orbits see table viii theorem nilpotent elements dimr proof theorem follows trivially assume follow parametrization nilpotent orbits last column table viii conclude hence nonzero nilpotent orbits type iii using corollary dimr nilpotent orbits real forms recall conjugation four real forms denoted nilpotent orbits nonzero nilpotent orbits see table viii note case theorem let parametrization nilpotent orbits let nonzero nilpotent element parametrization orbit given either dimr assume parametrization orbit given sequences dimr given parametrizations orbits dimr proof lie algebra easily compute dimr last column table viii column table pointed paragraph error row table viii thus given parametrization follows chatterjee maity dimr orbits given thus orbits type orbits dimr hence orbits type rest nonzero nilpotent orbits given parametrizations type iii dimr results follow corollary nilpotent orbits nonzero nilpotent orbits see table note case theorem let parametrization nilpotent orbits let nonzero nilpotent element assume parametrization orbit given sequences dimr assume parametrization orbit given sequences dimr parametrization orbit given either dimr parametrization orbit given dimr given parametrizations orbits dimr proof lie algebra easily compute dimr last column table column table orbits given orbits type iii orbits given dimr thus orbits type orbits given dimr hence type orbits given dimr thus orbit type rest orbits given dimr thus orbits type conclusions follow corollary nilpotent orbits nonzero nilpotent orbits see table note case hence theorem let parametrization nilpotent orbits let nonzero nilpotent element parametrization orbit given dimr given parametrization orbits dimr proof lie algebra easily compute dimr last column table orbit type iii hence dimr orbits type dimr hence dimr nilpotent orbits two nonzero nilpotent orbits see table vii theorem nilpotent element dimr nilpotent orbits lie algebras proof theorem follows trivially assume follow parametrization nilpotent orbits given two nonzero nilpotent orbits type iii see last column table vii hence corollary conclude dimr nilpotent orbits real forms recall conjugation three real forms denoted nilpotent orbits nonzero nilpotent orbits see table note case theorem let parametrization nilpotent orbits let nonzero nilpotent element parametrization orbit given dimr assume parametrization orbit given sequences dimr assume parametrization orbit given sequences dimr assume parametrization orbit given sequences dimr assume parametrization orbit given sequences dimr parametrization orbit given either dimr given parametrizations orbits dimr proof lie algebra easily compute dimr last column table column table orbit given type dimr orbits given dimr hence also type orbits given dimr hence type orbits given dimr thus type orbits given dimr hence also type orbits given dimr hence type rest orbits given parametrizations type iii results follow corollary nilpotent orbits nonzero nilpotent orbits see table xii note case chatterjee maity 
theorem let parametrization nilpotent orbits let nonzero nilpotent element parametrization orbit given either dimr assume parametrization orbit given sequences dimr parametrization orbit given either dimr assume parametrization orbit given sequences dimr given parametrizations orbits dimr proof lie algebra easily compute dimr last column table xii column table orbit dimr hence orbits type orbit dimr hence orbits also type orbit dimr hence type orbit dimr hence also type rest orbits given parametrizations type iii conclusions follow corollary nilpotent orbits nonzero nilpotent orbits see table xiii case theorem let parametrization nilpotent orbits let nonzero nilpotent element assume parametrization orbit given sequences dimr given parametrization orbits dimr proof note parametrization nilpotent orbits table different table iii component group orbits see column table depend parametrization refer last column table iii orbits given type iii rest orbits dimr see last column table iii type results follow corollary nilpotent orbits real forms recall conjugation two real forms denoted nilpotent orbits nonzero nilpotent orbits see table xiv note case nilpotent orbits lie algebras theorem let parametrization nilpotent orbits let nonzero nilpotent element assume parametrization orbit given sequences dimr assume parametrization orbit given sequences dimr parametrization orbit given dimr assume parametrization orbit given sequences dimr assume parametrization orbit given sequences dimr given parametrizations orbits dimr proof lie algebra easily compute dimr last column table xiv column table orbits given dimr hence orbits type orbits given dimr hence orbits also type orbit given dimr hence type orbits given dimr thus orbits type orbits given dimr hence type rest orbits given parametrizations type iii conclusions follow corollary nilpotent orbits nonzero nilpotent orbits see table note case theorem let parametrization nilpotent orbits let nonzero nilpotent element assume parametrization orbit given sequences dimr parametrization orbit given either dimr given parametrizations orbits dimr proof lie algebra easily compute dimr last column table column table chatterjee maity orbits given dimr hence type orbits given dimr hence orbits type rest orbits given parametrizations type iii conclusions follow corollary remark make observations first cohomology groups nilpotent orbits real exceptional lie algebras begin giving convenient description first cohomology groups nilpotent orbits following theorem shown dimr proof result appear elsewhere consequences nilpotent orbit simple lie algebra dimr recall real exceptional lie algebra maximal compact subgroup int semisimple hence using follows dimr nilpotent orbit next assume note cases follow parametrizations nilpotent orbits given tables xiii see also able conclude dimr one orbit namely orbit parametrized case last column row table one thus applies obtain dimr parametrized following sequences orbits last column table xiii hence using analogous arguments apply references biswas chatterjee exactness form second cohomology nilpotent orbits internat math collingwood mcgovern nilpotent orbits semisimple lie algebras van nostrand reinhold mathematics series van nostrand reinhold new york djokovic classification nilpotent elements simple exceptional real lie algebras inner type description centralizers alg djokovic classification nilpotent elements simple exceptional real lie algebras description centralizers alg donald king component groups nilpotents exceptional simple 
real lie algebras communications algebra mcgovern adjoint representation adjoint action algebraic quotients torus actions cohomology adjoint representation adjoint action encyclopaedia math springer berlin institute mathematical sciences hbni campus tharamani chennai india address pralay institute mathematical sciences hbni campus tharamani chennai india address cmaity | 4 |
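The parametrizations used throughout the row above attach an sl2-triple to each nonzero nilpotent element and record a finite integer sequence (the characteristic). As background for readers, the standard Jacobson–Morozov setup — stated here in general terms, not quoted from the paper — is the following; in the complex case the labels lie in {0, 1, 2} (the weighted Dynkin diagram), and the real-form tables cited above refine this.

```latex
% Every nonzero nilpotent X in a semisimple Lie algebra lies in an sl_2-triple (X, H, Y):
\[
[H, X] = 2X, \qquad [H, Y] = -2Y, \qquad [X, Y] = H .
\]
% After conjugating H into the closed dominant Weyl chamber, the orbit of X is
% recorded by the string of labels
\[
\bigl(\alpha_1(H), \dots, \alpha_\ell(H)\bigr),
\]
% one label per simple root; this is the finite integer sequence the tables refer to.
```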
sep sheaf second spectrum mustafa alkan cekensecil alkan abstract let commutative ring identity specs denote set second submodules paper construct study sheaf modules denoted specs equipped dual zariski topology give characterization sections sheaf terms ideal transform module present interrelations algebraic properties sections obtain morphisms sheaves induced ring module homomorphisms mathematics subject classification keywords phrases second submodule dual zariski topology sheaf modules introduction throughout article rings commutative rings identity elements modules unital left modules unless otherwise stated denote ring given annihilator denoted annr ideal annihilator defined set clearly submodule recall sheaf rings modules topological space assignment ring module open subset together inclusion open subsets morphism rings modules subject following conditions idf iii open subset open cover element paper submitted communications algebra june referee process open subset open cover collection elements property element sheaf topological space refer sections open subset call maps restriction maps prime spectrum ring denoted spec consists prime ideals ideal sets spec ideal satisfy axioms closed sets topology spec called zariski topology commutative ring sheaf rings spec denoted ospec defined follows open subset spec define ospec set functions neighborhood contained elements see let proper submodule said prime annr prime submodule prime ideal case called submodule set prime submodules module called prime spectrum denoted spec submodule set spec sets submodule satisfy axioms closed sets topology spec called zariski topology several authors investigated prime spectrum zariski topology module last twenty years see example recently authors investigated sheaf structure prime spectrum module generalizes sheaf rings ospec topological space spec author obtained ospec open subset spec equipped zariski topology ospec sheaf modules spec authors defined studied sheaf modules denoted topological space spec equipped zariski topology two fact ospec generalizations sheaf rings ospec modules authors proved scheme spec scheme structure investigated recently dual theory prime submodules developed extensively studied many authors dual notion prime submodules first introduced yassemi submodule said second submodule provided second submodule annr prime ideal case called submodule recent years second submodules attracted attention various authors studied number papers see example set second submodules module called second spectrum denoted specs submodule rmodule define set second submodules contained clearly empty set specs note family submodules thus denotes collection subsets specs contains empty set specs closed arbitrary intersections general closed finite unions module called cotop module closed finite unions case called topology specs see note cotop module called tops information class cotop modules found let submodule define set specs annr annr lemma shown annr annr particular every ideal set satisfies axioms closed sets thus exists topology say specs family closed subsets topology called dual zariski topology see lemma dual zariski topology second spectrum modules related notions investigated authors recent years see paper define study sheaf structure second spectrum module let section construct sheaf denoted specs equipped dual zariski topology firstly find stalk sheaf see theorem theorem give characterization sections sheaf terms ideal transform module let noetherian ring faithful secondful prove free 
projective flat specs see theorem section deal scheme structure second spectrum module theorem prove scheme faithful secondful specs define two morphisms locally ringed spaces using ring module homomorphisms see theorem corollary sheaf structure second spectrum module throughout rest paper denote specs consider dual zariski topology unless otherwise stated every open subset set supps annr section construct sheaf investigate properties sheaf definition let every open subset define set elements open neighborhood exist elements every annr let open subsets clear restriction belongs therefore restriction map define zero map clear local nature definition sheaf restriction maps defined define map note homomorphism clearly recall set spec open spec family forms base zariski topology spec let define theorem shown set forms base dual zariski topology remark let annr let suppose annr implies contradiction conversely let annr proof results use fact without comment let sheaf modules rings topological space recall stalk defined direct limit modules rings open subsets containing via restriction maps see following theorem determine stalk sheaf point theorem let stalk sheaf isomorphic annr proof let submodule exists open neighborhood represents define let another neighborhood also represents exists open set since shows map claim isomorphism let since annr define annr equivalence class hence surjective let let open neeighborhood representative open neighborhood elements annr yth annr thus therefore yth consequently shows injective thus isomorphism ringed space pair consisting topological space sheaf rings ringed space called locally ringed space point stalk local ring corollary locally ringed space example consider prime number specs theorem zpz let map specs spec defined annr called natural map specs said secondful natural map surjective let zariski socle submodule denoted defined sum members defined lemma let noetherian ring let secondful exist annr ysi ysi annr proof since noetherian corollary thus exist open subsets tjj annr fix submodule also since noetherian annr rbjnj bjnj implies annr rbjf rbjnj rbjf rbjf ybjf ybjf rbjf hand implies annr annr theorem get annr annr since noetherian exists annr annr annr annr annr annr follows bdjf annr also ybjf bdjf annr conclude ybjf bjf bdjf ytj bdjf write annr nannr completes proof let ideal submodule defined said lemma let proof let exists annr consider supps exists second submodule annr annr annr contradiction thus annr exists annr since ptp hence theorem let noetherian ring faithful secondful let ker ker annr proof lemma ker suppose ker supp supp put rtp let claim annr suppose contrary annr since faithful secondful second submodule annr therefore supps implies contradicts fact thus annr annr annr since noetherian annr hence annr shows result follows lemma let rsi rsni proof let rsni annr rsni annr rsi rsni annr rsni annr rsi annr rsi annr since annr prime ideal rsi annr follows rsi annr annr rsi shows containment rsi hence rsni hence rsni rsni rsi rsi implies rsi rsi rsni rsi theorem let noetherian ring faithful secondful let open subset ker proof lemma ker exists let ker lemma exist annr ysi ysi annr since ker supps fix set ysi hence supp implies theorem exists annr let max let supps exists supps ysi let annr since shi annr dshi annr annr annr annr annr therefore conclude dmi dsh annr implies annr theorem let noetherian ring faithful secondful map annr cokernel proof let lemma exist ysi annr ysi annr fix ysi annr nannr means thus ker theorem hence exists annr 
define max sni follows sni sni lemma rsi rsi rsi rsi rsi rsi follows rsni rsi rsi ysi rsni since faithful secondful means annr annr hence annr get rsni since noetherian exists annr rsi follows annr rsi completes rsni proof let ideal recall ideal transform respect defined lim homr theorem let noetherian ring faithful secondful unique dannr lim homr annr diagram dannr commutes proof theorems kernel cokernel annr therefore unique dannr given diagram commutes corollary lemma isomorphism corollary iii corollary let noetherian ring faithful secondful rmodule following hold annr proof parts follow theorem corollary part immediate consequence part example consider runs distinct prime numbers faithful secondful let specs annr corollary corollary let principal ideal domain faithful secondful exists localization respect multiplicative set proof since principal ideal domain element annr theorem theorem dannr theorem let faithful secondful rmodule element module isomorphic localized module particular proof define map fam fam claim isomorphism first show injective let fan fbm every fan fbm annr thus exists let holds deduce supps spec since faithful secondful supps spec get implies therefore shows fan fbm thus injective let cover open subsets annr represented agii annr words ygi since open sets form form base dual zariski topology may assume yhi since yhi ygi dhi yhi ygi proposition implies rhi rgi thus hsi rgi hsi cgi cai cai cgi hsi see annr represented cai yki since yhi yhsi yki cover open cover finite subcover theorem suppose ykn kbii kjj represent annr yki ykj corollary yki ykj yki injectivity get kbii kjj nki hence nij nij let max nij kim kjm replacing kim still see annr represented yki kbii furthermore since ykn proposition yki dki pspec spec rki implies ppn rki rki rki pthere let implies ykj fore fat proposition let induces morphism sheaves isomorphism isomorphism sheaves proof let open subset fpp show open neighborhood exist elements every annr exists tap follows means annr every shows thus map defined clearly since following diagram commutative shows morphism sheaves suppose show injective fpp every let supps exists since injective every supps follows fpp tpp fpp every supps shows fpp injective every open subset show surjective let tpp exists supp show open neighborhood exist elements every annr tpp tpp exists tpp annr every exists follows tap since injective tap means tpp annr every shows thus surjective every open subset consequently isomorphism sheaves theorem let noetherian ring faithful secondful following hold free free projective projective rmodule flat flat proof write since free isomorphic direct sum copies say index set proposition theorem commutes direct sums corollary using fact theorem theorem get shows free since projective free submodule using proposition corollary get part free projective direct summand free since every flat direct limit projective lim projective directed set proposition lim theorem lim commutes direct limits corollary using fact theorem theorem get lim lim lim part projective hence flat since direct limit flat modules flat flat scheme structure second spectrum module recall affine scheme locally ringed space isomorphic spectrum ring scheme locally ringed space every point open neighborhood topological space together restricted sheaf affine scheme scheme called locally noetherian covered open affine subsets spec noetherian ring scheme called noetherian locally noetherian topological space said kolmogorov space every pair distinct points exists open 
neighbourhoods either following proposition gives conditions dual zariski topology proposition theorem following statements equivalent natural map specs spec injective specsp every spec specsp set submodules specs theorem let faithful secondful space scheme moreover noetherian noetherian scheme proof let since natural map specs spec continuous proposition restriction map also continuous since also bijection let closed subset hence annr closed subset therefore homeomorphism since sets form form base dual zariski topology written ygi since faithful secondful ygi ygi dgi spec rgi theorem ygi affine scheme implies scheme last statement note since noetherian rgi hence locally noetherian scheme theorem therefore noetherian scheme theorem let monomorphism induces morphism locally ringed spaces specs specs proof proposition map specs specs defined every specs continuous let open subset specs suppose exists open neighborhood annr since continuous open neighborhood claim annr suppose contrary annr since monomorphism annr annr annr contradiction therefore every open subset specs define map follows defined annr mentioned map clearly ring homomorphism show locally ringed morphism assume open subsets specs consider diagram annr annr therefore get thus diagram commutative shows morphism sheaves theorem map stalks clearly map local rings rannr rannr maps rannr implies specs specs morphism locally ringed spaces theorem let ring homomorphism let secondful specs annr annr induces morphism sheaves proof since annr annr induces homomorphism annr anns maps spec spec defined spec spec defined specs spec defined anns specs continuous also specs spec homeomorphism theorem therefore map specs specs defined continuous also specs get anns banns let open subset specs tannr suppose exists open neighborhood elements tannr aannr annr hence annr definition annr anns every anns annr thus define section define tannr tannr tannr assume consider diagram see tannr tannr tannr tannr tannr hence every open subset specs diagram commutative follows morphism sheaves corollary let ring homomorphism let smodule secondful specs annr annr induces morphism locally ringed spaces specs specs proof taking theorem get morphism sheaves defined proof theorem theorem map stalks clearly local homomorphism anns sanns map defined proof theorem implies specs specs locally ringed spaces acknowledgement authors would like thank scientific technological research council turkey tubitak funding work project second author supported scientific research project administration akdeniz university references abbasi scheme prime spectrum modules turkish math abuhlail dual zariski topology modules topology appl zariski topologies coprime second submodules algebra colloquium farshadifar dual notion prime submodules algebra farshadifar dual notion prime submodules mediterr farshadifar zariski topology second spectrum module algebra colloquium doi farshadifar dual notion prime radicals submodules math doi keyvani farshadifar second spectrum module bull malays math sci soc pourmortazavi keyvani strongly cotop modules journal algebra related topics brodmann sharp local cohomology algebraic introduction geometric applications cambridge univercity press alkan dual zariski topology modules book series aip conference proceedings alkan smith second modules noncommutative rings communications algebra alkan smith dual notion prime radical module journal algebra alkan graded second coprimary modules graded secondary representations bull malays math sci soc alkan second 
submodules contemporary mathematics alkan second spectrum second classical zariski topology module journal algebra applications doi http second spectrum modules spectral spaces bulletin malaysian mathematical sciences society doi farshadifar modules noetherian second spectrum journal algebra related topics vol hartshorne algebraic geometry new york inc prime submodules sheaf prime spectra modules communications algebra spectra modules comm algebra zariski topology prime spectrum module houston math module whose prime spectrum surjective natural map houston math modules noetherian spectrum comm algebra mccasland moore smith spectrum module commutative ring comm algebra tekir sheaf modules comm algebra yassemi dual notion prime submodules arch math brno trakya university faculty sciences department mathematics edirne turkey mustafa alkan akdeniz university faculty sciences department mathematics antalya turkey | 0 |
towards automatic abdominal segmentation dual energy using cascaded fully convolutional network shuqing michael holger sabrina matthias alexander marc hirohisa kensaku andreas oct university erlangen germany nagoya university nagoya japan german cancer research center dkfz heidelberg germany department radiology university hospital erlangen erlangen germany university hospital paracelsus medical university germany work submitted ieee possible publication copyright may transferred without notice version may longer accessible abstract automatic segmentation dual energy computed tomography dect data beneficial biomedical research clinical applications however challenging task recent advances deep learning showed feasibility use fully convolutional networks fcn dense predictions single energy computed tomography sect paper proposed fcn based method automatic segmentation dect work based cascaded fcn general model major organs trained large set sect data preprocessed dect data using linear weighting model dect data method evaluated using torso dect data acquired clinical system four abdominal organs liver spleen left right kidneys evaluated tested effect weight accuracy researched tests achieved average dice coefficient liver spleen right kidney left kidney respectively results show method feasible promising index dect deep learning segmentation introduction hounsfield unit scale value depends inherent tissue properties spectrum scanning administered contrast media sect image materials different elemental compositions represented identical values therefore sect challenges limited information beam hardening well tissue characterization dect investigated solve challenges sect dect two image data sets acquired two different spectra produced different energies simultaneously segmentation dect beneficial biomedical research clinical applications material decomposition enhanced reconstruction display computation bone mineral density aiming exploiting prior anatomical information gained segmentation provide improved dect imaging novel technique offers possibility present evermore complex information radiologists simultaneously bears potential improve clinical routine diagnosis automatic segmentation dect images challenging task due variance human abdomen complex variance among organs soft anatomy deformation well different values organ different spectra recent researches show power deep learning medical image processing solve dect segmentation problem use successful experience segmentation volumetric sect images using deep learning proposed method based cascaded fcn approach first stage used predict region interest roi target organs second stage learned predict final segmentation prior knowledge required proposed method results showed proposed method promising solve segmentation problem dect best knowledge first study segmentation dect images based fcns materials methods network architecture dect prediction dect described krauss mixed image display employed clinical practice diagnose using dect mixed image calculated linear weighting images values two spectra imix ilow ihigh weight dual energy composition imix denotes mixed image ilow ihigh images low high respectively preprocessed dect images following straightforwardly figure illustrates network architecture proposed method dect segmentation first mixed image calculated combining images low energy level high energy level using binary mask generated thresholding skin contour mixed image subsequently mixed image binary mask labeled image given network 
inputs network consists two stages first stage applied generate region interest roi order reduce search space second stage prediction result first stage taken mask second stage stage based standard fully convolutional network including analysis synthesis path used implementation two stages cascaded network developed roth based unet caffe deep learning library general model trained roth large set sect images including major organ labels model trained general model mixed dect images difference network output ground truth labels compared using softmax weight loss sect avg min max avg min max liver spleen table dice coefficients abbreviated standard deviation notice methods used different data set numbers directly comparable way training data validation data test data selected randomly ratio test used images validation images test images training results performance estimation nvidia geforce gtx memory used experiments similarity segmentation result ground truth measured dice metric using tool provided visceral first performance proposed method estimated using well fig shows one segmentation results summarizes dice coefficients segmentation results compares dect results sect results proposed method weight condition yielded average dice coefficient liver spleen right kidney left kidney respectively fig plots distributions dice coefficients different test scenarios showed high robustness proposed method experimental setup proposed method evaluated clinical torso dect images scanned department radiology university hospital erlangen images taken male female adult patients different clinically oriented indication justified radiologist ultravist given contrast agent body weight adapted volumes images acquired different tube voltage setting mas mas filter using siemens somatom force system stellar detector energy integrating detector volume consists slices pixels voxel dimensions four abdominal organs tested including liver spleen right left kidneys ground truth generated experts study weight aiming exploiting spectral information dect data since mixing results basically pseudo monochromatic images comparable single energy scans influence weight accuracy researched chosen study fig illustrates distributions dice coefficients different weight combination testing fold table lists average dice coefficient cases liver highest accuracy standard deviation dice coefficients around fairly robust segmentation right kidney usually accurate left kidney best dice values per organ per training set highlighted table test fig cascaded network architecture dect segmentation fig rendering one dect segmentation yellow liver blue spleen green right kidney red left kidney fig dice coefficients target organs alpha blending testing fold obtained highest accuracy liver right kidney test weight combination showed best segmentation spleen combination finest result left kidney generated better segmentation liver worked better spleen discussion conclusion fig dice coefficients target organs different testing folds proposed deep learning based method automatic abdominal segmentation dect evaluation results show feasibility proposed method compared results sect images reported roth method promising robust see table segmentation liver spleen less accurate sect third testing fold large deviation reason could image data taken patients different disease liver tumor spleen tumor disease type considered data selection training liver spleen table dice coefficients different alpha testing fold bold denotes best organ results per 
training set test inconsistent symptoms could impact accuracy study weight divided three groups different close low energy images average best contrast worked thus better general close optimal fusion images respect ratio snr therefore usually smallest deviation showed strongest adaptability comparison comparison showed cases identical training test conditions higher probability get best segmentation result expected mixed images generated matched training test conditions may highest similarity furthermore comparison case model image case model image showed using model trained images segmenting test images works better addition liver well segmented middle high ranges spleen segmented best kidneys work best matched training test conditions suggests optimal organ image segmentation weight mixed image calculation currently parameter preprocessing approach used augment data training future also net could modified two image inputs furthermore organs scans different patients could used acknowledgments work supported german research foundation dfg research grant references dushyant sahani technological approaches society computed body tomography magnetic resonance cynthia mccollough shuai leng lifeng joel fletcher principles technical approaches clinical applications radiology vol stefan kuchenbecker sebastian faby david simons michael knaup schlemmer michael lell marc kachelriess material decomposition dual energy computed tomography dect radiological society north america rsna sabrina dorn shuqing chen stefan sawall andreas maier michael lell marc organspecific single dual energy dect image reconstruction display analysis radiological society north america sabrina dorn shuqing chen stefan sawall david simons matthias may schlemmer andreas maier michael lell marc organspecific image reconstruction display spie accepted wesarg kirschner becker erdt kafchitsas khan assessment trabecular bone vertebrae methods information medicine vol marc aubreville miguel goncalves christian knipfer nicolai oetter tobias helmut neumann florian stelzle christopher bohr andreas maier carcinoma detection confocal laser endomicroscopy images robustness assessment corr vol holger roth hirohisa oda yuichiro hayashi masahiro oda natsuki shimizu michitaka fujiwara kazunari misawa kensaku mori hierarchical fully convolutional networks segmentation arxiv preprint holger roth ying yang masahiro oda hirohisa oda yuichiro hayashi natsuki shimizu takayuki kitasaka michitaka fujiwara kazunari misawa kensaku mori torso organ segmentation using fully convolutional networks jamit bernhard krauss bernhard schmidt thomas flohr dual energy clinical practice chapter dual source springer berlin heidelberg ahmed abdulkadir soeren lienkamp thomas brox olaf ronneberger learning dense volumetric segmentation sparse annotation medical image computing computer assisted intervention miccai yangqing jia evan shelhamer jeff donahue sergey karayev jonathan long ross girshick sergio guadarrama trevor darrell caffe convolutional architecture fast feature embedding arxiv preprint abdel aziz taha allan hanbury metrics evaluating medical image segmentation analysis selection tool bmc medical imaging vol august | 1 |
freeness rational cuspidal plane curves feb alexandru dimca gabriel sticlaru abstract bring additional support conjecture saying rational cuspidal plane curve either free nearly free conjecture curves even degree note prove many odd degrees particular show conjecture holds curves degree introduction plane rational cuspidal curve rational curve complex projective plane unibranch singularities study curves long fascinating history long standing conjectures coolidgenagata conjecture proved recently see conjectures one number singularities curve bounded see still open classification curves easy wealth examples even additional strong restrictions imposed see free divisors defined homological property jacobian ideals introduced local analytic setting saito extended projective hypersurfaces see references remarked many plane rational cuspidal curves free remaining examples plane rational cuspidal curves available classification lists turned satisfy weaker homological property chosen definition nearly free curve see subsequently number authors establish interesting properties class curves see view remark conjectured conjecture plane rational cuspidal curve either free nearly free conjecture proved theorem curves whose degree even well cases odd prime number note take closer look case odd let polynomial ring three variables complex coefficients reduced homogeneous polynomial degree let partial derivatives respect respectively consider graded relations involving derivatives namely mathematics subject classification primary secondary key words phrases rational cuspidal curves jacobian syzygy tjurina number free curves nearly free curves alexandru dimca gabriel sticlaru afx bfy cfz space homogeneous polynomials degree minimal degree jacobian relation polynomial integer mdr defined smallest integer mdr union lines passing one point hence cuspidal assume note mdr turns rational cuspidal curve mdr nearly free indeed follows proposition see note implication holds assume odd let pkmm prime decomposition assume also case conjecture settled corollary changing order necessary assume set assumptions notations main results note following theorem let rational cuspidal curve degree odd number mdr equality holds either free nearly free theorem let rational cuspidal curve degree odd number either free nearly free particular following hold prime number either free nearly free prime number either free nearly free unless mdr mdr remark note hence therefore cases covered results correspond curves odd degree mdr satisfies corollary rational cuspidal curve degree either free nearly free one following holds mdr unless one following situations mdr freeness rational cuspidal plane curves iii vii mdr mdr mdr mdr mdr mdr excluded situations results allow conclude proof main results based deep result walther see theorem bringing picture monodromy milnor fiber associated curve second ingredient results relations hodge filtration pole order filtration cohomology group see theorem proposition first author thanks aromath team inria excellent working conditions particular laurent stimulating discussions facts free nearly free curves recall basic notions free nearly free curves denote jacobian ideal homogeneous ideal spanned partial derivatives let corresponding graded ring called jacobian milnor algebra let denote saturation ideal respect maximal ideal recall relation local cohomology shown corollary graded satisfies lefschetz type property respect multiplication generic linear forms implies particular inequalities dim integer set 
dim free curve say nearly free curve see details equivalent definitions many examples note curve free graded free rank isomorphism graded positive integers free integers called exponents satisfy relations total tjurina number singular points denotes tjurina number alexandru dimca gabriel sticlaru isolated plane curve singularity see instance case nearly free curve also exponents time verify free nearly free curve one mdr hence mdr free curve mdr nearly free curve follows theorem gives similar inequality rational cuspidal curve examples rational cuspidal curves given also free nearly free show possible values mdr actually occur fixed degree follows set mdr curve free resp nearly free resp see remark equation curve given explicitly one use computer algebra software instance singular order compute integer mdr computer algebra software course decide whether curve free nearly free see instance corresponding code website http however large degrees much quicker determine integer mdr proofs first recall setting used proof theorem key results walther theorem yield inequality dim dim milnor fiber associated plane curve subscript indicates eigenspace monodromy action corresponding eigenvalue exp exp assume rational cuspidal curve degree denote complement note topological euler characteristic given since cyclic covering complement follows also dim dim dim see instance prop chapter cor remark since clearly get dim dim freeness rational cuspidal plane curves proof theorem suppose odd say order prove view inequality enough show dim exp corresponds equation tells equivalent dim using proposition see also remark follows dim dim dim denote terms second page spectral sequences used compute monodromy action milnor fiber see details note also weaker result theorem enough proof construction spectral sequences follows one identification partial derivative respect follows dim dim dim dim mdr follows hence curve either free nearly free implies mdr explained previous section proof theorem reader convenience divide proof two steps proposition notation dim integer integer proof let note apply inequality follows dim dim exp exp since eigenvalue order prime power follows zariski theorem see proposition using get dim claim follows fact graded module enjoys duality property dim dim integer see lefschetz type property graded module see completes proof proposition proposition rational cuspidal curve degree mdr either free nearly free alexandru dimca gabriel sticlaru proof use formulas get following equality dim dim dim curve known first relation multiple occurs degree see lemma follows dim dim using obvious fact direct computation shows dim since follows hence dim proposition claim follows using characterization free resp nearly free curves given remains prove last claim theorem assume hand hence since mdr theorem get either mdr conclude theorem mdr conclude using theorem proof corollary prove first claim consider minimal possible value odd neither prime power form prime first hence otherwise equalites follows minimal values obtained first case get second get prove second claim use remark references artal bartolo dimca fundamental groups plane curve complements ann univ ferrara artal bartolo gorrochategui luengo conjectures free nearly free divisors singularities computer algebra festschrift greuel occasion birthday springer buchweitz conca new free divisors old commut algebra freeness rational cuspidal plane curves decker greuel singular computer algebra system polynomial computations available http dimca singularities 
References

E. Artal Bartolo, A. Dimca, On fundamental groups of plane curve complements, Ann. Univ. Ferrara.
E. Artal Bartolo, L. Gorrochategui, I. Luengo, On some conjectures on free and nearly free divisors, in: Singularities and Computer Algebra, Festschrift for Gert-Martin Greuel on the occasion of his birthday, Springer.
R.-O. Buchweitz, A. Conca, New free divisors from old, J. Commut. Algebra.
W. Decker, G.-M. Greuel, Singular, a computer algebra system for polynomial computations, available online.
A. Dimca, Singularities and Topology of Hypersurfaces, Universitext, Springer, New York.
A. Dimca, Hyperplane Arrangements: An Introduction, Universitext, Springer.
A. Dimca, Freeness versus maximal global Tjurina number for plane curves, Math. Proc. Cambridge Phil. Soc.
A. Dimca, On rational cuspidal plane curves and the local cohomology of Jacobian rings.
A. Dimca, D. Popescu, Hilbert series and Lefschetz properties of dimension one almost complete intersections, Comm. Algebra.
A. Dimca, E. Sernesi, Syzygies and logarithmic vector fields along plane curves, Journal de l'Ecole polytechnique, Mathematiques.
A. Dimca, G. Sticlaru, On the exponents of free and nearly free projective plane curves, Rev. Mat. Complut.
A. Dimca, G. Sticlaru, A computational approach to Milnor fiber cohomology, Forum Math.
A. Dimca, G. Sticlaru, Free divisors and rational cuspidal plane curves, Math. Res. Lett.
A. Dimca, G. Sticlaru, Free and nearly free curves vs. rational cuspidal plane curves, Publ. RIMS Kyoto Univ.
A. Dimca, G. Sticlaru, Computing the monodromy and pole order filtration on Milnor fiber cohomology of plane curves, arXiv preprint.
G. Sticlaru, Computing Milnor monodromy for projective hypersurfaces.
J. Fernandez de Bobadilla, I. Luengo, A. Nemethi, On rational unicuspidal projective curves whose singularities have one Puiseux pair, in: Real and Complex Singularities (Sao Carlos), Trends in Mathematics, Birkhauser.
T. Fenske, Rational plane curves.
H. Flenner, M. Zaidenberg, On a class of rational plane curves, Manuscripta Math.
M. Koras, K. Palka, The Coolidge-Nagata conjecture, Duke Math. J.
S. Marchesi, Nearly free curves and arrangements: a vector bundle point of view.
T. K. Moe, Rational cuspidal curves, Master thesis, University of Oslo.
K. Palka, T. Pelka, Planar rational cuspidal curves, Proc. London Math. Soc.
J. Piontkowski, On the number of cusps of rational cuspidal plane curves, Experiment. Math.
K. Saito, Theory of logarithmic differential forms and logarithmic vector fields, J. Fac. Sci. Univ. Tokyo Sect. Math.
M. Saito, Bernstein-Sato polynomials of projective hypersurfaces with weighted homogeneous isolated singularities.
F. Sakai, K. Tono, Rational cuspidal curves of a given type with one or two cusps, Osaka J. Math.
E. Sernesi, The local cohomology of the Jacobian ring, Documenta Mathematica.
A. Simis, Homology of homogeneous divisors, Israel J. Math.
D. van Straten, T. Warmt, Gorenstein duality for one-dimensional almost complete intersections, with an application to real singularities, Math. Proc. Cambridge Phil. Soc.
U. Walther, The Jacobian module, the Milnor fibre, and the D-module generated by f^s, Invent. Math.

A. Dimca: Universite Cote d'Azur, CNRS, LJAD, INRIA, France.
G. Sticlaru: Faculty of Mathematics and Informatics, Ovidius University, Bd. Mamaia, Constanta, Romania.
Proper affine actions: a sufficient criterion

Ilia Smilga

Abstract. For a semisimple real Lie group G with an irreducible representation rho on a finite-dimensional real vector space V, we give a sufficient criterion on rho for the existence of a group of affine transformations of V whose linear part lies in the image of rho, which is free, nonabelian and acts properly discontinuously on the affine space corresponding to V. This new criterion is more general than the one given in the author's previous paper [Smi] (submitted, available online), insofar as it also deals with so-called swinging representations. We conjecture that this is actually a necessary and sufficient criterion, applicable to all representations.

Introduction

Background and motivation. The present paper is part of a larger effort to understand discrete groups of affine transformations, that is, subgroups of the affine group GL_n(R) x R^n, acting properly discontinuously on the affine space R^n. The case where such a group consists of isometries is well understood: a classical theorem of Bieberbach says that such a group always has an abelian subgroup of finite index.

We say that a group G acts properly discontinuously on a topological space X if for every compact subset K of X, the set of elements g of G with gK meeting K is finite. We define a crystallographic group to be a discrete group of affine transformations acting properly discontinuously and such that the quotient space is compact. Auslander conjectured that every crystallographic group is virtually solvable, that is, contains a solvable subgroup of finite index. Later, Milnor asked whether this statement is actually true for every affine group acting properly discontinuously, without the compactness assumption. The answer turned out to be negative: Margulis gave an example of a nonabelian free group of affine transformations, with suitable linear part, acting properly discontinuously. On the other hand, Fried and Goldman proved the Auslander conjecture in dimension 3 (the lower-dimensional cases being easy); more recently, Abels, Margulis and Soifer (AMS) proved it up to dimension 6. We refer to the survey literature for already known results. Margulis' breakthrough was soon followed by the construction of further counterexamples to the Milnor conjecture, all of them free groups. Recently, Danciger, Gueritaud and Kassel (DGK) found examples of affine groups acting properly discontinuously that are neither virtually solvable nor virtually free.

The author focuses on the case of free groups, asking the following question. Consider a semisimple real Lie group G and a representation rho of G on a finite-dimensional real vector space V; we may consider the corresponding affine group. Does this affine group contain a nonabelian free subgroup with this linear part, acting properly discontinuously? More precisely, for which values of (G, rho) is the answer positive?

Summary of previous work on this question. Margulis' original work gave a positive answer for the standard representation of SO(2,1) acting on R^3. Abels, Margulis and Soifer generalized this, giving a positive answer for the standard representations of a family of higher orthogonal groups, one for every suitable integer; they showed later that for the remaining values of the parameter the answer is negative. The author, in an earlier paper, gave a positive answer for every noncompact semisimple real Lie group acting on its own Lie algebra by the adjoint representation. More recently, he gave in [Smi] a simple algebraic criterion guaranteeing that the answer is positive; however, that criterion included an additional assumption on the representation (namely that it is non-swinging, see the definition there), which is in fact not necessary. This paper gives a better sufficient condition for the answer to be positive, a condition which also works for swinging representations. The author conjectures that the new condition is in fact necessary as well, which would give a complete classification of such counterexamples.

In order to state this condition, we need to introduce some classical notations.

Basic notations. For the remainder of the paper, we fix a semisimple real Lie group G and let g be its Lie algebra. We now introduce the classical objects related to G, as defined for instance in Knapp's book (though our terminology and notation may differ slightly). We choose a Cartan involution theta of g, with corresponding Cartan decomposition: we call k the space of fixed points of theta and q the space of fixed points of -theta, and we call K the maximal compact subgroup of G with Lie algebra k. We choose a Cartan subspace a compatible with theta, that is, a maximal abelian subalgebra among those contained in q, and we set A = exp a. We also choose a system Sigma^+ of positive restricted roots. Recall that a restricted root is a nonzero element alpha of the dual of a such that the restricted root space g_alpha (the set of x in g with [a, x] = alpha(a) x for all a) is nontrivial; the restricted roots form a root system Sigma, and a system of positive roots is a subset Sigma^+ of Sigma contained in a half-space and such that Sigma is the disjoint union of Sigma^+ and -Sigma^+. Note that, in contrast to the situation for ordinary roots, the root system Sigma need not be reduced: in addition to the usual types, it can also be of type BC_n. We call Pi the set of simple restricted roots, a^++ the open dominant Weyl chamber, and a^+ the corresponding closed dominant Weyl chamber.
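To make these conventions concrete, here is the standard split example G = SL_n(R), written out with the usual choices rather than taken from this paper:
\[
\theta(X) = -X^{\mathsf T}, \qquad \mathfrak{k} = \mathfrak{so}(n), \qquad \mathfrak{q} = \{\text{symmetric traceless matrices}\}, \qquad K = \mathrm{SO}(n),
\]
\[
\mathfrak{a} = \{\operatorname{diag}(a_1,\dots,a_n) : \textstyle\sum_i a_i = 0\}, \qquad
\mathfrak{g} = \mathfrak{a} \,\oplus \bigoplus_{i \neq j} \mathbb{R}\, E_{ij},
\]
where E_{ij} is the elementary matrix and the summand R E_{ij} is the restricted root space of the root alpha_{ij}(a) = a_i - a_j. A positive system is given by the alpha_{ij} with i < j, the simple restricted roots are the alpha_{i,i+1}, and the closed dominant Weyl chamber consists of the diagonal matrices with a_1 >= ... >= a_n. Here the centralizer of a in g reduces to a itself, in accordance with the remark on split groups below.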
We further denote by L the centralizer of a in G, and by l its Lie algebra, which is the centralizer of a in g (this is clear and well known; see the corresponding proposition in Knapp's book). We denote by n^+ (resp. n^-) the sum of the restricted root spaces g_alpha for alpha in Sigma^+ (resp. in -Sigma^+), and by N^+ = exp n^+ and N^- = exp n^- the corresponding Lie groups; by p^+ = l + n^+ and p^- = l + n^- the corresponding minimal parabolic subalgebras, and by P^+, P^- the corresponding minimal parabolic subgroups. We denote by W the restricted Weyl group and by w_0 its longest element, the unique element sending the positive system Sigma^+ to -Sigma^+; clearly w_0 is an involution. See the examples in the author's previous paper, where these definitions are worked out in some concrete cases such as PSL_n(R). Finally, for a representation rho of G on a real vector space V and a linear form lambda on a, we call the restricted weight space corresponding to lambda the space of vectors v in V with rho(a) v = lambda(a) v for all a in a, and we say that lambda is a restricted weight of the representation rho if the corresponding weight space is nonzero.

Remark. The reader unfamiliar with the theory of noncompact semisimple real Lie groups may focus on the case where G is split, that is, where the Cartan subspace a is actually a Cartan subalgebra (a maximal abelian subalgebra with no additional hypotheses). In that case restricted roots are just roots, restricted weights are just weights, the restricted Weyl group is the usual Weyl group, and moreover the algebra l reduces to a, so that the relevant centralizer in K is a discrete group. The case where G is not split does not actually require the full strength of the paper; in particular, the quasi-translations appearing later (see the corresponding section) then reduce to ordinary translations.

Statement of the main result. Let rho be a representation of G on a finite-dimensional real vector space V. Without loss of generality we may assume that G is connected and acts faithfully; we may then identify the abstract group G with the linear group rho(G). Let V_aff be the affine space corresponding to V. The group of affine transformations of V_aff whose linear part lies in rho(G) may be written as a semidirect product of rho(G) with V, where V stands for the group of translations. Here is the main result of this paper.

Main Theorem. Suppose that rho satisfies the following conditions:
(i) there exists a vector v in V fixed by every element of the group L introduced above;
(ii) for some representative of the longest element w_0 of the restricted Weyl group, the image of v under that representative differs from v.
Then there exists a subgroup of the affine group whose linear part is Zariski-dense in rho(G), which is free nonabelian, and which acts properly discontinuously on the affine space corresponding to V.

Note that the choice of representative of w_0 does not matter, precisely because the vector v is fixed by L.

Remark. It is sufficient to prove the theorem in the case where rho is irreducible. Indeed, we may decompose V into a direct sum of irreducible representations and observe that if the representation has a vector v satisfying the conditions, then at least one of its components must satisfy the conditions in the corresponding summand; and if a subgroup acts properly on the affine space of that summand, then its image by the canonical inclusion still acts properly on the affine space of V.

We shall in fact start by working with an arbitrary representation and gradually make stronger and stronger hypotheses, introducing each one at the point where it is needed to make the construction work, or at least where it is partially motivated. The list of places where new assumptions are introduced comprises a first assumption, which is a necessary condition, and further assumptions introduced later in the text.

Examples. The following items show some examples that fall within the scope of the theorem; the last one is the simplest example that this paper brings to light. (1) The standard representations of the orthogonal groups for which Margulis and Abels-Margulis-Soifer constructed proper affine actions satisfy these conditions (see the remarks and examples in [Smi] for details), so that their theorems are particular cases of the Main Theorem. (2) For every noncompact semisimple real Lie group, the adjoint representation satisfies these conditions (see the remarks and examples in [Smi] for details), so that the author's earlier theorem is also a particular case of the Main Theorem. (3) The simplest new example is a split group acting on a representation whose zero restricted weight space is spanned by a single vector of the canonical basis; an explicit representative of w_0 then acts nontrivially on that vector (see the corresponding example in [Smi] for details).

Remark. If G is compact, no representation can satisfy these conditions: indeed, in that case L is the whole group, so condition (ii) fails. That only noncompact groups are interesting is not surprising: a compact group acting on a vector space preserves a positive-definite quadratic form, so such a group falls within the scope of the Bieberbach theorem.

Strategy of the proof. The central pillar of this paper consists of the following proposition template, or schema. Let g and h be two elements of the appropriate group that are "regular", whose pair of "geometries" is in general position and whose "contraction strength" is sufficient; then the product gh is still "regular", its "attracting geometry" is close to that of g, its "repelling geometry" is close to that of h, its "contraction strength" is close to the product of the two contraction strengths, and its "asymptotic dynamics" is close to the sum of the two asymptotic dynamics. We prove three different versions of this statement, with slight variations (especially concerning the asymptotic dynamics) that involve three different sets of definitions for the concepts in scare quotes: a proximal version, a linear version and an affine version, the latter two each split into a main part and a part dealing with the asymptotic dynamics.
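Before the precise definitions, here is the rough single-matrix dictionary behind these notions, recorded only for orientation and using standard linear algebra: for an invertible matrix g, its "geometry" is carried by its eigenspaces, its "contraction strength" by ratios of singular values, and its "asymptotic dynamics" by the moduli of its eigenvalues, the word "asymptotic" referring to the Gelfand formula
\[
\lambda_1(g) \;=\; \lim_{n \to \infty} \big\| g^{\,n} \big\|^{1/n}
\]
for the spectral radius. The last point of the schema is then, in its crudest form, the statement that log lambda_1(gh) is close to log lambda_1(g) + log lambda_1(h) when g and h are strongly contracting and in general position.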
asymptotic dynamics last two versions definitions question also depend parameter fixed given second line table give first intuition roughly geometry element eigenvectors contraction strength singular values asymptotic dynamics moduli word asymptotic explained gelfand formula relationship three results proximal version used stepping stone prove two versions result new similar lemmas proved exact statement proved author previous papers smi briefly recall section linear version also used stepping stone prove affine version completely straightforward way fact main part linear version reformulated proposition involved proof additivity margulis invariant affine version asymptotic dynamics hand linear version asymptotic dynamics already necessary prove main part affine version slight generalization result benoist cover section affine case exactly true translation part also involved group lives parameter regularity attracting geometry proximal case linear case affine case real vector space none subset simple restr roots sec extreme generically symmetric sec proximality def def def egs ygx essentiallya def def def egu ygx essentiallya def def def contr strength def def def repelling possiblyb asymp dynamics log spectral radius def roughlyc jordan projection def roughlyc jordan projection margulis invariant def see last point remark proximal case least two reasonable definitions asymptotic dynamics another one would logarithm spectral gap defined place technically statement schema holds projection given subset coordinates remaining coordinates get inequality however proposition allows circumvent problem also often case see example remark issue arise table possible meanings notions appearing schema references corresponding definitions affine version key result paper defining affine concepts keeps busy long time sections proving results also takes fair amount work sections takes five pages prove main theorem proof lot common author previous papers smi ultimately builds upon idea introduced margulis seminal paper let present highlights distinguishes paper previous works main difference lies treatment dynamical spaces long worked representations see definition could associate every element regular appropriate sense decomposition three dynamical subspaces stable acts eigenvalues respectively modulus modulus modulus general case longer possible need enlarge neutral subspace approximately neutral subspace eigenvalues subspace rather weak sense far still grow exponentially group eventually construct decomposition becomes forces completely change point view define approximate dynamical spaces purely algebraic way focusing dynamics would conjugate exponential reference element weyl chamber reason actually call ideal dynamical spaces see definition imposing additional condition asymptotic contraction see definition ensure ideal dynamical spaces become indeed approximate dynamical spaces linear version schema never explicitly appeared author previous papers far able simply present linear theory particular case affine theory even proposition smi seems almost identical proposition current paper relies fact affine versions properties however becomes untenable case swinging representations relationship affine contraction strength linear contraction strength becomes less straightforward see section led develop linear theory think propositions might interest independently remainder paper must however point results completely new particular case due benoist case arbitrary easy generalization also relies tools 
developed benoist experts field might seem first sight general case actually mentioned explicitly see remark smi central argument paper namely proof proposition completely overhauled though still highly technical much cleaner symmetric organization proof better reflects separation ideas involves even forget fact works general setting definitely improvement compared proof given smi plan paper section give algebraic results introduce notations related metric properties estimates results section recall definitions proximal versions key properties proximal version schema also section define linear versions key properties prove linear version schema section clarifies expands results mentioned section smi ultimately definitions ideas due benoist section choose element used define affine versions key properties generalizes section smi section present preliminary constructions involving introduce elementary formalism expresses affine spaces terms vector spaces part material borrowed sections smi section define affine versions key properties generalizes sections smi introduces several new ideas section prove main part affine version schema generalizes section smi section contains key part proof prove asymptotic dynamics part affine version schema terms approximate additivity margulis invariants generalizes section smi section proof based idea considerably rewritten short section uses induction extend results two previous sections products arbitrary number elements straightforward generalization section smi section section contains proof main theorem almost straightforward generalization section smi section acknowledgments grateful advisor yves benoist introduced exciting fruitful subject gave invaluable help guidance initial work project would also like thank bruno floch interesting discussions particular helped gain insight weights representations preliminaries subsection element give formula eigenvalues singular values linear maps arbitrary representation nothing reminder subsection smi subsection present properties restricted weights real finitedimensional representation real semisimple lie group mostly reminder subsection smi subsection give basic results theory parabolic subgroups subalgebras necessarily minimal development subsection smi subsection give notation conventions related metric properties estimates mostly borrowed author earlier papers eigenvalues different representations subsection express eigenvalues singular values given element acting given representation exclusively terms structure abstract group respectively jordan decomposition cartan decomposition reminder subsection smi result certainly even proposition jordan decomposition let exists unique decomposition product conjugate element hyperbolic conjugate element elliptic conjugate element unipotent three maps commute proof given example theorem note however latter theorem uses definitions hyperbolic elliptic unipotent element applied case adjoint representation state theorem definitions used need apply proposition theorem book proposition cartan decomposition let exists decomposition product exp moreover element uniquely determined proof classical result see theorem definition every element define jordan projection sometimes also known lyapunov projection written unique element closed dominant weyl chamber hyperbolic part jordan decomposition given conjugate exp cartan projection written element cartan decomposition given talk singular values need introduce euclidean structure going use special one lemma let real 
representation space exists quadratic form restricted weight spaces pairwise want reserve plain notation default representation fixed beginning section use notation encompass representation representations defined proposition proof see lemma succinct proof lemma smi detailed proof example adjoint representation form given see killing form cartan involution recall singular values map euclidean space defined square roots eigenvalues adjoint map largest smallest singular values give respectively operator norm reciprocal operator norm proposition let representation vector space let list restricted weights repeated according multiplicities let list moduli eigenvalues given list singular values respect euclidean norm makes restricted weight spaces pairwise orthogonal norm exists lemma given proof also completely straightforward see proposition smi properties restricted weights subsection introduce properties restricted weights real finitedimensional representations proposition actually general result coxeter groups corresponding theory ordinary weights see example chapter mostly reminder subsection smi addition lemma let enumeration set simple restricted roots generating every set restricted root otherwise every index define fundamental restricted weight relationship every abuse notation often allow write things subset satisfies tacitly identifying set set indices simple restricted roots inside following proposition subset denote weyl subgroup type fundamental domain action kind prism whose base dominant weyl chamber proposition take let fix let following two conditions equivalent vector satisfies system linear inequalities vector also convex hull orbit proof particular case well known see proposition general case easily reduced particular case see smi proposition proposition every restricted weight every representation linear combination fundamental restricted weights integer coefficients proof particular case proposition correction concerning proof see also remark proposition irreducible representation unique restricted weight called highest restricted weight element form restricted weight remark contrast situation weights highest restricted weight always multiplicity representation uniquely determined highest restricted weight proof corresponding result ordinary weights see theorem result restricted weights easily deduced former see proposition smi proposition let irreducible representation let highest restricted weight let restricted root lattice shifted set restricted weights exactly intersection lattice convex hull orbit restricted weyl group proof follows corresponding result non restricted weights see theorem passing restriction case restricted weights one inclusions stated proposition proposition every index exists irreducible representation space whose highest restricted weight equal positive integer multiplicity proof follows general theorem also stated lemma example sln may take exterior power standard representation generally split restricted weight spaces correspond ordinary weight spaces hence dimension may simply take every precisely fundamental representations lemma fix index restricted weights form every proof last remark section proof see lemma smi lemma assume lie algebra simple restricted root system diagram let irreducible representation let set restricted weights assume representation restricted weight least one nonzero restricted weight note case restricted weight image compact group simple means either compact trivial proof first let show contains least one restricted 
root proposition every restricted weight may written sum restricted roots define level smallest integer decomposition possible let element whose level smallest possible exists since reduced consider decomposition every indeed otherwise would restricted root could combine together produce decomposition length hence implies conv conv proposition follows still restricted weight level since assumption smallest nonzero level necessarily indeed restricted root see problem simple lie algebra acts transitively set restricted roots length since restricted root system diagram restricted roots length hence orbit whole set conclude parabolic subgroups subalgebras subsection recall theory parabolic subalgebras subgroups begin defining well levi subalgebra subgroup given type corresponding subset subgroup far follow subsection smi giving propositions relating different objects particular generalized bruhat decomposition lemma parabolic subgroup subalgebra usually defined terms subset set simple restricted roots find convenient however use slightly different language every subset corresponds facet weyl chamber given intersecting walls corresponding elements may exemplify facet picking element belong subfacet conversely every define corresponding subset parabolic subalgebras subgroups type conveniently rewritten terms follows definition every define parabolic subalgebras type levi subalgebra type corresponding parabolic subgroups levi subgroup type following statement proposition proof first note combining propositions get intersection walls weyl chamber containing remains show clearly lie algebra groups equal hence identity components also equal combining propositions follows similarly conclusion follows object closely related parabolic subgroups see corollary bruhat decomposition parabolic subgroups stabilizer weyl group definition set remark group also closely related set indeed follows immediately simple restricted root belongs corresponding reflection belongs conversely chevalley lemma see proposition reflections actually generate group thus actually thing defined proposition substitute example help understand conventions taking extreme cases lies open weyl chamber minimal parabolic subgroup following result shows important definition let set elements say example set restricted roots positive restricted roots form negative restricted roots form lemma generalized bruhat decomposition let real representation space let set restricted weights every subset set stabg stabw stabg stabw proof assume case analogous first step show stabg contains indeed group stabilizes every restricted weight space indeed take every every clearly definition hence stabilizes statement follows exp take element let apply bruhat decomposition see theorem exists element restricted weyl group may write elements minimal parabolic subgroup representative statement proved follows stabilizes iff hand clear every choice representative matter since seen kernel stabilizes conclusion follows following particular case see example theorem corollary bruhat decomposition parabolic groups identities proof take adjoint representation take easy show stabw tion applying lemma first identity follows applying lemma subset defined analogously second identity follows metric properties estimates subsection mostly introduce notational conventions already introduced beginning subsection smi beginning subsection linear map acting euclidean space write kgk kxk operator norm consider euclidean space introduce projective space metric setting every arccos 
kxkkyk vectors representing respectively obviously value depend choice measures angle lines shortness sake usually simply write actual vectors vector subspace radius shall denote may think kind conical neighborhood consider metric space let two subsets shall denote ordinary minimum distance inf inf opposed hausdorff distance shall denote haus max sup sup finally introduce following notation let two positive quantities parameters whenever write mean constant depending nothing write subscripts means course absolute constant least depend local parameters consider global parameters choice euclidean norms fixed whenever write mean time following result often useful lemma let map induces continuous map proof see lemma proximal maps section give definitions proximal versions concepts table state proximal version schema namely proposition contains new results let euclidean space definition proximal version regularity let let eigenvalues repeated according multiplicity ordered nonincreasing modulus define spectral radius use could confused representation say proximal eigenvalue modulus multiplicity equivalently greater proximal spectral gap definition proximal version geometry every proximal map may decompose direct sum line called attracting space hyperplane called repelling space stable every eigenvalue definition proximal version consider line hyperplane transverse optimal canonizing map pair map satisfying minimizing quantity max define optimal canonizing map proximal map optimal canonizing map pair let say pair formed line hyperplane resp proximal map optimal canonizing map equivalent angle bounded constant depends take two proximal maps say pair every one four possible pairs definition proximal version contraction strength let proximal map define proximal contraction strength spectral radius equal notations previous definition say proposition every positive constant following property take pair proximal maps suppose proximal iii constant indexed number proposition scheme stick throughout paper similar results appeared literature long time see lemma proposition lemma proof see proposition proof proposition smi proof iii remark wanted literally follow schema taking asymptotic dynamics mean logarithm spectral radius would need add point replace iii iii however estimate used sequel nevertheless true follows considering action dual space applying estimate iii contrary strong enough applications need also true follows plugging identity valid proximal obtained iii setting iii linear maps section define linear versions properties table state linear version schema basic ideas several definitions come however use slightly different point view benoist relies time proximal versions properties table using representations proxy hand clearly separate linear versions proximal versions establish correspondences linear proximal versions theorems subsection give definitions linear properties parametrized vector equivalently subset see discussion beginning subsection apply linear case affine case vector set vector chosen section subsection examine happens properties replace inverse subsection relate linear properties proximal properties prove propositions together comprise linear version schema definitions let fix definition linear version regularity say element every root vanish vanish either benoist calls elements elements type set see definition example condition vacuous elements elements often called loxodromic informally understood partially fact technically probably say instead definition linear version 
geometry let element let jordan decomposition let element realizing conjugacy exp called canonizing map define attracting denoted ygx class flag variety ygx repelling denoted ygx class flag variety ygx data attracting repelling pair benoist defines attracting repelling flags last sentence remark depending context sometimes map relevant consider sometimes inverse indeed map brings canonical position inverse map defines geometry starting canonical position formulas involve need check definitions depend choice indeed unique multiplication right element centralizer proposition latter equal ljd since contained turn contained definition say pair transverse intersection seen cosets nonempty exists element particular pair flags giving geometry element transverse compare definition given proposition map gives canonical diffeomorphism subset formed transverse pairs shall tacitly identify set transverse pairs also known open proof group acts smoothly manifold orbit point precisely set transverse pairs stabilizer mean third line end map corresponds benoist injection introduced near end definition every flag variety every flag variety fix distance coming riemannian metric distances shall denoted remark note every flag variety isomorphic compact indeed iwasawa decomposition see theorem maximal compact subgroup acts transitively means two riemannian metrics given flag variety always turns interested properties true multiplicative constant choice riemannian metric influence anything sequel introduce notion basically quantitative measure transversality every transverse pair flags constant smaller constant gets strongly flags transverse notion appears bundled together contraction strength concept single element pair generally family elements definition fix continuous proper map typical example map given max faithful representation euclidean norm representation space specific choice really important see remark practice indeed find convenient use specific map form see important property family preimages indexed nested family compact sets whose union exhausts set definition linear version note last statement also holds projections preimages onto may call projections set min justify notice every intersection coset compact nonempty continuous map reaches minimum also map still proper say element optimal representative coset reaches minimum say transverse pair terms identify coset map defined say element ygx ygx say pair elements ygx four possible pairs remark let explain choice function really important indeed suppose replace function another function property simply need replace every constant depends example take group real rank closed weyl chamber two facets interior taking makes everything trivial assume let identify group isometries hyperbolic space case element loxodromic fixes exactly two points ideal boundary flag variety canonically identifies ideal boundary opposite flag variety attracting flag ygx resp repelling flag ygx element corresponds attracting resp repelling fixed point infinity loxodromic isometry two flags transverse corresponding points distinct one possible choice function follows choose reference point let starting reaching ideal boundary points corresponding respectively may let reciprocal angle case pair corresponding points ideal boundary separated angle least looking finish subsection introducing following notion definition linear version contraction strength define linear xcontraction strength element quantity exp min measures far cartan projection walls weyl chamber except 
containing impact group inverse section examine happens properties introduced pass element inverse though slightly technical proof straightforward start observing every map benoist involution opposition compare formulas section first identity immediately follows definitions jordan projection second identity also follows definitions using fact every element restricted weyl group particular representative maximal compact subgroup see formulas proposition element inverse every element diffeomorphisms given iii pair pair constant depends every element remark starting section consider situations symmetric simplifies formulas proof immediate consequence show map note easily follows definitions obviously smooth hence map also smooth clearly equal inverse show desired identity note canonizing map canonizing map pay attention convention versus iii map descends diffeomorphism makes diagram commutative embedding proposition denotes map double arrow meant suggest graphically clearly map preserves transversality pairs every map maps preimage compact subset particular contained preimage immediate consequence remark point since choice metrics flag varieties arbitrary lose generality assuming diffeomorphisms defined actually isometries point iii chosen sufficiently natural way example defined may actually let products maps subsection start proving results link linear properties proximal properties via representations proposition regularity lemma geometry proposition proposition contraction strength proofs adapted section smi prove linear version schema consists two parts proposition main part result appear previous paper proposition asymptotic dynamics part gives conclusion proposition smi uses linear versions properties hypotheses natural rather affine versions proposition element every map proximal every constant following property let map every two statements essentially correspond respective left halves smi proof also essentially part given definition remark note since euclidean norms vector space equivalent estimate makes sense even though specify norm course proof shall choose one convenient recall notation shortcut thought kind exceptional set practice often empty see remark proof proposition proposition list moduli eigenvalues precisely dimension list restricted weights listed multiplicity reordering list may suppose highest restricted weight may also suppose indeed recall equal restricted root otherwise proposition restricted weight image restricted weight element weyl group also restricted weight convex combination two restricted weights belongs restricted root lattice shifted take since hypothesis restricted weight multiplicity lemma follows restricted weight form every index finally since definition every index follows every words among moduli eigenvalues largest exp exp second largest exp exp follows spectral gap equal exp exp exp conclusion follows immediately let constant small enough satisfy constraints appear course proof let fix let map satisfying hypotheses clearly enough show exp indeed definition side smaller equal start following observation every continuous map max bounded compact set constant depends choice norm made soon let optimal representative coset giving geometry let get let choose space representation acts euclidean form restricted weight spaces pairwise possible lemma applied simply quotient two largest singular values proposition giving singular values element given representation calculation analogous previous point exp desired estimate follows combining proposition let 
pair elements every pair pair proximal maps constant depends straightforward generalization proposition smi proof relies following two lemmas analogous lemmas smi however two lemmas follow formulated generally stronger statements useful order prove proposition lemma stabg vini stabg proof every let set restricted weights representation begin noting every stabg vini stabw follows lemma indeed singleton clearly corollary gives stabg vini easy exercise show obviously remove elements left precisely thus conclusion follows note every complement stabilizer may follow line reasoning following lemma identify projective space set vector lines projective space set vector hyperplanes lemma exist two smooth embeddings following properties every map ygx transverse pair every map map defined beginning statement appears little text together converse also true proof define maps following way every set vini maps injective lemma obviously continuous construction hence prove smooth embeddings sufficient prove injective differential identity coset also follows lemma differentiating show property essentially use identities exp vini exp follow inequality ranking values different restricted weights evaluated simple observation eigenspace image eigenspace eigenvalue property obvious definitions proof proposition let set transverse pairs compact hand function max continuous lemma takes positive values set hence bounded constant depending whenever transverse pair pairs conclusion follows lemma proposition every positive constant following property take pair maps suppose still ygh ygx remark include conclusion contraction strength namely statement true shall see moment never directly useful proof let fix constant small enough satisfy constraints appear course proof let pair maps satisfying hypotheses proposition every maps proximal proposition every pair depends choose follows proposition every choose sufficiently small maps contracting sufficiently contracting apply proposition thus may apply proposition obtain every map proximal hence proposition element moreover every let endow product product distance given max inequalities may combined together yield ygx ygh map introduced lemma since smooth embedding compact manifold particular bilipschitz map onto image hence also ygh yields first line conclusion get second line simply note proposition replace pair satisfies hypotheses pair applying pair get remark conclusion follows proposition every positive constants following property take pair elements proof proof completely analogous proof proposition smi see figure picture explaining proposition corollary let also give palatable though slightly weaker reformulation corollary every exists positive constant following property pair satisfying hypotheses proposition conv conv denotes convex hull vector satisfying proof proof proof corollary smi note require vector lie closed dominant weyl chamber even though practice close vector remark first sight one might think putting together lemma recover result even something stronger however case fact recover particular case see remark smi explanation remark though shall use interesting particular case obviously statement simply reduces relationship jordan cartan projections proposition corollary also hold replace immediately implies compare remark involved similar construction proximal case figure picture represents situation acting chosen usual abuse notations choice random satisfies conditions required starting section also corresponds example smi group generated single 
reflection proposition states lies shaded trapezoid corollary states lies thick line segment case lies definition dominant open weyl chamber shaded sector choice reference jordan projection remainder paper fix irreducible representation finitedimensional real vector space moment may representation course paper shall gradually introduce several assumptions namely assumptions ensure satisfies hypotheses main theorem call set restricted weights call resp set restricted weights take positive resp negative zero nonnegative nonpositive value goal section study sets choose vector corresponding sets nice properties generalizes section smi fact sets property matters terms really care class respect following equivalence relation definition say type obviously implies spaces coincide well equivalence relation partitions finitely many equivalence classes remark every equivalence class obviously convex cone taken together equivalence classes decompose cell complex example adjoint representation two dominant vectors type one generic type corresponding see example smi details two examples pictures motivation study five sets allow introduce reference dynamical spaces see subsection subsections define two properties want satisfy subsection basically consists examples may safely skipped generically symmetric vectors start defining property generically symmetric generalizes generic symmetric vectors defined subsections smi one goals ensure set small possible first attempt say element generic remark indeed generic case happens soon avoids finite collection hyperplanes namely kernels nonzero restricted weights fact vector generic equivalence class open generic equivalence class connected component containing set generic vectors otherwise equivalence class always contained proper vector subspace fact generic actually provided following condition met assumption assume restricted weight equivalently dim remark proposition case highest restricted weight combination restricted roots lose generality assuming assumption necessary condition main theorem also assumption see hold indeed nonzero vector fixed particular fixed means belongs zero restricted weight space since want construct group certain properties must particular stable inverse identity encourages examine action definition say element symmetric invariant ideally would like reference vector symmetric generic unfortunately always possible indeed every restricted weight happens invariant necessarily vanishes every symmetric vector let set restricted weights definition say element generically symmetric symmetric terms element generically symmetric generic possible still symmetric extreme vectors besides also interested group type stabilizer type obviously contains goal subsection show every equivalence class actually choose way groups coincide generalizes subsection smi example example smi acting group corresponding generic group take generic respect also respect adjoint representation terms open weyl chamber group trivial however take element diagonal wall weyl chamber indeed definition call element extreme satisfies following property type remark equivalent definition possibly enlightening possible show vector extreme lies every wall weyl chamber contains least one vector type words vector extreme furthest possible corner equivalence class weyl chamber last statement never used paper left proof proposition every generically symmetric exists generically symmetric type extreme straightforward generalization proposition smi proof similar proof construct element 
type whole group stabilizer simply average action group set multiplication positive scalars change anything written sum rather average ease manipulation let check required properties let show still symmetric since belongs weyl group induces permutation hence swaps sets definition stabw stabw hence normalizes obviously map commutes everything also normalizes conclude definition every type since equivalence class convex cone sum also type particular still generically symmetric construction whenever type conversely fixes type type extreme remains show every obviously otherwise since extreme follows even type means exists restricted weight least one two inequalities strict particular since definition multiple follows reasoning applies vector type hence never vanishes equivalence class since hypothesis since equivalence class connected conclude note practice set extreme generically symmetric take limited number values see remark smi simplifying assumptions subsection discuss constructions paper may simplify particular cases results never reused paper provide proofs however helpful reader interested one particular representation likely fit least one cases outlined definition say representation limited every restricted weight multiple restricted root abundant every restricted root restricted weight awkward neither limited abundant example adjoint representation limited abundant always standard representation limited simple seems finitely many representations abundant additionally restricted root system diagram lemma says representations except trivial one abundant among simple groups seems awkward representations occur restricted root system type bcn least equal groups phenomenon common see example specific examples swinging representations occur nontrivial automorphism dynkin diagram among simple groups happens restricted root system type bad news groups representations finitely many swinging simplest example acting see example smi thus representation simple group swinging awkward time however may happen group simply take tensor product swinging representation awkward representation subsequent constructions rely choice generically symmetric vector extreme actually type matters say choice type particular cases remark limited representation one type generically symmetric vector ignore dependence abundant representation every generically symmetric vector lies particular terms need theory nonminimal parabolic subgroups developed section iii cases get type respect adjoint representation subset depend choice fact every nonawkward representation every generically symmetric extreme identity note one inclusions namely obvious holds every representation inclusion however may fail awkward representations value may depend choice see example representation clearly vector generically symmetric generic also symmetric constructions related definition remainder paper fix vector closed dominant weyl chamber generically symmetric extreme section introduce preliminary constructions associated vector subsection introduce reference dynamical spaces associated find stabilizers generalizes subsection smi subsection consists entirely definitions introduce elementary formalism expresses affine spaces terms vector spaces use define affine reference dynamical spaces basically repeat subsection smi subsection try understand regularity means affine context introduce two different notions regularity establish relationships previously used one two notions see definition smi subsection consists entirely examples contains 
counterexamples help understand statements previous subsection made stronger reference dynamical spaces definition define following subspaces reference expanding space reference contracting space reference neutral space reference noncontracting space reference nonexpanding space terms direct sum restricted weight spaces corresponding weights similarly spaces precisely dynamical spaces associated map exp acting defined section smi see example smi case adjoint representation acting standard representation remark note assumption zero restricted weight space always nontrivial let determine stabilizers subspaces proposition stabg stabg stabg stabg generalizes proposition smi proof somewhat involved proof lemma corollary enough show stabw stabw stabw stabw indeed since clearly obviously subset complement always stabilizer since stable hand since extreme stabw stabw definition sufficient show equivalently stabilizer consequence lemma lemma every element restricted weyl group stabilizes every set restricted weights particular sets stabilizer indeed every stabilizes every stabilizes satisfies particular proof let decompose sum three pieces respective cartan subspaces every simple summand restricted root system nonempty simplylaced diagram acts nontrivially restriction acts trivially also decompositions restricted root system every system positive restricted roots restricted weyl group prove two contrasting statements one hand claim indeed let decompose applying lemma every simple summand find since intersect generically symmetric set even intersect definition leave restricted root invariant since dominant deduce follows since stabilizes assumption thus fixes precisely means required hand claim equality holds since generically symmetric indeed take let decompose component vanishes definition definition component also vanishes combining conclude whenever element satisfies actually fixes every element take take element since may distinguish two cases either follows previous statement follows previous statement hand know thus extended affine space let vaff affine space whose underlying vector space definition extended affine space choose point vaff take origin call vector space formally generated point set extended affine space corresponding hope extended affine space group corresponding cartan space occur sufficiently different contexts reader confuse vaff affine hyperplane height space corresponding vector hyperplane vaff definition linear affine group affine map linear part translation vector defined vaff extended unique way linear map defined given matrix identify abstract group group corresponding affine group subgroup definition affine subspaces define extended affine subspace vector subspace contained correspondence extended affine subspaces affine subspaces vaff dimension one less extended affine subspace denoted denote space linear part corresponding affine space vaff definition translations abuse terminology elements normal subgroup still called translations even though shall see mostly endomorphisms formally transvections vector denote corresponding translation definition reference affine dynamical spaces give name vector extensions affine subspaces vaff parallel respectively passing origin set reference affine noncontracting space reference affine nonexpanding space reference affine neutral space obviously affine dynamical spaces sense smi corresponding map exp seen element identifying stabilizer decomposition gives hint introduce spaces see remark smi detailed explanation definition affine 
jordan projection finally extend notion jordan projection whole group setting conditions jordan projection subsection introduce two new notions regularity element given conditions jordan projection also determine relationships definition affine version regularity say element terms asymptotically contracting along course really properties abuse terminology say vector respectively asymptotically contracting along satisfies respectively remark rigorously talk definition depends choice however author feels dependence significant enough constantly mentioned way see particular point following example example representation limited three properties asymptotically contracting along become equivalent includes standard representation see example smi representation limited abundant time adjoint representation three properties actually reduce ordinary since generic means representation either limited abundant notion depend choice indeed uniquely determined since assumption generically symmetric general see example notion technically depend choice representation possible show set restricted weights centrally symmetric equivalently invariant author must however admit knows better proof fact complete enumeration representations follows whenever asymptotically contracting along asymptotic contraction along equivalent type sense definition condition considered smi general asymptotic contraction along stronger condition type simple counterexample take acting see smi example fancier counterexample see example new properties affine analogs however useful slightly different contexts purpose assuming element ensure affine ideal dynamical subspaces introduced section welldefined relatively weak property merely required avoid finite collection hyperplanes property use often particular one makes affine version schema work purpose assuming element asymptotically contracting along roughly ensure acts correct dynamics ideal dynamical subspaces details see discussion following definition latter motivation asymptotically contracting terminology see proposition property explicitly part hypotheses schema since implied contraction strength see proposition however often need extra assumption intermediate results actually often need assume asymptotically contracting latter property verified soon jordan projection points direction sense close enough precisely remark set asymptotically contracting along convex cone stable positive scaling sum open indeed intersection finitely many open vector intersection contains particular nonempty identity obviously still holds affine case intersection precisely equal asymptotically contracting along iii since open meets closed set also meets interior thus intersection nonempty open convex cone latter set might seem relevant point useful final proof paper note distinction set introduced set defined smi equivalence class however true case two sets coincide seen example relationship two notions proposition let asymptotically contracting along asymptotically contracting along remark mentioned example may remove assumption general true see example counterexample proof relies following lemma moment need weak version strong version useful later lemma every exists pair restricted weights weak version strong version remark long provides version lemma namely actually take general true see example proof let reintroduce decomposition proof lemma together notations went along let distinguish three cases case never occurs indeed symmetry fixes pointwise applying lemma simple summand containing find may 
simply take finally suppose definition proposition means stabilize terms exists restricted weight compare slightly weaker since restricted weight proposition number integer one hand hand hence positive proposition every element sequence restricted weight let last element sequence still lies construction taking get weak version proof works strong version except last case happen since generically symmetric particular last since restricted weight proposition form also restricted weight latter average former actually thus may take proof proposition let let restricted weights constructed lemma weak version suffices since asymptotically contracting along hence suppose asymptotically contracting along previous point already know remains check distinguish two cases since asymptotically contracting along min asymptotically contracting along since max hence counterexamples give three examples pathological behavior explain constructions paper simplified general three fairly reader wishes focus behavior probably safe skip subsection example two examples awkward neither limited abundant representations explanation provide counterexample version lemma first one development example smi take split group hence restricted root system coincides ordinary root system type notations simple restricted roots take representation highest weight restricted weights form multiplicity restricted weights form multiplicity zero restricted weight multiplicity total dimension particular generic obviously may take hand longer case note three different types generic elements extreme representatives type given cases removal excludes consideration case however deal wit possible choice take root system ordinary restricted type let call restricted roots first factor restricted roots second factor let order restricted roots lexicographical order factor gives unique ordering combined root system simple restricted roots take representation highest weight corresponds standard action set restricted weights mean set cardinal set contains negative simple restricted roots six different types generic elements using coordinate system extreme representatives three types given nice case need deal two possibilities take need deal take three types obtained exchanging example counterexample statement remark representation necessarily swinging choice elements asymptotic contraction along imply let reader check details take root system ordinary restricted type involution maps notations appendix let representation highest weight representation dimension distinct restricted weights take vector generically symmetric extreme respect fact vector asymptotically contracting along however restricted weight vanishes properties affine maps goal section define affine versions remaining properties generalizes subsections smi several constructions become considerably complex subsection define ideal dynamical spaces associated map data two affine version geometry remaining ones deduced two generalizes time subsections smi using different approach subsection study action map affine ideally neutral space turns generalizes subsection smi subsection one small difference see remark subsection introduce groupoid canonical identifications possible affine ideally neutral spaces use define translation part asymptotic dynamics margulis invariant almost straightforward generalization subsection smi subsection subsection define study affine versions contraction strength mostly follow second half subsection smi subsection subsection study relationships affine linear 
properties generalizes subsection smi subsection weaker complicated statement ideal dynamical spaces goal subsection define ideal dynamical spaces associated map definition start following particular case definition take element may write translation vector jordan decomposition see proposition linear part say canonical form exp exp canonical form define ideal dynamical spaces reference dynamical spaces introduced especially useful shown following property proposition map canonical form stabilizes eight reference dynamical spaces namely proof first note commutes definition hyperbolic part canonical form equal exp hence belongs centralizer ljd since ljd finally proposition follows group hence particular stabilizes spaces subgroup spaces subgroup space subgroup action affine map subspace coincides action linear part also stabilizes five subspaces finally know since canonical form contained hence hence also stabilizes general define ideal dynamical spaces inverse images reference dynamical spaces canonizing map conjugate canonical form however ensure translation part need regular respect precisely proposition let map exists map called canonizing map canonical form two maps differ element key point proof following lemma lemma canonical form linear map induces invertible linear map quotient space proof fact quotient map follows proposition indeed hence stabilizes subspace let show quotient map invertible eigenvalues restriction hyperbolic part exp subspace real positive since different since elliptic unipotent parts commute hyperbolic part eigenvalues modulus follows eigenvalues restriction subspace stable proposition different particular restriction invertible conclusion follows proof proposition let canonizing map exists proposition canonical form claim suitable choice map canonizing map indeed already know canonical form hand lemma need surjectivity quotient map may choose way finishes proof assume already canonical form enough show still canonical form element indeed let map let write translation part linear part proposition fact commutes implies ljd translation part lemma need injectivity quotient map definition affine version geometry map introduce following eight spaces called ideal dynamical spaces ideally expanding space associated ideally contracting space associated ideally neutral space associated ideally noncontracting space associated ideally nonexpanding space associated affine ideally noncontracting space associated affine ideally nonexpanding space associated affine ideally neutral space associated canonizing map idea behind definition suppose first representation nonswinging actually generic jordan projection sufficiently close actually type similarly whenever happens ideal dynamical spaces coincide actual dynamical spaces defined section smi generic longer true still want assume sufficiently close best ensure asymptotically contracting along case get moduli eigenvalues much larger moduli eigenvalues much smaller might differ somehow remain moduli eigenvalues far let finally check definition makes sense prove extra properties along way proposition definitions depend choice datum uniquely determines spaces iii datum uniquely determines spaces data uniquely determine eight ideal dynamical spaces spaces play crucial role affine analogs attracting repelling flags defined section see remark explanation proof immediate corollary proposition following relationships first two lines immediately imply iii points note eight groups contain point follows proposition point follows using 
identity let investigate action map affine ideally neutral space goal subsection prove almost translation proposition fix euclidean form satisfying conditions lemma representation definition call affine automorphism induced element let explain justify terminology proposition let set fixed points let complement element words affine automorphisms preserve directions act translation component may think kind screw displacement superscripts respectively stand translation affine proof need show every element fixes pointwise leaves invariant globally former true definition latter let prove separately elements elements hypothesis elements preserve form since leave invariant space also leave invariant complement let introduce notation symbol intended represent idea avoiding zero space decomposes orthogonal sum since obviously similarly orthogonal sum clearly sum restricted weight spaces invariant moreover every element actually fixes every element particular leaves invariant subspace conclusion follows remark note contrast case proposition smi longer act isometries space comprises possibly nontrivial space acts nontrivially phenomenon reason reasoning used smi prove additivity margulis invariants could reused without major restructuring claim map acts affine ideally neutral space quasitranslations proposition let map let canonizing map restriction conjugate let actually formulate even general result another application next subsection lemma map stabilizing acts quasitranslation proof begin showing element acts way element recall definition thus want show every restricted root restricted weight sufficient show case sum longer restricted weight otherwise would elements hence would fixed since generically symmetric would mean also fixed impossible conclude fashion proof lemma smi passing two lie algebras first identity components whole groups using proposition stabilizers proof proposition proposition follows immediately taking indeed proposition canonized map stabilizes see example smi specific examples nonswinging case would like treat bit like translations need least nontrivial space impose following condition assumption representation dim precisely condition main theorem canonical identifications margulis invariant main goal subsection associate every map vector called margulis invariant see definition two propositions lemma lead definition important well often used subsequently proposition shown geometry map namely position ideal dynamical spaces entirely determined pair spaces fact pairs spaces play crucial role let begin definition connection observation made become clear proposition definition define parabolic space subspace image either matter one since symmetric element define affine parabolic space subspace image element equivalently subspace affine parabolic space iff contained linear part parabolic space say two parabolic spaces two affine parabolic spaces transverse intersection lowest possible dimension equivalently sum whole space see example smi proposition pair parabolic spaces resp affine parabolic spaces verse may sent resp element resp particular map pair definition transverse pair affine parabolic spaces proposition well proof similar claim proposition smi proof let prove linear version affine version follows immediately let pair parabolic spaces definition may write let apply bruhat decomposition map may write belong minimal parabolic subgroup element restricted weyl group technically representative thereof let stabilizes since thus transverse transverse lemma implies stabilizes 
hence stabilizes ment means thus required conversely equalities hold obviously transverse remark follows proposition set parabolic spaces identified flag variety identifying every parabolic space coset every element identification matches ideally expanding space attracting flag composing bijection defined proposition may also identify set opposite flag variety matches ideally contracting space repelling flag using bruhat decomposition see corollary may show two parabolic spaces transverse corresponding pair cosets transverse sense definition similarly follows principle identify set affine parabolic spaces affine flag variety would however require cumbersome notations linear case natural order make things representationindependent affine case however privileged representation anyway namely decided translating everything abstract language worth trouble reader may exercise wish consider transverse pair affine parabolic spaces intersection may seen sort abstract affine neutral space introduce family canonical identifications spaces identifications however inherent ambiguity defined proposition let pair transverse affine parabolic spaces map gives restriction identification intersection unique mean pair generalizes corollary proposition smi remark note obtained another way intersection two affine parabolic spaces identification general longer even could also element weyl group involved proof existence map follows proposition let two maps let map construction stabilizes follows lemma restriction let explain call identifications canonical following lemma seemingly technical actually crucial tells identifications defined proposition commute projections naturally arise change one parabolic subspaces pair fixing lemma take affine parabolic space let two affine parabolic spaces transverse let resp element sends pair resp two maps exist proposition let inverse image map image unique proposition let projection parallel map defined commutative diagram space sense abstract linear expanding space corresponding abstract affine noncontracting space precisely map projection statement generalizes lemma lemma smi proof proof exactly proof lemma smi let map already know acts affine ideally neutral space canonical identifications introduced allow compare actions different elements respective affine ideally neutral spaces acting space however catch since identifications canonical lose information happens translation part along remains formally make following definition let denote projection onto parallel definition let map take point affine space vaff map canonizes define margulis invariant vector call margulis invariant vector depend choice indeed composing change image see proposition detailed proof claim informally margulis invariant gives translation part asymptotic dynamics element linear part given linear case plays central role paper quantitative properties subsection define study affine versions contraction strength less follow second half subsection smi subsection endow extended affine space euclidean norm written simply given norm defined lemma subspaces pairwise orthogonal definition affine version take pair affine parabolic spaces optimal canonizing map pair map satisfying minimizing quantity max proposition compactness argument map exists iff transverse define optimal canonizing map map optimal canonizing map pair let say pair affine parabolic spaces resp map canonizing map take two maps say pair degenerate every one four possible pairs agi agj point definition lot calculations treat pair spaces 
perpendicular err multiplicative constant depending remark set transverse pairs extended affine spaces characterized two open conditions course transversality spaces also requirement space contained mean degeneracy failure one two conditions thus property pair actually encompasses two properties first implies spaces transverse quantitative way precisely means continuous function would vanish spaces transversely bounded example function smallest non identically vanishing principal angles defined proof lemma second implies close space sense purely affine terms means affine spaces vaff vaff contain points far origin conditions necessary appeared previous literature however initially treated separately idea encompassing concept seems first introduced author previous paper definition affine version contraction strength let map say along kxk kyk note definition spaces always dimensions respectively hence nonzero define affine contraction strength along smallest number along words notion closely related notion asymptotic contraction proposition map along also asymptotically contracting along map asymptotically contracting along lim proof let claim asymptotically contracting along satisfies inequality denotes spectral radius indeed porism proposition obtain spectrum spectrum accounts affine extension assumption eigenvalue already contained spectrum linear part may actually ignore part claim follows conclusion follows facts every linear map gives log log log also known gelfand formula gives comparison affine linear properties goal subsection prove proposition element relates quantitative properties introduced corresponding properties linear part given lemma case adjoint representation lemma smi straightforward generalization case general case however points generalize obvious way statement iii holds weaker form need consider linear contraction strength affine contraction strength basically forced develop purely linear theory section systematic way rather presenting particular case affine theory previous papers order able compare affine contraction strength linear one begin expressing former terms cartan projection following generalization lemma smi formulated slightly general way original statement essentially inequality without absolute value proposition every constant following property let map log min max recall set restricted weights take nonnegative values complement make sense estimate keep mind quantity log typically positive also note minimum term certainly nonpositive proof first let optimal canonizing map let easy see difference bounded constant depends hence lose generality replacing clearly enough show canonical form equality min max log straightforward generalization smi proved exactly fashion mutatis mutandis stepping stone first need extension point lemma lemma smi giving bound affine contraction strength linear part seen element usual embedding lemma map proof similar one given proof max max justify last equality note canonizing map space contains subspace nonzero assumption clearly eigenvalues restricted subspace modulus hence may state prove appropriate generalization lemma lemma smi linking three points purely linear properties affine properties proposition every positive constant following property let pair elements case pair sense definition constant depends iii moreover assume actually proof remark lose generality specifying particular form map introduced definition let define way consistent definition affine case namely using particular case formula max working representation 
euclidean norm introduced beginning section precisely restriction euclidean norm introduced lemma clearly canonizing map canonizing map since obviously conclusion follows first note lemma apply proposition passing exponential exp max exp min key point lemma every may find whose difference immediately follows exp max exp min iii proceed two steps establish first straightforward relies strong version lemma fact place strong version needed definition let optimal canonizing map since images respectively equal orthogonal follows max clearly hand since follows methods similar proof proposition essentially proposition may rewrite exp max note combining proposition lemma exp max exp min take equal inverse implicit constant inequality may assume particular since obviously remains true take restricted weights introduced lemma implies exp exp max max combining two estimates conclusion follows products maps goal section prove proposition main part affine version schema general strategy section smi section reduce problem proposition considering action suitable exterior power rather spaces section start proving following result whose role smi played proposition even though new version involves slightly different inequalities proof quite similar proposition every positive constant following property take pair maps asymptotically contracting along proof let let pair maps constant specified later first thing note since property depends linear part lemma proposition reduce problem case proposition gives max min log taking small enough may assume max course similar estimate holds max let vector defined corollary deduce every pair restricted weights max adding together three estimates find every pair hand says conv take proposition follows stabilizes hence still thus difference takes positive values every point orbit hence also takes positive values every point convex hull particular conclude indeed asymptotically contracting along establish correspondence affine proximal properties introduce integers dim dim dim dim dim every may define exterior power euclidean structure induces canonical way euclidean structure lemma let map asymptotically contracting along prox imal attracting resp repelling space depends nothing resp every whenever pair maps also asymptotically contracting along pair proximal maps iii every constant following property every map also asymptotically contracting along addition recall stand respectively proximal affine contraction strengths see definitions two subspaces similar lemma lemma smi additional complication needing distinguish asymptotic contraction proofs points iii however still remain similar corresponding proofs chose reproduce particular order correct small mistake erroneously claimed could take stemmed confusion canonized version proof let map asymptotically contracting along proposition already noted proof proposition follows every eigenvalue smaller modulus every eigenvalue let eigenvalues acting counted multiplicity ordered nondecreasing modulus hand know eigenvalues counted multiplicity exactly products form two largest modulus follows proximal expression follows immediately considering basis trigonalizes take pair let optimal canonizing map pair agi agi proposition vgj euclidean structure chosen orthogonal hence orthogonal hyperplane previous point follows canonizing map pair similarly conclusion follows iii let let map also asymptotically contracting along let optimal canonizing map let clear sufficient prove statement let resp singular values restricted resp since 
spaces stable orthogonal get singular values whole space note however unless list may fail sorted nondecreasing order hand know singular values products distinct singular values since orthogonal may analyze singular values separately subspace know singular value corresponding equal deduce equal maximum remaining singular values particular larger equal hand largest eigenvalue det eigenvalues equivalently sorted nondecreasing modulus second equality holds hence asymptotically contracting along eigenvalues sorted correct order follows first estimate looking take small enough may suppose means singular values indeed sorted correct order hence actually largest singular value inequality becomes equality second estimate follows see lemma also need following technical lemma generalizes lemma lemma smi lemma constant following property let two affine parabolic spaces haus form pair course constant arbitrary could replace number larger proof proof exactly proof lemma mutatis mutandis proposition every positive constant following property take pair maps suppose still vgh iii points iii generalization proposition proposition smi proof similar together give main part affine version schema point generalizes corollary corollary smi statement stronger involves linear contraction strength longer obtained corollary affine version instead must proved independently using proposition remark note point involves point simply written instead fact linear case distinction becomes irrelevant linear contraction strength proposition however affine contraction strengths different proof proposition let fix constant small enough satisfy constraints appear course proof let pair maps satisfying hypotheses first note assume proposition ensures asymptotically contracting along let prove proposition follows hence proposition implies pair satisfies hypotheses proposition hence remember remark attracting resp repelling flag map carries information linear ideally expanding resp contracting space even precisely may deduce proposition orbital map orbit grassmanian descends smooth embedding flag variety since flag variety compact embedding particular desired inequalities follow take proposition tells asymptotically contracting along may also apply proposition pair hence also asymptotically contracting along proposition deduce remainder proof works exactly like proof proposition proposition smi namely applying proposition maps let check satisfy required hypotheses lemma proximal lemma pair choose follows lemma iii choose sufficiently small sufficiently contracting apply proposition thus may apply proposition remains deduce conclusions conclusions proposition proposition using lemma iii get agh shows first line proposition applying proposition instead get way second line proposition let optimal canonizing map pair hypothesis take sufficiently small two inequalities shown together lemma allow find map follows composition map last inequality namely proposition iii deduced proposition using lemma iii additivity margulis invariant goal section prove propositions explain margulis invariant behaves group operations respectively inverse composition two propositions key ingredients proof main theorem proposition generalization proposition proof similar fairly straightforward compare also results section proposition generalization proposition gives asymptotic dynamics part affine version schema proof takes majority section estimate idea introduce two vectors mgh mgh definition mgh mgh first find intermediate vector called prove close lemma 
prove mgh close intermediate vector lemma proposition every map remark note definition element space definition set fixed points straightforward deduce invariant hence induces linear involution depend choice representative proof first note canonizing map canonizing map indeed assume exp canonical form since definition exchanges dominant weyl chamber negative since symmetric action preserves remains show precisely representative commutes use fact group defined quotient also equal quotient see formulas hence recall orthogonal decomposition let show three components invariant obvious since invariant follows remark obviously case definition group acts trivially construction acts orthogonal transformations indeed euclidean structure chosen accordance lemma hence orthogonal complement also invariant desired formula immediately follows definition margulis invariant proposition every positive constants following property let pair maps along basic idea proof smi however proof lemma smi key point since factors introduced construct diagram namely ggh linear parts automatically bounded norm general case last deduction fails see remark issue forced completely reorganize proof new proof though still technical elegant symmetric instance got rid lopsided diagram confusing series also structured cleanly separates algebraic part comprising lemma involving combination canonical projections bounded lemma analytic part comprising lemma corollary lemma involving projections spaces close angle controlled contraction strengths hence introduce small error proof let choose constant small enough satisfy constraints appear course proof remainder section fix pair maps along following remark used throughout proof remark may suppose pairs agh ahg indeed recall proposition similar inequalities interchanged hand hypothe sis choose sufficiently small four statements follow lemma proof proposition continued take proposition ensures estimate decompose induced map agh product several maps begin decomposing product factors commutative diagram indeed since conjugate ahg ahg agh next factor map agh map better known commutative diagram projection onto parallel commutes invariant decompose every diagonal arrow last diagram two factors two maps introduce notation call resp projection onto resp ahg still parallel justify definition must check similarly ahg supplementary indeed remark transverse hence proposition proposition supplementary thus commutative diagrams finally new comparison smi would like replace projections define projection onto parallel obviously induces bijection agh define projection onto parallel vhg obviously induces bijection ahg ahg reason lemma actually commute canonical identifications see remark details make decompositions last three steps repeated instead way adapt second step straightforward third step factor agh agh fourth step project respectively along along vgh combining four decompositions get lower half diagram left expansion leave drawing full diagram especially brave readers let interpret maps endomorphisms choose optimal canonizing maps respectively pair pair ahg allows define ggh hgh maps make whole diagram commutative let define mgh ggh mgh hgh ggh hgh diagram vaff affine space parallel passing origin since conjugate elements defined obvious way whose restrictions ggh hgh stabilize spaces thus quasiand lemma translations follows values mgh mgh depend choice compare definition margulis invariant definition immediately follows ggh hgh mgh mgh thus enough show kmgh kmgh note vectors mgh mgh elements 
maps ggh hgh extended affine isometries acting whole subspace shall prove estimate proof estimate analogous proceed two steps first introduce vector show lemma differs mgh bounded constant second show lemma close two lemmas together imply conclusion remark contrast actual margulis invariants values mgh mgh depend choice canonizing maps choosing canonizing maps would force subtract constant former add latter remark fourth decomposition step namely makes whole proof much cleaner smi map called almost quasitranslation lemma smi current proof map decomposes two pieces much easier deal bounded like thus falls algebraic part almost identity lemma thus stays analytic part lemma kmg proof lemma maps let show norms bounded constant depends obviously implies conclusion let start definition projection onto parallel actually orthogonal projection hence norm bounded remark similarly bounded deal note inverse deduce conclude previously similarly conclude way lemma estimates agh ahg hold soon respective sides smaller constant depending corollary iii light remark corollary seen simpler version lemma smi indeed old corresponds new old corresponds alone proof points immediately follow lemma combined proposition provided take small enough points iii simply apply lemma pair easy check pair still satisfies hypotheses proposition get case second inequality follows proposition first inequality application lemma licit provided small enough since proposition propagates required upper bound proof lemma might seem slightly technical easy intuition behind essentially idea jump back forth two spaces almost coincide going times directions whose angle two spaces shallow end far starting point proof lemma remark know conjugating everything map thus sufficient show every kxk mean inverse bijection let first estimate quantity begin let push everything forward map writing kyk sin remark know hence may pull everything back conclude kxk agh agh kxk let estimate quantity introduce notation define unique linear automorphism satisfying inequalities easy deduce norms bounded constant depends obviously hence tan agh since bounded since side small enough may assume smaller fixed constant say every obviously tan follows agh since bounded deduce kzk remains estimate kzk terms kxk kzk kxk kxk agh kxk taking agh small enough may assume kzk conclude agh kxk adding together desired inequality follows proof completely analogous lemma kmgh proof somewhat similar proof second half lemma smi proof recall mgh ggh element affine space let origin vaff intersection line affine space vaff terms definition element every extended affine map vaff let take may write mgh iii since ggh middle term definition equal mgh iii iii kkok corollary remains estimate set let calculate norm vector kyk justify third line remember seen proof lemma four maps bounded kyk corollary proposition iii joining together conclusion follows margulis invariants words already studied contraction strengths proposition margulis invariants proposition behave take product two mutually sufficiently contracting maps goal section generalize results words arbitrary length given set generators straightforward generalization section smi section definition take generators consider word length generators inverses every say reduced every say cyclically reduced reduced also satisfies proposition every positive constant following property take family maps satisfying following hypotheses every pair taken among maps except course form every take nonempty cyclically reduced word every constant 
introduced proposition proof proceeds induction proposition proposition providing induction step proof proof exactly proof proposition mutatis mutandis let present one small improvement proof relies following lemma lemma every cyclically reduced word decomposed product two cyclically reduced subwords nonempty proved contradiction somewhat obscure way let reformulate proof constructive hopefully comprehensible proof may take smallest positive index index always exists equal since word reduced first subword actually form word form automatically cyclically reduced soon reduced cyclically reduced construction construction group prove main theorem reasoning similar section almost identical section smi main difference substitution instead particular requires invoke proposition equivalent smi final proof also since developed purely linear theory systematic way section relationship linear properties affine properties becomes clearer particular second bullet point final proof let recall outline proof begin showing lemma take group generated family sufficiently contracting maps suitable margulis invariants satisfies conclusions main theorem except exhibit group also thus prove main theorem idea ensure margulis invariants elements group lie almost obviously maps every element opposite proposition makes impossible exclude case assumption representation action trivial precisely condition main theorem precisely set vectors satisfy say also satisfy see example smi examples representations satisfy condition thanks assumption choose nonzero vector fixed point possible since involution also choose vector collinear lemma take family satisfying hypotheses proposition also additional condition every maps generate free group acting properly discontinuously affine space vaff proof proof exactly proof lemma mutatis mutandis constant denoted earlier paper corresponds call respectively orthogonal projection parallel becomes orthogonal projection parallel may finally prove main theorem follow strategy proof main theorems smi additional tweaks proof main theorem first note assumption guarantees satisfies hypotheses main theorem two assumptions free assumption weaker condition assumption even weaker condition follows find positive constant family maps satisfy conditions whose linear parts generate subgroup apply lemma proceed several stages begin using result benoist apply lemma defined remark point iii remark assures indeed nonempty open convex cone gives family maps shall see elements identifying stabilizer every every asymptotically contracting along particular proposition two indices signs pair transverse iii single generates group generate together subgroup since case finite benoist item relevant comment item benoist theorem works elements forces every actually make use comment item since taken benoist whole group full flag variety benoist theorem actually gives stronger property pair element open weyl chamber transverse full flag variety need weaker version using remark condition may restated following way two indices signs pair parabolic spaces transverse clearly every pair transverse spaces finite finite number pairs hence choose suitable value fix rest proof hypothesis follows condition iii follows algebraic group containing power generator must actually contain generator allows replace every power without sacrificing condition clearly conditions iii preserved well choose large enough may suppose thanks proposition numbers small wish gives fact shall suppose every smain even smaller constant smain specified 
soon satisfy replace maps maps canonizing map need check break first three conditions indeed every even better since affine map construction canonical form geometry meaning agi hence still satisfy hypotheses contraction strength along agi smain similarly recall hence quantity depends fact equal norm matrix follows choose smain hypothesis satisfied conclude group generated elements acts properly discontinuously lemma free result nonabelian since linear part references abels properly discontinuous groups affine transformations survey geom dedicata ams abels margulis soifer auslander conjecture dimension less preprint abels margulis soifer zariski closure linear part properly discontinuous group affine transformations differential geometry abels margulis soifer linear part affine group acting properly discontinuously leaving quadratic form invariant geom dedicata auslander structure compact locally affine manifolds topology benoist actions propres sur les espaces annals mathematics benoist asymptotiques des groupes geom funct benoist quint random walks ductive groups ergebnisse appear available http borel tits groupes publications borel tits article groupes publications dgk danciger kassel proper affine action coxeter groups preparation eberlein geometry nonpositively curved manifolds university chicago press fried goldman affine crystallographic groups adv hall lie groups lie algebras representations elementary introduction springer international publishing second edition helgason geometric analysis symmetric spaces amer math second edition reat humphreys linear algebraic groups knapp lie groups beyond introduction margulis free properly discontinuous groups affine transformations dokl akad nauk sssr margulis complete affine locally flat manifolds free fundamental group soviet milnor fundamental groups complete affinely flat manifolds adv smi smilga proper affine actions representations submitted available smilga proper affine actions semisimple lie algebras annales institut fourier tits groupe sur corps quelconque journ reine angw | 4 |
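Two classical facts invoked in the preceding argument, stated here in standard form; the notation (rho for the spectral radius, alpha for the scalar Margulis invariant of an affine transformation of Minkowski 3-space) is ours and is only meant to illustrate the special case that the vector-valued invariant discussed above generalizes.

\[
\rho(g) \;=\; \lim_{n\to\infty} \|g^n\|^{1/n} \;\le\; \|g\| ,
\]
so that $\log\rho(g)\le\log\|g\|$ for every linear map $g$ (Gelfand's formula), and, for an affine transformation $g$ of $\mathbb{R}^{2,1}$ whose linear part $\ell(g)$ is hyperbolic with neutral eigenvector $v_g$ (eigenvalue $1$, suitably normalized),
\[
\alpha(g) \;=\; \big\langle\, g(x)-x,\; v_g \,\big\rangle ,
\]
a quantity that does not depend on the choice of the point $x$, since $\ell(g)$ preserves the bilinear form and fixes $v_g$.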
artificial intelligence fabio massimo zanzotto university rome tor vergata oct abstract little little newspapers revealing bright future intelligence building intelligent machines help everywhere however bright future dark side dramatic job market contraction unpredictable transformation hence near future large numbers job seekers need support catching novel unpredictable jobs possible job market crisis antidote inside fact rise sustained biggest knowledge theft recent years learning machines extracting knowledge unaware skilled unskilled workers analyzing interactions passionately jobs workers digging graves paper propose intelligence fairer paradigm intelligence systems reward aware unaware knowledge producers scheme decisions systems generating revenues repay legitimate owners knowledge used taking decisions modern robin hoods researchers fairer intelligence gives back steals introduction edge wonderful revolution intelligence breathing life helpful machines relieve need perform repetitive activities cars taking steps urban environment younger brothers assisted driving cars already commercial reality robots vacuum cleaning mopping houses conquered new smartphones help everyday tasks managing agenda answering factoid questions learning companions medicine computers already help formulating diagnoses looking data doctors generally neglect intelligence preparing wonderful future people released burden repetitive jobs bright intelligence revolution dark side dramatic mass unemployment precede unpredictable job market transformation people see http hence governments frightened nearly every week newspapers world reporting possible futures around one actual jobs disappear alarming reports foresee one billion people unemployed worldwide releasing people repetitive jobs intelligent machines replace many workers chatbots slowly replacing call center agents trains already reducing number drivers trains cars replace cab drivers cities drones expanding automation managing delivery goods drastically reducing number delivery people examples even cognitive artistic jobs challenged intelligent machines may produce music jingles commercials write novels produce news articles intelligent risk predictors may replace doctors chatbots along massive open online courses may replace teachers professors coders risk replaced machines nobody job safe face overwhelming progress intelligence surprisingly rise intelligence supported unaware mass people risk seeing jobs replaced machines people giving away knowledge used train wonderful machines enormous legal knowledge theft taking place modern era along aware programmers intelligence researchers set learning modules intelligent machines unaware mass people providing precious training data passionately job simply performing activity net answering email interaction messaging service leaving opinion hotel simple everyday activities people data goldmine intelligence machines learning systems transform interactions knowledge intelligence machines knowledge theft completed normal everyday activity people digging grave jobs researchers intelligence tremendous responsibility building intelligent machines work rather intelligent machines steal knowledge jobs need ways support job seekers train catch novel unpredictable jobs need prepare antidote spread poison job market paper propose artificial intelligence novel paradigm responsible intelligence possible antidote poisoning job market idea simple giving right value knowledge producers umbrella researchers intelligence 
working underlying idea hence promotes interpretable learning machines therefore intelligence systems clear knowledge lifecycle systems clear whose knowledge used deployment situations way give rightful credit revenue original knowledge producers need fairer intelligence rest paper organized follows section describes enabling paradigms section sketches simple proposals better future section draws conclusions enabling paradigms transferring knowledge machines programming learning repeated experience since beginning digital era programming preferred way teach machines artificial programming languages developed clear tool tell machines according paradigm whoever wants teach machines solve new task useful master one programming languages people called programmers teaching machines decades made machines extremely useful nowadays think staying single day without using big network machines programmers contributed building tasks solved programming autonomous learning reinforced alternative way controlling behavior machines autonomous learning machines asked learn experience paradigm programming asked machines school machines learned walk trial error machines always good solving complex cognitive tasks poor working everyday simple problems paradigm autonomous learning introduced solve problem two paradigms paid transferring knowledge machines paid programming paradigm roles clear programmers teachers machines learners hence programmers could payed work autonomous learning paradigm activity programmers selection appropriate learning model examples show learning machines point view programming fair paradigm keeps humans loop although machines taught exactly hardly called artificial intelligence contrary autonomous learning unfair model transferring knowledge real knowledge extracted data produced unaware people hence little seems done humans machines seem whole job yet knowledge stolen without paying explainable artificial intelligence explainable machine learning explaining decisions learning machines hot topic nowadays dedicated workshops sessions major conferences areas application example medicine thrust intelligent machines blind decisions deep impact humans hence understanding decision taken become extremely important however exactly explainable machine learning model still open debate explainable machine learning play crucial role fact seen another perspective explaining machine learning decisions keep humans loop two ways giving last word humans explaining data sources responsible decision case decision power left hand specialized professionals use machines advisers clear case yet highly specialized knowledge workers area second case instead fairly important fact machines take decisions work task constantly using knowledge extracted data spotting data used decision action machine important order give credits produced data general data produced anyone everyone knowledge workers hence understanding machine takes decision may become way keep everybody loop intelligence convergence symbolic distributed knowledge representation explaining machine learning decisions simpler image analysis better cases system representation similar represented fact example neural networks interpreting images generally interpreted visualizing subparts represent salient subparts target images input images subparts tensors real numbers hence networks examined understood however large part knowledge expressed symbols natural languages combination symbols used convey knowledge fact natural languages sounds transformed 
letters ideograms symbols composed produce words words form sentences sentences form texts discourses dialogs ultimately convey knowledge emotions composition symbols words words sentences follow rules hearer speaker know hence symbolic representations give clear tool understand whose knowledge used machines current intelligence systems symbols fading away erased tensors distributed representations distributed representations pushing deep learning models towards amazing results many tasks image recognition image generation image captioning machine translation syntactic parsing even game playing human level strict link distributed representations symbols approximation second representation input output networks internal representation strict link tremendous opportunity track symbolic knowledge knowledge lifecycle way symbolic knowledge producers rewarded unaware work simple proposal better future peasant late century would never imagined years yoga trainer pet caretaker ayurveda massage therapist cite technology unrelated jobs common jobs also extremely likely wise politician period lack imagination even though time spend imagine future less pressure job loss today situation similar end century complication speed revolution peasants politicians hardly imagine next job market see trends hard exactly imagine skills needed part labor force future yet revolution overwhelming risks elimination many jobs near future may happen society envisage clear path relocating workers urge strategy immediate intelligence revolution based enormous knowledge theft skilled unskilled workers everyday jobs leave important traces traces training examples machines use learn hence intelligence using machine learning stealing workers knowledge learning interactions unaware workers basically digging graves jobs knowledge produced workers used machines going produce revenues machine owners years major problem since small fraction population revenue source real owners knowledge participating redistribution wealth model propose intelligence seeks give back part revenues unaware knowledge producers key idea interaction machine constantly repay whoever produced original knowledge used interaction obtain repayment need work major issue determine clear knowledge lifecycle performs compete tracking knowledge initial production decision processes machine hence need promote intelligence models explainable track back initial training examples originated decision way clear decision made rewarded fraction decision producing managing ownership knowledge poses big technological moral issues certainly complex simply using knowledge forgetting source interaction tracked assigned individual hence issues clear people web mandatory second privacy become overwhelming legal issue finally pursue ecosystem fair intelligence solutions need invest following enabling technologies legal aspects explainable artificial intelligence must order reword knowledge producers systems need exactly know responsible decision symbiotic symbolic distributed knowledge representation models needed large part knowledge expressed symbols trusted technologies knowledge clear correctly tracked virtual identity protocols mechanisms systems need exactly reworded privacy preserving protocols mechanisms although systems need know rewarded privacy preserved studying extensions copyright unaware knowledge production legal solution safeguard unaware knowledge producers conclusions job market contraction dark side shining future promised intelligence systems unaware 
skilled and unskilled knowledge workers are digging the graves of their own jobs by passionately doing their normal everyday work, while learning machines extract knowledge from their interactions: the gigantic knowledge theft of the modern era. This paper proposed a fairer, human-in-the-loop approach to artificial intelligence in which, like modern Robin Hoods, researchers give back what intelligent machines steal from the skilled and unskilled workers who produce the knowledge those machines earn revenue with, returning a large part of that revenue to its legitimate owners.
References
David Aha, Trevor Darrell, Michael Pazzani, Darryn Reid, Claude Sammut, and Peter Stone. Proceedings of the IJCAI Workshop on Explainable Artificial Intelligence (XAI), August.
Peter Austin, Jack Tu, Jennifer Ho, Daniel Levy, and Douglas Lee. Using methods from the data-mining and machine-learning literature for disease classification and prediction: a case study examining classification of heart failure subtypes. Journal of Clinical Epidemiology.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint.
Roberta Beccaceci, Francesca Fallucchi, Cristina Giannone, Francesca Spagnoulo, and Fabio Massimo Zanzotto. Education with living artworks in museums. In CSEDU.
Briot, Hadjeres, and Pachet. Deep learning techniques for music generation: a survey. arXiv preprint.
Noam Chomsky. Aspects of the Theory of Syntax. MIT Press, Cambridge, Massachusetts.
Michael Chui, James Manyika, and Mehdi Miremadi. Where machines could replace humans, and where they can't (yet). McKinsey Quarterly, July.
Lorenzo Ferrone and Fabio Massimo Zanzotto. Towards compositional distributional semantic models. In Proceedings of COLING, International Conference on Computational Linguistics: Technical Papers, Dublin, Ireland, August. Dublin City University and Association for Computational Linguistics.
Lorenzo Ferrone, Fabio Massimo Zanzotto, and Xavier Carreras. Decoding distributed tree structures. In Statistical Language and Speech Processing: Third International Conference, SLSP, Budapest, Hungary, November, Proceedings.
Patrizia Ferroni, Fabio Massimo Zanzotto, Noemi Scarpato, Silvia Riondino, Umberto Nanni, Mario Roselli, and Fiorella Guadagni. Risk assessment for venous thromboembolism in ambulatory cancer patients: a machine learning approach. Medical Decision Making.
Chelsea Gohd. Your next teacher could be a robot. February.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems.
Andrew Gray, Mohammad Ali, Yiqi Gao, Hedrick, and Francesco Borrelli. Semiautonomous vehicle control for road departure and obstacle avoidance. IFAC Control in Transportation Systems.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint.
Eric Jonathon Miner. Robotic sweeper cleaner with dusting pad. Patent, March.
Alice Kerly, Phil Hall, and Susan Bull. Bringing chatbots into education: towards natural language negotiation of open learner models. Knowledge-Based Systems.
Kim, Malioutov, Varshney, and Weller. Proceedings of the ICML Workshop on Human Interpretability in Machine Learning (WHI). arXiv, August.
Konstantina Kourou, Themis Exarchos, Konstantinos Exarchos, Michalis Karamouzis, and Dimitrios Fotiadis. Machine learning applications in cancer prognosis and prediction. Computational and Structural Biotechnology Journal.
Yann LeCun, Yoshua Bengio, and Hinton. Deep learning. Nature.
Hod Lipson and Melba Kurman. Driverless: Intelligent Cars and the Road Ahead. MIT Press.
Zachary Chase Lipton. The mythos of model interpretability. CoRR.
Todd Litman. Autonomous vehicle implementation predictions. Victoria Transport Policy Institute.
Jerome Lutin, Alain Kornhauser, and Eva Lerner-Lam (MASCE). The revolutionary development of self-driving vehicles and implications for the transportation engineering profession. Institute of Transportation Engineers, ITE Journal.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint.
Megan Murphy. Ginni Rometty on the end of programming. Bloomberg Businessweek, September.
Plate. Distributed representations and nested compositional structure. PhD thesis.
Plate. Holographic reduced representations. IEEE Transactions on Neural Networks.
Revathi and Sarma Dhulipala. Smart parking systems and sensors: a survey. In International Conference on Computing, Communication and Applications (ICCCA). IEEE.
Schmidhuber. Deep learning in neural networks: an overview. Neural Networks.
David Silver, Aja Huang, Chris Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint.
Charles Taylor, Andrew Parker, Shek Lau, Eric Blair, Andrew Heninger, Eric, Enrico DiBernardo, Robert Witman, and Michael Stout. Robot vacuum cleaner. Patent, June.
Miroslav Trajkovic, Antonio Colmenarez, Srinivas Gutta, and Karen Trovato. Computer vision based parking assistant. Patent, January.
Iwan Ulrich, Francesco Mondada, and Nicoud. Autonomous vacuum cleaner. Robotics and Autonomous Systems.
Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Hinton. Grammar as a foreign language. In Cortes, Lawrence, Lee, Sugiyama, and Garnett, editors, Advances in Neural Information Processing Systems. Curran Associates.
Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: a neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Richard Wallace. The anatomy of A.L.I.C.E. Springer Netherlands, Dordrecht.
David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. Structured training for neural network transition-based parsing. arXiv preprint.
Joseph Weizenbaum. ELIZA: a computer program for the study of natural language communication between man and machine. Communications of the ACM.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. Show, attend and tell: neural image caption generation with visual attention. arXiv preprint.
Zou, Richard Socher, Daniel Cer, and Christopher Manning. Bilingual word embeddings for phrase-based machine translation. In EMNLP. | 2
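The proposal above asks that every decision an intelligent machine takes be traceable back to the people whose data trained it, so that part of the revenue generated by the decision can flow back to them. A minimal toy sketch of that idea follows; it is not taken from the paper, and the contributor names, the nearest-neighbour rule and the equal credit split are illustrative assumptions only.

import numpy as np

def predict_and_credit(x, X_train, y_train, contributors, k=3):
    # classify x by majority vote among its k nearest training examples,
    # and split one unit of credit among the contributors of those examples
    d = np.linalg.norm(X_train - x, axis=1)        # distance to every training point
    idx = np.argsort(d)[:k]                        # indices of the k closest examples
    votes = y_train[idx]
    label = int(np.bincount(votes).argmax())       # the machine's "decision"
    credit = {}
    for i in idx:
        credit[contributors[i]] = credit.get(contributors[i], 0.0) + 1.0 / k
    return label, credit

# usage: two hypothetical contributors, "alice" and "bob", supplied the labelled data
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
who = ["alice", "alice", "bob", "bob"]
print(predict_and_credit(np.array([0.95, 1.0]), X, y, who, k=3))

Any payment scheme built on such an attribution is of course a design choice; the point of the sketch is only that an example-based, interpretable model makes the knowledge lifecycle explicit enough to reward its producers.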
test ideals rings finitely generated algebras mar alberto chiecchio florian enescu lance edward miller karl schwede abstract many results known test ideals rings paper generalize many results case symbolic rees algebra finitely generated generally log setting particular show numbers discrete rational show test ideals described alterations hence show splinters strongly setting recovering result singh demonstrate multiplier ideals reduce test ideals reduction modulo symbolic rees algebra finitely generated prove type stabilization still holds also show test ideals satisfy global generation properties setting introduction test ideals introduced hochster huneke theory tight closure within positive characteristic commutative algebra discovered test ideals closely related multiplier ideals theory test ideals pairs developed analogous theory multiplier ideals however unlike multiplier ideals test ideals initially defined even without hypothesis see similar theory multiplier ideals hypothesis useful test ideals indeed number central open questions still unknown without goal paper generalize results hypothesis setting local section ring also known symbolic rees algebra finitely generated notably perhaps important open problem within tight closure theory question whether weak strong equivalent generally whether splinters strong equivalent characteristic zero perspective splinters weak strong competing notions singularities analogous klt singularities known coincide case known equivalent hypothesis conditions previously singh announced proof splinters finitely generated strongly recover new proof result fact show something stronger prove big test ideal equal image construction involving alterations theorem theorem corollary suppose normal integral scheme effective finitely generated exists alteration normal factoring mathematics subject classification key words phrases anticanonical test ideal multiplier ideal fourth named author supported part nsf frg grant dms nsf career grant dms sloan fellowship chiecchio enescu miller schwede proj image finite type perfect field one may take regular alternately one may take finite map case almost certainly regular consequence obtain image runs alterations factoring alternately one run finite maps additionally finite type perfect field one run regular alterations factoring actually prove stronger theorem allowing triples include keep statement simpler note characteristic zero intersection regular alterations characterized multiplier ideals least observing remark inspired analog multiplier ideals lot interest showing jumping numbers test ideals rational without limit points point know numbers discrete rational scheme qcartier also know discreteness finitely generated spec spectrum graded ring hand know jumping numbers discrete rational finitely generated see remark prove following theorem theorem proposition suppose pair finitely generated ideal numbers rational without limit points prove discreteness result two ways first pass local section ring symbolic rees algebra pullback show test ideal symbolic rees algebra restricts test ideal original scheme alternately section prove discreteness result projective varieties utilizing theory developed first author urbinati particular show global generation result test ideals theorem immediately implies test ideal result another setting hypothesis used study ideals recall normal index divisible follows images map homr stabilize sufficiently large stable image gives canonical scheme structure locus variety generalize case 
finitely generated includes case index divisible theorem corollary theorem theorem suppose normal domain divisible algebra finitely generated image evaluation map homr stabilizes sufficiently divisible test ideals algebras give several different proofs fact utilizing different strategies finally also show theorem theorem suppose normal variety algebraically closed field characteristic zero suppose finitely generated also suppose ideal rational number atp compared analogous result shown hypothesis numerically numerically condition somewhat orthogonal finite generation particular finitely generated numerically difficult see see also remark previous version paper included incorrect statement lemma version also corrects published version fixes statement making weaker fortunately needed weaker statement applications acknowledgements authors would like thank tommaso fernex christopher hacon nobuo hara mircea anurag singh several useful discussions would also like thank juan felipe several useful comments previous draft paper finally would like thank referee numerous valuable comments pointing mistake lemma previously lemma removed preliminaries section recall basic properties need test ideals local section well theory positivity divisors developed urbinatichiecchio conclude stating finite generation result local section rings threefolds positive characteristic consequence recent breakthroughs mmp setting throughout paper rings assumed noetherian equal characteristic implies excellent dualizing complexes schemes assumed noetherian separated dualizing complexes variety separated integral scheme finite type field scheme use denote absolute frobenius morphism also make following universal assumption holds schemes essentially finite type field even essentially finite type local ring frequently also consider divisors schemes whenever talk divisors make universal assumption normal integral particular whenever consider pair implicitly assumed normal make one remark nonstandard notation use normal domain weil divisor spec use denote fractional ideal called divisorial symbolic rees algebras commutative algebra literature chiecchio enescu miller schwede test ideals recall definitions basic properties test ideals test ideals introduced technically talking test ideal particular definition test ideal presented found definition among places definition test ideals suppose normal domain ideal sheaf real number test ideal unique smallest nonzero ideal every every homr homr leave writing write obvious test ideal exists however pit shown exists varies homr see lemma element called big element immediately obtain following construction test ideal lemma notation definition big element ranges elements also range elements homr one may homr alternately one may replace finally also sufficiently large cartier divisor tre proof first statement easy see contained ideal satisfying condition homr hence sum thus sum smallest ideal second statement replacing obviously containment notice test element hence one form original sum inclusion follows statement notice sum still final characterization test ideal follows immediately fact divx omox notice difference coming fact round instead round absorbed difference divx also recall properties test ideal later use lemma suppose definition formation commutes localization one define schemes well test ideals algebras exists corresponding cartier divisor proof part follows immediately lemma part follows similarly use projection formula part obvious also lemma part exercise let quickly sketch 
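The object whose finite generation is assumed throughout can be written down explicitly; the rounding and grading conventions below are the standard ones and are offered only as a sketch of the setup, so they may differ in minor details from the paper's own conventions.

\[
\mathcal{R}(X,D) \;=\; \bigoplus_{m\ge 0} \mathcal{O}_X\big(\lfloor mD\rfloor\big),
\qquad
\mathcal{R}^{(n)} \;=\; \bigoplus_{m\ge 0} \mathcal{O}_X\big(\lfloor mnD\rfloor\big),
\]
for a normal integral Noetherian scheme $X$ and a $\mathbb{Q}$-Weil divisor $D$, with $\mathcal{R}^{(n)}$ the $n$-th Veronese subalgebra. The case of interest is $D=-K_X$, where $\mathcal{R}(X,-K_X)=\bigoplus_{m\ge 0}\mathcal{O}_X(-mK_X)$ is the anticanonical (symbolic Rees) algebra; when it is finitely generated one may form $Y=\mathrm{Proj}_X\,\mathcal{R}(X,-K_X)\to X$, the small morphism used throughout.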
proof since know reference addressed full generality choose test element easy choose works write cdr element sum finite sum say let cdr runs homr see containment handled finally make one definition related test ideals definition triple definition called strongly briefly also recall formalities maps connections divisors lemma suppose spec normal scheme bijection effective divisors elements homr modulo units use following notation correspondence homr corresponds map corresponds divisor div homr using bijection effective elements homr homr modulo multiplication units bijection divisors course proof theorem straightforward exercise see exercise theorem also difficult check see instance definition chiecchio enescu miller schwede local section rings divisors symbolic rees algebras suppose normal integral noetherian scheme one form additionally integer use denote nth veronese subalgebra note canonical map dually map spec schemes note may noetherian equivalently spec noetherian also proj maps well behaved outside codimension recall map proj called small projective morphism strict transform moreover map exists noetherian lem finitely generated spec proj normal see instance also see lemma suppose finitely generated closed subset codimension also codimension spec proj respectively additionally isomorphism outside closed codimension subset integral outside set codimension consequence canonical pullbacks proof since symbolic rees algebra module rank map proj small lem case verified locally begin assumption integral let spec suppose codimension component whose support defined prime height ideal lemma height zero impossible since defines subset set codimension case integral observe result holds veronese algebra sufficiently divisible finite algebra see proof lemma result follows fact bundle least outside set codimension follows immediately fact case integral cartier divisor section ring looks locally like outside set codimension remark separated pullback coincides pullback fernex hacon see remark interested proving various section rings finitely generated recall lemma notation defined start section finitely generated finitely generated equivalently suppose finite dominant map another normal integral noetherian scheme let finitely generated finitely generated proof part exactly lemma although also found numerous sources know good reference sketch proof may assume integral also harmless assume spec test ideals algebras affine hence spec pass category commutative rings actually rings sheaves rings particular suppress notation might otherwise need diagram first choose single element identify follows finite hence integral finite extension well let integral closure inside want show complete proof recall already assumed integral let closed set codimension outside cartier consider functor applied rings sheaves rings involved direct sum reflexive global sections hartog lemma reflexive sheaves thus identified since affine hand codimension subset spec outside obviously agree hence spec normal also spec shown desired also need understand canonical divisors spec proj lemma continuing notation start section assuming finitely generated kproj additionally weil divisor locally base kspec particular kspec kspec thus proof recall outside set codimension hence makes sense computation kspec found theorem initial statement kproj obvious since small positivity divisors section recall definitions results let recall morphism schemes coherent sheaf relatively globally generated generated natural map surjective normal scheme weil 
divisor might example generated account pathologies work asymptotically say relatively asymptotically globally generated generated positive sufficiently divisible let projective morphism normal noetherian schemes divisor every spec simply say nef def every ample exists algebra local sections finitely generated spec say ample def notice notions coincide usual ones nefness amplitude remark amplitude weil divisors called chiecchio enescu miller schwede given two conditions positivity one based fact regular ample cone interior nef cone technical one finite generation algebra local sections two conditions independent particular examples weil divisors satisfying positivity condition algebra local sections finitely generated example notions positivity behave much like world example ample weil globally generated divisor ample lem lemma let normal noetherian projective scheme field algebra local section finitely generated exists cartier divisor ample weil divisor proof notice ample ample lem without loss generality assume generated degree integral let ample cartier divisor definition exists globally generated surjection last equality consequence assumption finite generation algebra local sections thus globally generated asymptotically globally generated moreover since cartier also finitely generated lem ample main characterization positivity terms let normal projective noetherian scheme algebraically closed field let finitely generated let proj notice theorems using characterization urbinati first author proved fujita vanishing locally free sheaves cor let normal projective noetherian scheme algebraically closed field let ample let locally free coherent sheaf exists integer positive divisible nef cartier divisors pullback weil divisors let proper birational morphism normal noetherian separated schemes fernex hacon introduced way pulling back weil divisor via weil divisor along denoted weil divisor def negative sign appearing effective pulling back ideal defining subscheme pullback along lim inf lim test ideals algebras infimum limit limit lem def moreover definition coincides usual one whenever prop remark small projective birational morphism notion pullback quite functorial unfortunately let two birational morphisms normal noetherian separated schemes weil divisor divisor effective moreover invertible sheaf lem lemma let normal noetherian scheme let weil divisor finitely generated let proj let positive sufficiently divisible particular reflexive sufficiently divisible proof since see lemma generated positive sufficiently divisible natural map surjective positive sufficiently divisible since small integers proof see lemma thus positive sufficiently divisible surjection notice isomorphic quotient torsion caution since torsion free surjection induces surjection hand since small integers since natural inclusion lemma let normal noetherian separated scheme let weil divisor finitely generated let proj let birational morphism factoring proj normal noetherian separated scheme positive sufficiently divisible therefore proof application lemma explain consider following chain equalities first last equalities definition third lemma since sufficiently divisible proves first statement final statement consequence fact sufficiently divisible finite generation hypothesis chiecchio enescu miller schwede lemma suppose composition birational morphisms normal varieties weil proof check identity suffices show orde orde prime divisor generic point prime divisor gives rise dvr sufficiently divisible positive integer set 
power agreeing definition orde lim similar calculation computes finally birational keeps coefficients divisors contracted thus orde orde prime divisor desired define multiplier ideals way slight generalization one definition let normal variety algebraically closed aover field characteristic zero divisor formal product fractional ideal sheaves collection data called triple denoted say triple effective jkak ideals remark notice assume remark triple effective effective pair sense definition definition let effective triple let positive integer let log resolution pair definition theorem let jkak formal product define sheaf remark reason new notation notation slightly general one particular fernex hacon include boundary divisor term might cause confusion since reader might think one could absorb divisor ideal indeed divisor formal combination height ideals unfortunately yield object particular yield usual multiplier ideal even difference asymptotics already built whereas asymptotics built particular let denote formal product ideals corresponding obvious way general mkx mkx mkx first containment lemma second consequence remark test ideals algebras lemma let effective triple sheaf coherent sheaf ideals definition independent choice proof proof proceeds proof lemma let effective triple set ideal sheaves unique maximal element proof positive integers jmq remark previous lemma two ideals computed common resolution unique maximal ideal exists noetherianity definition let effective triple call unique maximal element multiplier ideal triple denote remark case write definition agrees one corollary working characteristic zero suppose finitely generated proj let ideal sheaf proof let projx enough show every satisfying result lemma let log resolution factoring let jkak let since log resolution let since log pair multiplier ideal tak hand multiplier ideal tak satisfying lemma therefore satisfying tak tak tak remark assumptions corollary follows immediately jumping numbers rational without limit points recall chiecchio enescu miller schwede jumping numbers real numbers also follows try image runs alterations factoring aoy invertible finite generation local section rings threefolds characteristic course one might ask often even happens section ring finitely generated rational surface singularities characteristic known always locally torsion divisor class group see theorem obviously finitely generated however threefolds rational singularities enough example cutkosky even additionally log canonical course characteristic zero finite generation section rings holds klt dimension minimal model program theorem course closely linked existence flips using recent breakthroughs minimal model program threefolds characteristic one prove finite generation important cases dimension characteristic proof essentially characteristic zero see exercises reproduce reader convenience theorem let klt pair dimension algebraically closed field char algebra finitely generated proof let small exists theorem strict transforms notice set klt theorem since big see oxb kxb finitely generated since small implies finitely generated well since algebras however taking high veronese recalling locally contributes nothing finite generation conclude finitely generated desired course also implies strongly pairs finitely generated local section algebras since always klt appropriate boundary stabilization discreteness rationality via rees algebras section aim prove discreteness rationality jumping numbers test ideals well stabilization results 
hypothesis algebra finitely generated first notice extend maps maps note argument substantially simpler fourth author tucker obtain similar results finite maps test ideals algebras lemma suppose normal domain weil divisor spec associated algebra map induced map commutative diagram projection map onto degree zero proof first note give structure induced map homogeneous idea simple given integer ipe want show ipe obvious since holds codimension sheaves reflexive finally simply send zero divisible completes proof fact difficult see every homogeneous map comes way lemma suppose normal domain weil divisor spec suppose homogeneous map give induced lemma proof choose invert element make cartier principal generates element degree zero see ump hence ump point choose regardless choice hence completely determined lemma key following proposition lets relate maps general proposition suppose normal domain weil divisor algebra finitely generated particular noetherian ring suppose effective weil divisor spec pullback spec commutative diagram homs homs homr homr chiecchio enescu miller schwede map projection onto coordinate map restricts homs projects onto furthermore maps surjective proof first handle commutativity given homs see hand well hence commutativity right square commutativity left square obvious since pulled back spec see surjective homr construct lemma obviously similarly lemma implies surjectivity map immediate corollary obtain stabilization result similar corollary suppose normal domain weil divisor algebra finitely generated image map homr stabilizes proof set consider diagram proposition since cartier see images eval ese homs stabilize see instance since proposition surjects see image eval homs homr image coincides homr image homs however ese stable image already observed result follows later theorem obtain result whose divisible though move discreteness rationality numbers generalizing theorem case graded ring theorem suppose normal domain effective finitely generated ideal numbers rational without limit points proof first let separable extension normal domains corresponding spec integral divisor map schemes spec easy idea simply take roots generators dvrs one take pth root use type equations see lemma let trace map recall main result immediately follows numbers test ideal discrete rational numbers additionally adding cartier divisor assume effective since lemma finally note test ideals algebras finitely generated lemma upshot entire paragraph course may without loss generality assume integral effective divisor next choose test element choice easy simply choose test element additionally cartier away looks locally like certainly strongly wherever strongly let cartier divisor corresponding consider commutative diagram homs eval homs homr homr eval sum images bottom rows equal sum images top row equal since surjects proposition immediately see observe lemma numbers discrete rational result follows immediately obtain following using aforementioned breakthroughs mmp corollary suppose strongly dimension finite type algebraically closed field characteristic numbers rational without limit points choice ideal proof since strongly exists divisor qcartier klt result follows theorem theorem course also obtain discreteness rationality numbers ring finite type algebraically closed field characteristic exists klt general type result corollary used compatibility formation rees algebras prove images homr stabilize large finitely generated short section generalize result case least whose divisible alternate 
strategy one could try prove compatibilities analogous proposition rees algebras unfortunately gets quite messy instead take different approach utilizing proj first prove result varieties handle finitely generated case via small map restrict case divisible realize methods discuss apply general situations several potential competing definitions stable image proposition suppose pair suppose weil index divisible integral weil divisor image homr pne stabilizes large chiecchio enescu miller schwede proof fix cartier divisor main idea module homr pne takes values finitely many sheaves least twisting line bundles particular multiples also take advantage fact sufficient show images stabilize partially chain claim fix consider homr pne factors homr hence sufficient show images homr pne homr stabilize proof claim one simply notices pne hence pne contains ization occurs simply restriction scalars thus claimed continue main proof note mod eventually periodic choose linear function mod constant set pre mod mod note homr mod inverting element necessary may assume thus utilizing maps maps frobenius pushforwards least unit apply standard theorem may checked conclude images stabilize codimension since sheaves reflexive maps determined codimension however localizing reduce codimension complicated twisting done irrelevant furthermore codimension gorenstein index divisible since weil index test ideals algebras divisible chain maps turns homr homr bottom horizontal map obtained via note inclusion identified multiplication defining equation pce independent maps chain really pushforward claimed note completes proof even though proved stabilization images subset images descending subset infinite position prove corollary general situation theorem suppose normal domain divisible algebra finitely generated image evaluation map homr stabilizes sufficiently divisible proof proof phrase maps terms trace thus fix integral weil divisor let proj spec observe also still divisible lemma hence images pne stabilize proposition fact argument even shows images even stabilize finite stage pne however terms maps chain take finitely many values twisting large cartier multiples argued proposition goal thus show images stabilize pushing forward chiecchio enescu miller schwede claim one applies obtaining chain images still stabilizes proof claim choose image equal stable image denote note proposition finitely many conditions observe finitely many twisting large multiples fact ample implies exists pne image image pne image map factors image assumption stable image applied choice follows composition surjects surjects every every note depend thus since image see pne image pne image clearly proves desired stabilization return proof theorem trivial observe pne pne since small hence proof complete stabilization discreteness via positivity previous section showed discreteness rationality numbers via passing local section algebra symbolic rees algebra already knew discreteness rationality section recover discreteness result projective setting using methods allow apply asymptotic vanishing theorems weil divisors indeed first prove global generation results test ideals employing similar methods setting let normal projective variety characteristic effective weil ideal sheaf make assumptions test ideals algebras assume line bundle global sections globally generate let symc denote cth symmetric power vector space observe symc globally generates thus surjection sheaves symc lemma divisible dividing cartier divisor finite set integers pei integral 
pei equals image symt pei trei pei level result obvious technicalities involve showing various rounding choices make give result end since absorb differences test element local generator include complete proof invite reader skip already familiar type argument proof statement end local trivializing suffices show trei pei pick effective cartier divisor corresponding vanishing locus test element integer tre cartier prop equality also holds one always pick cartier one obtains inclusions tre tre tre next consider claim allow restrict multiples claim weil divisor exists cartier divisor integer integer chiecchio enescu miller schwede proof prove claim first note lemma among many places alp upper bound number generators note works set div notice trb trb applying proves claim return proof lemma claim previous work implies sufficiently large cartier divisor depending tre therefore tre pick cartier divisor tre tre tre tre tre tre particular tre since divisible integral divisor sufficiently divisible hence choosing sufficiently divisible noting scheme noetherian sum finite obtain desired result test ideals algebras remark certainly possible generalize handle handle integral generalizations ones need particular need power times locally free sheaf theorem suppose normal projective finitely generated divisible fix exists cartier globally generated dividing proof choose line bundle globally generated sections lemma cartier divisor integers test ideal equal image symc pei globally generated summand fix globally generated ample cartier divisor claim suffices find cartier divisor pei pei pei globally generated equality displayed equation follows projection formula indeed assuming global generation choose dim note image globally generated sheaf still globally generated find single works since finitely generated use lemma find cartier divisor ample weil divisor moreover find ample cartier divisor ample notice ample observation lets replace set lem ample weil divisor fix show regularity respect zero guarantees mumford theorem thm desired global generation suffices show projection formula fact change underlying sheaves abelian groups showing dim since may assume nef therefore ample weil nef cartier may apply version fujita vanishing thm obtain vanishing desired completes proof remark discussion choose effectively chiecchio enescu miller schwede remark indeed hard choose effectively summarizing proof fix ample cartier globally generated fix globally generated ample cartier divisor fix cartier ample choose ample cartier ample take turn promised results discreteness rationality proposition suppose normal projective finitely generated ideal sheaf jumping numbers without limit points proof first assume divisible follows appropriately generalized version argument lemma hence every real number rational number dividing fix follows theorem exists cartier divisor globally generated every divide previous discussion also see globally generated every discreteness follows since form decreasing chain subspaces finite dimensional vector space course global generation hypothesis proves result divisible next assume divisible fix map inducing map fraction fields theorem map induces possibly weil divisor choose cartier divisor effective notice also divisible next observe hence finitely generated note cartier thus harmless really taking veronese hence already shown numbers limit points therefore applying via see numbers also limit points numbers limit points proving theorem global generation stabilization give another proof 
corollary projective setting theorem suppose projective pair finitely generated divisible images image stabilize sufficiently large divisible use denote stable image test ideals algebras proof choose globally generated ample cartier divisor cartier divisor ample weil divisor integral set image fixing dim image immediately notice respect hence image globally generated global generating sections lie finite dimensional form descending chain ideals increases see stabilizes sufficiently large divisible claimed immediate corollary proof obtain corollary suppose projective pair dimension finitely generated divisible cartier divisor ample weil divisor globally generated ample cartier divisor globally generated alterations section give description test ideal assumption finitely generated generalizes case consequence obtain generalization result singh also compare starting let fix notation recall following section setting suppose normal scheme finitely generated proj suppose ideal sheaf real number already seen pullback becomes divisor see lemma suppose alteration factors define equivalently define even though birational see section recall course small alteration meaning locus codimension coincides obvious pullback operation generally factors alteration birational define next lemma later section use notion notation parameter test modules concise introduction relation test ideals please see section result announced years ago distributed chiecchio enescu miller schwede lemma working setting integral veronese symbolic rees algebra generated degree atm atm proof know lemma sufficiently large cartier trex choose divisors cartier cartier since trx trx see trex already close claim choose cartier atm proof claim checking assertion easy certainly knock atm multiplication cartier divisor handling multiplication little tricke ier likewise certainly multiply notice finite generation hypothesis proves claim returning proof see atm atm atm atm proves lemma test ideals algebras remark tempting try use lemma give another proof discreteness rationality numbers appealing however seem work particular authors prove discreteness rationality jumping numbers mixed test ideals handled one could probably easily recover discreteness numbers via usual arguments gauge boundedness cartier algebras least case finite type field additional reading mixed test ideals pathologies invite reader look really convenient thing lemma purposes following lemma using notation lemma suppose alteration normal write invertible sheaf defined text setting proof easy indeed already know factors normalized blowup universal property blowups hand result immediately obtain following theorem suppose normal integral scheme effective finitely generated suppose also ideal sheaf rational number exists alteration normal factoring proj divy image may taken independently desired locally principal instance one may take small alteration desired alternately essentially finite type perfect field one may take regular consequence obtain image runs alterations invertible regular alterations finite type perfect field proof result follows immediately theorem combined lemma lemma indeed simply choose integer condition lemma satisfied apply theorem find alteration image map consider alterations occur intersection following theorem might seem require consider factor normalized blowup divisible however easy see dominated factor blowup blowups certainly smaller images one also must handle case varying quite done theorem generality authors treated need however argument 
essentially goes verbatim alternately argument remaining part statement follow immediately assertion case locally principal however proof theorem chiecchio enescu miller schwede alteration needed always taken finite cover normalized blowup ideal case normalized blowup atm coincides normalized blowup normalized blowup course setting proof constructed definitely finite different simplest constructed definitely finite fortunately reduce case finite least corollary suppose normal integral scheme effective finitely generated exists finite map normal factoring proj image proof let small alteration satisfying theorem ally assume integral simplicity notation next let stein factorization since small see result follows question one limit oneself separable alterations theorem particular always separable alteration image analogous result known however proof definitely separable rely theorem uses frobenius induce certain vanishing results possible could replaced cohomology killing arguments instance special case recover result anurag singh announced years ago corollary singh suppose splinter finitely generated strongly proof indeed splinter finite morphism map omox surjects however omox trace map identified map hence using corollary see since always denotes big test ideal proves strongly would natural try use show splinters strongly varieties characteristic using fact klt varieties satisfy finite generation anticanonical rings theorem gap following question suppose normal domain also splinter exist spec spec klt analogous result existence strongly varieties shown course fact splinters fact derived splinters characteristic would likely useful particular obtain following test ideals algebras corollary suppose three dimensional splinter also klt appropriate finite type algebraically closed field characteristic strongly reduction characteristic zero goal section show multiplier ideals reduce test ideals atp reduction characteristic least finitely generated begin preliminaries reduction process let scheme finite type algebraically closed field characteristic zero ideal sheaf one may choose subring finitely generated defined denote oxa models closed point spec denote corresponding reductions oxs defined residue field necessarily finite simple case spec prime scheme spec mod mod warning follows abuse terminology following way actually mean set closed points open dense set spec furthermore start actually mean closed point aforementioned common abuse notation expect cause confusion substantially shorten statements theorems lemma suppose normal variety algebraically closed field characteristic zero finitely generated particular finitely generated setting proj ring also finitely generated means denote proof note makes sense finitely generated naturally finitely generated reduction generators problem potentially algebra may symbolic rees algebra local section algebra oxp throughout proof constantly need choose technically restrict smaller open subset spec first record claim certainly well known experts claim weil divisor prime potentially depending claim oxp sheaves oxp proof claim see prove reflexive agrees outside codimension subset oxp course since reflexive omx omx isomorphism isomorphism certainly preserved via reduction characteristic reflexive least choose closed set codimension defined additional coefficients ones already needed define cartier note cartier oxp locally free agrees claim follows chiecchio enescu miller schwede return proof lemma next define blowup sufficiently divisible since small note 
still blowup oxp claim since cartier characteristic zero cartier reduction characteristic well still small notice relatively ample since obtained blowing oxp hence oxp finitely generated proj equal lemma follows immediately armed lemma proof main theorem section easy theorem suppose normal variety algebraically closed field characteristic zero suppose finitely generated also suppose ideal rational number atp sufficiently proof know oxe divisible sufficiently large log resolution singularities definition need oxe oxe oxe oxe invertible rewrite multiplier ideal oxe oxe observe equal note since independent choice least finitely generated choice sufficiently divisible since cartier know oxp kxp lemma shows oxp kxp atp combining equalities proves result remark theorem also implies strongly finitely generated klt nobuo hara gave talk result conference honor mel hochster birthday result never published corollary suppose variety algebraically closed field characteristic zero klt sense ideal sheaf rational atp proof follows minimal model program particular theorem finitely generated result follows immediately theorem test ideals algebras references aberbach maccrimmon results test elements proc edinburgh math soc bhatt derived splinters positive characteristic compos math birkar existence flips minimal models char appear annales scientifiques ens birkar cascini hacon mckernan existence minimal models varieties log general type amer math soc birkar waldron existence mori fibre spaces char blickle test ideals via algebras maps algebraic geom blickle smith discreteness rationality michigan math special volume honor melvin hochster blickle smith hypersurfaces trans amer math soc blickle schwede maps algebra geometry commutative algebra springer new york blickle schwede takagi zhang discreteness rationality jumping numbers singular varieties math ann blickle schwede tucker via alterations amer math boucksom fernex favre urbinati valuation spaces multiplier ideals singular varieties cascini tanaka base point freeness positive characteristic chiecchio minimal model program without flips chiecchio urbinati ample weil divisors algebra cutkosky weil divisors symbolic algebras duke math fernex docampo takagi tucker comparing multiplier ideals test ideals numerically varieties bull lond math soc fernex hacon singularities normal varieties compos math jong smoothness alterations inst hautes sci publ math demazure anneaux normaux introduction des travaux cours vol hermann paris ein lazarsfeld smith varolin jumping coefficients multiplier ideals duke math fujino schwede takagi supplements ideal sheaves higher dimensional algebraic geometry rims bessatsu res inst math sci rims kyoto gabber notes geometric aspects dwork theory vol walter gruyter gmbh berlin goto herrmann nishida villamayor structure noetherian symbolic rees algebras manuscripta math hacon three dimensional minimal model program positive characteristic amer math soc hara geometric interpretation tight closure test ideals trans amer math soc electronic hara yoshida generalization tight closure multiplier ideals trans amer math soc electronic chiecchio enescu miller schwede hartshorne algebraic geometry new york graduate texts mathematics hartshorne generalized divisors gorenstein schemes proceedings conference algebraic geometry ring theory honor michael artin part iii antwerp vol hartshorne speiser local cohomological dimension characteristic ann math hochster foundations tight closure theory lecture notes course taught university michigan fall 
hochster huneke tight closure invariant theory theorem amer math soc hochster huneke infinite integral extensions big algebras ann math hochster huneke test elements smooth base change trans amer math soc huneke lyubeznik absolute integral closure positive characteristic adv math katzman lyubeznik zhang discreteness rationality jumping coefficients algebra katzman schwede singh zhang rings frobenius operators math proc cambridge philos soc exercises birational geometry algebraic varieties mori birational geometry algebraic varieties cambridge tracts mathematics vol cambridge university press cambridge collaboration clemens corti translated japanese original kunz noetherian rings characteristic amer math lazarsfeld positivity algebraic geometry ergebnisse der mathematik und ihrer grenzgebiete folge series modern surveys mathematics results mathematics related areas series series modern surveys mathematics vol springerverlag berlin classical setting line bundles linear series lipman rational singularities applications algebraic surfaces unique factorization inst hautes sci publ math lyubeznik applications local cohomology characteristic reine angew math lyubeznik smith strong weak equivalent graded rings amer math lyubeznik smith commutation test ideal localization completion trans amer math soc electronic locus positive characteristic celebration algebraic geometry clay math vol amer math providence constancy regions mixed test ideals algebra schwede algebra number theory schwede test ideals rings trans amer math soc schwede smith globally log fano varieties adv math schwede tucker behavior test ideals finite morphisms algebraic geom schwede tucker test ideals ideals computations jumping numbers alterations division theorems math pures appl schwede tucker zhang test ideals via single alteration discreteness rationality numbers math res lett test ideals algebras singh splinter rings characteristic math proc cambridge philos soc singh private communication smith multiplier ideal universal test ideal comm algebra special issue honor robin hartshorne takagi interpretation multiplier ideals via tight closure algebraic geom watanabe remarks concerning demazure construction normal graded rings nagoya math theorem positive characteristic inst math jussieu tasis dorado address department mathematics statistics georgia state university atlanta usa address fenescu department mathematical sciences university arkansas fayetteville address department mathematics university utah room salt lake city address schwede | 0 |
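Editorial aside on the row above (the paper on test ideals and anticanonical algebras): the text repeatedly appeals to the stable image of evaluation maps and to a test ideal built from a test element c, but the displayed formulas were lost in the text extraction. The LaTeX sketch below records the standard shape of that characterisation purely as a reading aid; the rounding conventions and the exact role of the pair (R, Delta) are assumptions based on the general literature on test ideals, not a quotation of this paper.

% Hedged sketch: stable-image description of the (big) test ideal of a pair (R, \Delta),
% assuming R is an F-finite normal domain and c is a test element; rounding conventions
% vary between sources, so the exponents below are indicative only.
\[
  \tau(R,\Delta) \;=\; \sum_{e \ge 0} \; \sum_{\phi} \phi\bigl(F^{e}_{*}\, c\,R\bigr),
  \qquad
  \phi \in \operatorname{Hom}_{R}\!\bigl(F^{e}_{*} R(\lceil (p^{e}-1)\Delta \rceil),\, R\bigr),
\]
\[
  \text{and, under the paper's finite-generation hypothesis, the images of the evaluation maps }
  \operatorname{Hom}_{R}\!\bigl(F^{e}_{*} R(\lceil (p^{e}-1)\Delta \rceil),\, R\bigr)
  \otimes F^{e}_{*} R \longrightarrow R
  \text{ stabilise for all sufficiently divisible } e.
\]

Here, as is standard, F^e_* denotes restriction of scalars along the e-th Frobenius, and "sufficiently divisible" is read with respect to an e_0 for which (p^{e_0}-1)(K_R + Delta) is Cartier.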
feb character degrees avinoam mann shown set powers prime includes occur set character degrees becomes interest see extent remains true consider particular class isaacs construction yields groups nilpotency class consider extreme recall group order said maximal class nilpotency class see lgm well developed theory groups group factor group order therefore irreducible characters degree suggested last section restrictions possible character degrees set group maximal class present note verifies conjecture indeed weaker assumptions maximal class results suffices assume derived subgroup index last assumption quite consequences structure given group secondary aim note derive see theorem propositions results going applied character degrees results theorem let irreducible character degree character degree remark satisfying maximal class character degrees odd primes however exist groups satisfying maximal class irreducible characters degrees constructed primes groups maximal class whose character degrees showing bound theorem best possible also easy see exist type theorem let maximal class irreducible characters degree higher characters degree theorem let satisfying equivalently irreducible characters degree proofs first quote theory groups maximal class proofs found lgm let maximal class class say write terms lower central series notations applied group maximal class encounter group denoted let denote corresponding subgroups typeset avinoam mann etc returning call major centralizer regular therefore see theory regular xip holds also order except one group wreath product two groups order xip xip termed exceptional metabelian exceptional group maximal subgroups different maximal class finally maximal class iff contains element next groups derived subgroup index since normal subgroups index contain commutator subgroup follows normal subgroup index factor group elementary abelian say elements independent modulo generate factor group generated image therefore index normal subgroup index similarly generated images therefore index either first case normal subgroup index recall denotes minimal number generators group theorem let let maximal subgroup maximal class hence equals either wrcp elementary abelian order maximal subgroups save one satisfy another maximal subgroup particular contains maximal subgroup either metacyclic contains one maximal subgroup note first claim already pointed exercise proof may well assume element commutation induces endomorphism image kernel since moreover argument shows quotients centres order hence maximal class maximal class order least since abelian major centralizer known groups subgroup index usually exception equality occurring wrcp elementary abelian order cases logp logp case strict inequality possible moreover case known groups maximal class orders satisfy proves character degrees write maximal subgroup let another maximal subgroup lemma therefore since possible since since argument implies maximal class see theorem shows maximal subgroup maximal class let two subgroups lying properly let normal maximal contradiction thus subgroups like determine distinct maximal subgroups maximal subgroup obtained way moreover thus therefore prove suppose contains maximal subgroup two maximal subgroups whose commutator factor groups orders least either cyclic metacyclic thus may assume write hand since group maximal class order least maximal subgroup also maximal class order least implies therefore similarly leads contradiction finally let one two groups order two one exponent 
central factor group group incapable therefore exponent claim hence proposition let subgroup index maximal subgroups three generators either one metacyclic contains maximal subgroup maximal class proof either stated theorem derived proof part last claim case fact factor group maximal class proposition let order least maximal subgroups one exception irreducible characters degree exceptional maximal subgroup exists maximal subgroups satisfy proof since irreducible characters degree proposition maximal subgroup characters another maximal subgroup means shows irreducible characters degree qed exceptional maximal subgroup may may exist maximal class metabelian subgroup abelian avinoam mann irreducible characters maximal subgroups degrees hand groups maximal class order constructed ones order constructed slattery difficult show maximal subgroups irreducibles degrees proofs theorems separate groups maximal class others proof theorem groups maximal class theorem may assume famous result characters degrees iff either contains abelian maximal subgroup groups maximal class first possibility means anyway abelian maximal subgroup moreover abelian maximal subgroup subgroup must thus assumption characters degree deg means properly contained therefore irreducibles degree exceeding therefore irreducible characters degrees degrees next let characters degree deg irreducibles degree thus prove claim may assume according characters degree odd iff one following four possibilities occur contains abelian subgroup index contains maximal subgroup iii maximal subgroup group maximal class impossible maximal subgroup iii means means either exceptional group order means abelian thus irreducible character degree least note assumption implies two indices first assume let last inequality shows therefore thus violates irreducible character degree least thus characters degrees characters degrees take obtain proceed proof rest theorem theorem since maximal class neither let first index thus theorem shows maximal subgroup fact characters degree implies therefore assume first let character degrees maximal subgroup abelian centre index therefore irreducible character deg hand therefore deg let index let proceed saw theorem two maximal subgroups implies thus abelian maximal subgroups normal subgroup index follows irreducible characters degree bigger degree must abelian subgroup index qed references groups prime power order vol gruyter berlin special class acta math endliche gruppen springer berlin character theory finite groups academic press san diego sets irreducible character degrees proc amer math soc lgm structure groups prime power order oxford university press oxford minimal characters normally monomial preparation groups whose irreducible representations degrees dividing pac math character degrees normally monomial maximal class character theory finite groups isaacs conference contemporary mathematics american mathematical society providence maximal class large character degree gaps preprint | 4 |
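Editorial aside on the preceding row (character degrees of p-groups of maximal class): the concrete degree bounds in its theorems were lost in the extraction. As an illustration consistent with the general theory — not a reconstruction of the paper's own statements — the regular wreath product of two groups of order p, which the row itself names as the exceptional group of maximal class, has character degrees that can be computed directly:

\[
  G \;=\; C_{p} \wr C_{p}, \qquad |G| = p^{\,p+1}, \qquad \text{nilpotency class } p \ (\text{maximal class}), \qquad |G:G'| = p^{2},
\]
\[
  \operatorname{cd}(G) = \{1,\, p\}, \qquad
  \#\{\chi \in \operatorname{Irr}(G) : \chi(1) = p\} \;=\; \frac{p^{\,p+1} - p^{2}}{p^{2}} \;=\; p^{\,p-1} - 1,
\]

since the abelian base subgroup $C_{p}^{\,p}$ has index $p$, so every irreducible character degree divides $p$, and the count of degree-$p$ characters follows from $\sum_{\chi}\chi(1)^{2} = |G|$ together with the $|G:G'| = p^{2}$ linear characters.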
preprint version published sep proceedings workshop dagli oggetti agli agenti semantical identifier using radial basis neural networks reinforcement learning christian napoli giuseppe pappalardo emiliano tramontana department mathematics informatics university catania viale doria catania italy napoli pappalardo tramontana huge availability documents digital form deception possibility raise bound essence digital documents way spread authorship attribution problem constantly increased relevance nowadays authorship attribution information retrieval analysis gained great importance context security trust copyright preservation work proposes innovative driven machine learning technique developed authorship attribution means preprocessing timeperiod related analysis common lexicon determine bias reference level recurrence frequency words within analysed texts train radial basis neural networks rbpnn classifier identify correct author main advantage proposed approach lies generality semantic analysis applied different contexts lexical domains without requiring modification moreover proposed system able incorporate external input meant tune classifier means continuous learning reinforcement introduction nowadays automatic attribution text author assisting information retrieval analysis become important issue context security trust copyright preservation results availability documents digital form raising deception possibilities bound essence digital reproducible contents well need new mechanical methods organise constantly increasing amount digital texts last decade field text classification attribution undergone new development due novel availability computational intelligence techniques natural language processing advanced data mining information retrieval systems machine learning artificial intelligence techniques agent oriented programming etc among techniques computer intelligence evolutionary computation methods largely used optimisation positioning problems agent driven clustering used advanced solution optimal management problems whereas problems solved mechatronical module controls agent driven artificial intelligence often used combination advanced data analysis techniques order create intelligent control systems means multi resolution analysis parallel analysis systems proposed order support developers classification analysis applied assist refactoring large software systems moreover techniques like neural networks nns used order model electrical networks related controls starting classification strategies well complex physical systems using several kinds hybrid approaches said works use different forms modeling clustering recognition purposes methods efficiently perform challenging tasks common computational methods failed low efficiency simply resulted inapplicable due complicated model underlying case study general machine learning proven promising field research purpose text classification since allows building classification rules means automatic learning taking basis set known texts trying generalise unknown ones machine learning nns promising field effectiveness approaches often lies correct precise preprocessing data
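To make the preprocessing idea sketched in the abstract and introduction above concrete, the following minimal Python sketch counts, for a given text, the recurrence frequency of words falling in predefined semantic groups. The group names and their member words are purely illustrative assumptions (the paper builds its groups with the help of WordNet), and the real system also consults additional lexica for words not found in the dictionary.

from collections import Counter
import re

# Illustrative, hard-coded semantic groups; the actual groups and their
# contents in the paper are built with WordNet and are not reproduced here.
WORD_GROUPS = {
    "science":  {"theory", "experiment", "energy", "relativity"},
    "politics": {"nation", "law", "government", "freedom"},
    "religion": {"faith", "soul", "moral", "spirit"},
}

def group_frequencies(text: str) -> dict:
    """Count how many words of the text fall into each predefined group,
    normalised by the total number of words (a recurrence frequency)."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for word in words:
        for group, vocabulary in WORD_GROUPS.items():
            if word in vocabulary:
                counts[group] += 1
                break  # each word contributes to at most one group
    total = max(len(words), 1)
    return {group: counts[group] / total for group in WORD_GROUPS}

if __name__ == "__main__":
    sample = "The theory of relativity changed how we think about energy."
    print(group_frequencies(sample))

The resulting frequency vector (one entry per group) is the kind of compact feature representation that is later fed to the classifier agent.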
definition semantic categories affinities rules used generate set numbers characterising text sample successively given input classifier typical text classification using nns takes advantage topics recognition however results seldom appropriate comes classify people belonging social group involved similar business classification texts different scientists field research politicians belonging party texts authored different people using technical jargon approach devise solution extracting analysed texts characteristics express style specific author obtaining kind information abstraction crucial order create precise correct classification system hand data abound context text analysis robust classifier rely input sets compact enough apt training process therefore data reflect averaged evaluations concern anthropological aspects historical period ethnicity etc work satisfies conditions extracting compact data texts since use preprocessing tool related analysis common lexicon tool computes bias reference system recurrence frequency word used analysed texts main advantage choice lies generality implemented semantical reference database text database training set known preprocessing biasing new data unknown rbpnn reinforcement learning local external text database wordnet lexicon fig general schema data flow agents developed system identifier applied different contexts lexical domains without requiring modification moreover order continuous updates complete renewals reference data statically trained would suffice purpose work reasons developed system able means continuous learning reinforcement proposed architecture also diminishes human intervention time thanks properties solution comprises three main collaborating agents first preprocessing extract meaningful data texts second classification means proper radial basis rbnn finally one adapting means feedforward rest paper follows section gives details implemented preprocessing agent based lexicon analysis section iii describes proposed classifier agent based rbnns introduced modifications structure reinforcement learning agent section reports performed experiments related results finally section gives background existing related works section draws conclusions algorithm find group word belongs count occurrences start import speech ext load dictionary ords load group database groups thisw ord thisw ord thisgroup thisw ord thisgroup load different lexicon thisw ord end else break end end thisw ord thisgroup end thisw ord end export ords groups stop fundamental steps said analysis see also algorithm followings xtracting semantics lexicon figure shows agents developed system preprocessing agent extracts characteristics given text parts see text database figure according known set words organised groups see reference database rbpnn agent takes input extracted characteristics properly organised performs identification new data appropriate training additional agent dubbed adaptive critic shown figure dynamically adapts behaviour rbpnn agent new data available firstly preprocessing agent analyses text given input counting words belong priori known groups mutually related words groups contain words pertain given concern built according semantic relations words hence assisted wordnet http import single text file containing speech import word groups predefined database set containing words group called dictionary compare word text words dictionary word exists dictionary relevant group returned word found search available lexicon word exists lexicon related 
group identified word unkown new lexicon loaded word found dictionary groups updated search occurrences word text occurrence found remove text increase group counter figure shows uml class diagram software system performing analysis class text holds text analysed class words represents known dictionary known words organised groups given class groups class lexicon holds several dictionaries iii rbpnn classifier agent work proposed use variation radial basis neural networks rbnn rbnns topology similar common feedforward neural networks ffnn backpropagation training algorithms bpta primary lexicon text get exist get words search update groups filter service search count update ffnn rbnn pnn rbpnn fig uml class diagram handling groups counting words belonging group difference lies activation function instead sigmoid function similar activation function statistical distribution statistically significant mathematical function selection transfer functions indeed decisive speed convergence approximation classification problems kinds activation functions used probabilistic neural networks pnns meet important properties preserve generalisation abilities anns addition functions preserve decision boundaries probabilistic neural networks selected rbpnn architecture shown figure takes advantage pnn topology radial basis neural networks rbnn used neuron performs weighted sum inputs passes transfer function produce output occurs neural layer ffnn network perceived model connecting inputs outputs weights thresholds free parameters model modified training algorithm networks model functions almost arbitrary complexity number layers number units layer determining function complexity ffnn capable generalise model separate input space various classes variable space equivalent separation different case ffnn create general model entire variable space insert single set inputs categories hand rbnn capable clustering inputs fitting class means radial basis function model general entire variable space capable act single variables variable space locates closed subspaces without inference remaining space outside subspaces another interesting topology provided pnns mainly ffnns also functioning bayesian networks fisher kernels replacing sigmoid activation function often used neural networks exponential function pnn compute nonlinear decision boundaries approaching bayes optimal classification moreover pnn generates accurate predicted target probability scores probabilistic meaning space equivalent attribute probabilistic score chosen points figure represented size points finally presented approach decided combine advantages rbnn pnn using called fig comparison results several types nns rbpnn includes maximum probability selector module rbpnn rbpnn architecture preserving capabilities pnn due topology capable statistical inference also capable clustering since standard activation functions pnn substituted radial basis functions still verifying fisher kernel conditions required pnn architecture variable space locate subspace points give probabilistic score figure shows representation behaviour network topology presented rbpnn structure topology rbpnn input first hidden layer exactly match pnn architecture input neurones used distribution units supply input values neurones first hidden layer historical reasons called pattern units pnn pattern unit performs dot product input pattern vector weight vector performs nonlinear operation result nonlinear operation gives output provided following summation layer common sigmoid 
function used standard ffnn bpta pnn activation function exponential neurone output exp represents statistical distribution spread given activation function modified substituted condition parzen window function still satisfied estimator order satisfy condition rules must verified chosen window function order obtain expected estimate expressed parzen window estimate means kernel space fig representation radial basis probabilistic neural network maximum probability selector module called window width bandwidth parameter corresponds width kernel general depends number available sample data estimator since estimator converges mean square expected value lim hpn lim var hpn represents mean estimator values var variance estimated output respect expected values parzen condition states convergence holds within following conditions sup lim lim hnd fig setup values proposed rbpnn number considered lexical groups number analysed texts number people possibly recognised authors units work neurones linear perceptron network training output layer performed rbnn however since number summation units small general remarkably less rbnn training simplified speed greatly increased output rbpnn shown figure given maximum probability selector module effectively acts output layer selector receives input probability score generated rbpnn attributes one author analysed text selecting probable author one maximum input probability score note links selector weighted weights adjusted training hence actual input product weight output summation layer rbpnn lim nhnd case preserving pnn topology obtain rbpnn capabilities activation function substituted radial basis function rbf rbf still verifies conditions stated follows equivalence vector weights centroids vector radial basis neural network case computed statistical centroids input sets given network name chosen radial basis function new output first hidden layer neurone parameter intended control distribution shape quite similar used second hidden layer rbpnn identical pnn computes weighted sums received values preceding neurones second hidden layer called indeed summation layer output summation unit wjk wjk represents weight matrix weight matrix consists weight value connection pattern units summation unit summation layer size rbpnn devised topology enables distribute different layers network different parts classification task pattern layer nonlinear processing layer summation layer selectively sums output first hidden layer output layer fullfills nonlinear mapping classification approximation prediction fact first hidden layer rbpnn responsibility perform fundamental task expected neural network order proper classification input dataset analysed texts attributed authors size input layer match exact number different lexical groups given rbpnn whereas size pattern units match number samples analysed texts number summation units second hidden layer equal number output units match number people interested correct recognition speakers figure reinforcement learning order continuously update reference database system statically trained would suffice purpose work since aim presented system expanding database text samples classification recognition purpose agent driven identification dynamically follow changes database new entry made related feature set biases change implies also rbpnn properly managed order ensure continuous adaptive control reinforcement learning moreover considered domain desirable human supervisor supply suggestions expecially system starts working human 
activities related supply new entries text sample database removal misclassifications made rbpnn used supervised control configuration see figure external control provided actions choices human operator rbpnn trained classical backpropagation learning algorithm also embedded reinforcement learning architecture back propagates learning evaluating correctness choices respect real word let error function results supported human verification vectorial deviance results supported positive human response assessment made agent named critic consider filtering step rbpnn output critic human supervisor acknowledging rejecting rbpnn classifications adaptive critic agent embedding long run simulates control activity made human critic hence decreasing human control time adaptive critic needs learn learning obtained modified backpropagation algorithm using error function hence adaptive critic implemented simple feedforward trained means traditional gradient descent algorithm weight modification activation neuron input neurone weighted wij result adaptive control determines whether continue training rbpnn new data whether last training results saved discarded runtime process results continuous adaptive learning hence avoiding classical problem polarisation overfitting figure shows developed learning system reinforcement according literature straight lines represent data flow training data fed rbpnn new data inserted supervisor output rbpnn sent critic modules also means delay operator functional modifications operated within system represented slanting arrows choices made human supervisor critic modify adaptive critic adjust weight combined output critic adaptive critic determines whether rbpnn undergo training epochs modify weights critic features data rbpnn fig adopted supervised learning model reinforcement slanting arrows represent internal commands supplied order control change status modules straight arrows represent data flow along model represents time delay module provides delayed outputs characteristics see section results given classification agent total number text samples used training classification agent validation text samples training validation different persons given speech einstein lewis shown figure given flexible structure implemented learning model word groups fixed modified added removed time external tuning activity using count words group instead counts system realises statistically driven classifier identifies main semantic concerns regarding text samples attributes concerns probable person relevant information useful order recognise author speech usually largely spread certain number word groups could indication cultural extraction heritage field study professional category etc implies exclude word group priori rbpnn could learn automatically enhance relevant information order classify speeches figure shows example classifier performances results generated rbpnn filter implemened probabilistic selector since rbpnn results probability shown performance text correctly attributed attributed specific person figure shows performances system including probabilistic selector case boolean selection involved correct identifications represented false positive identifications black marks missed identifications white marks validation purposes figure left right shows results according xperimental setup proposed rbpnn architecture tested using several text samples collected public speeches different people present past era text sample given preprocessing agent extract adaptive critic 
identifies performance classification result expected result lower negative values identify excess confidence attribution text person greater positive values identify lack confidence sense fig obtained performance classification system left right maximum probability selector choice mean grey color represents correct classifications white color represents missed classification black color false classifications system able correctly attribute text proper author missing assignments elated orks several generative models used characterise datasets determine properties allow grouping data classes generative models based stochastic block structures infinite hidden relational models mixed membership stochastic blockmodel main issue models type relational structure solutions capable describe since definition class generally reported models risk replicate existing classes new attribute added models would unable efficiently organise similarities classes cats dogs child classes general class mammals classes would replicated classification generates two different classes mammals class mammals cats class mammals dogs consequently order distinguish different races cats dogs would necessary multiply mammals class one identified race therefore models quickly lead explosion classes addition would either add another class handle specific use mixed membership model crossbred species another paradigm concerns latent feature relational model bayesian nonparametric model entity boolean valued latent features influence model relations relations depend covariant sets neither explicit known case study moment initial analysis authors propose sequential forward feature selection method find subset features relevant classification task approach uses novel estimation conditional mutual information candidate feature classes given subset already selected features used classifier independent criterion evaluating feature subsets data simulation battery energy storage used classification purposes recurrent nns pnns means theoretical framework based signal theory showing effectiveness neural network based approaches case study classification results given means probability hence use rbpnn training achieved reinforcement learning onclusion work presented system agent analyses fragments texts another agent consisting rbpnn classifier performs probabilistic clustering system successfully managed identify probable author among given list examined text samples provided identification used order complement integrate comprehensive verification system kinds software systems trying automatically identify author written text rbpnn classifier agent continuously trained means reinforcement learning techniques order follow potential correction provided human supervisor agent learns supervision developed system also able cope new data continuously fed database adaptation abilities collaborating agents reasoning based nns acknowledgment work supported project prime funded within por fesr sicilia framework project prisma funded italian ministry university research within pon framework eferences napoli pappalardo tramontana simplified firefly algorithm image search ieee symposium series computational intelligence ieee gabryel nowicki creating learning sets control systems using evolutionary method proceedings artificial intelligence soft computing icaisc ser lncs vol springer bonanno capizzi gagliano napoli optimal management various renewable energy sources new forecasting method proceedings international symposium power electronics 
electrical drives automation motion speedam ieee nowak analysis active module mechatronical systems proceedings mechanika icm kaunas lietuva kaunas university technology press napoli pappalardo tramontana hybrid predictor qos control stability proceedings advances artificial intelligence springer bonanno capizzi sciuto napoli pappalardo tramontana novel toolbox optimal energy dispatch management renewables igss using wrnn predictors gpu parallel solutions power electronics electrical drives automation motion speedam international symposium ieee nowak multiresolution derives analysis module mechatronical systems mechanika vol napoli pappalardo tramontana using modularity metrics assist move method refactoring large systems proceedings international conference complex intelligent software intensive systems cisis ieee pappalardo tramontana suggesting extract class refactoring opportunities measuring strength method interactions proceedings asia pacific software engineering conference apsec ieee december tramontana automatically characterising components concerns reducing tangling proceedings computer software applications conference compsac workshop quors ieee july doi napoli papplardo tramontana improving files availability bittorrent using diffusion model ieee international workshop enabling technologies infrastructure collaborative enterprises wetice june giunta pappalardo tramontana aspects annotations controlling roles application classes play design patterns proceedings asia pacific software engineering conference apsec ieee december calvagna tramontana delivering dependable reusable components expressing enforcing design decisions proceedings computer software applications conference compsac workshop quors ieee july doi giunta pappalardo tramontana aodp refactoring code provide advanced modularization design patterns proceedings symposium applied computing sac acm tramontana detecting extra relationships design patterns roles proceedings asianplop march capizzi napoli innovative hybrid neurowavelet method reconstruction missing data astronomical photometric surveys proceedings artificial intelligence soft computing icaisc springer bonanno capizzi sciuto napoli pappalardo tramontana cascade neural network architecture investigating surface plasmon polaritons propagation thin metals openmp proceedings artificial intelligence soft computing icaisc ser lncs vol springer napoli bonanno capizzi exploiting solar wind time series correlation magnetospheric response using hybrid approach proceedings international astronomical union vol capizzi bonanno napoli hybrid neural networks architectures soc voltage prediction new generation batteries storage proceedings international conference clean electrical power iccep ieee napoli bonanno capizzi hybrid approach prediction solar wind iau symposium capizzi bonanno napoli new approach batteries modeling local cosine power electronics electrical drives automation motion speedam international symposium june duch towards comprehensive foundations computational intelligence challenges computational intelligence springer capizzi bonanno napoli recurrent neural networkbased control strategy battery energy storage generation systems intermittent renewable energy sources proceedings international conference clean electrical power iccep ieee haykin neural networks comprehensive foundation prentice hall mika ratsch jason scholkopft muller fisher discriminant analysis kernels proceedings signal processing society workshop neural networks signal processing ieee 
specht probabilistic neural networks neural networks vol deshuang songde new radial basis probabilistic neural network model proceedings conference signal processing vol ieee zhao huang guo optimizing radial basis probabilistic neural networks using recursive orthogonal least squares algorithms combined algorithms proceedings neural networks vol ieee prokhorov santiago adaptive critic designs case study neurocontrol neural networks vol online available http javaherian liu kovalenko automotive engine torque ratio control using dual heuristic dynamic programming proceedings international joint conference neural networks ijcnn widrow lehr years adaptive neural networks perceptron madaline backpropagation proceedings ieee vol sep park harley venayagamoorthy optimal neurocontrol synchronous generators power system using neural networks ieee transactions industry applications vol sept nowicki snijders estimation prediction stochastic blockstructures journal american statistical association vol tresp peter kriegel infinite hidden relational models proceedings international conference uncertainity artificial intelligence uai airoldi blei xing fienberg mixed membership stochastic block models advances neural information processing systems nips curran associates miller griffiths jordan nonparametric latent feature models link prediction advances neural information processing systems nips curran associates vol somol haindl pudil conditional mutual information based feature selection classification task progress pattern recognition image analysis applications springer bonanno capizzi napoli remarks application rnn prnn simulation advanced battery energy storage proceedings international symposium power electronics electrical drives automation motion speedam ieee | 9 |
traces proofs proving concurrent programs safe apr chinmay subodh shibashis department computer science engineering indian institute technology delhi email chinmay svs shibashis sak flagi alse flagj alse scheduling cardinal reason difficulty proving correctness concurrent programs powerful proof strategy recently proposed show correctness programs approach captured dataflow dependencies among instructions interleaved execution threads dependencies represented inductive graph idfg nutshell denotes set executions concurrent program gave rise discovered dependencies idfgs transformed alternative finite automatons afas order utilize efficient tools solve problem paper give novel efficient algorithm directly construct afas capture dependencies concurrent program execution implemented algorithm tool called prooftrapar prove correctness finite state cyclic programs sequentially consistent memory model results encouranging compare favorably existing tools hile true flagi flag true section flagi hile true flagj flagi true section flagj fig peterson algorithm two processes shown hold true unbounded number traces trace sequence events corresponding interleaved execution processes program generated due unbounded number unfoldings loops notice events control locations events control locations respectively finite prefix trace interleaved execution events corresponding control location last instance event control location last instance event control location ordered one following two ways either appears appears resulted partitioning unbounded set traces set mere two traces appears final value variable thus making condition control location true case appears final value variable thereby making condition control location evaluate true hence trace conditions false simultaneously informal reasoning indicates processes never simultaneously enter critical sections thus proof correctness peterson algorithm demonstrated picking two traces mentioned set infinite traces proving correct general intuition proof single trace program result pruning large set traces consideration convert intuition feasible verification method need construct formal structure proof trace semantics structure includes set traces proof arguments equivalent proof inductive data flow graphs idfg proposed capture among events trace perform trace partitioning traces idfg ntroduction problem checking whether correctness property specification violated program implementation already known challenging sequential let alone programs implemented exploiting concurrency central reason greater complexity verification concurrent implementations due exponential increase number executions concurrent program threads instructions per thread executions sequentially consistent memory model common approach address complexity due exponential number executions trace partitioning powerful proof strategy presented utilized notion trace partitioning let take peterson algorithm figure convey central idea behind trace partitioning approach algorithm two processes coordinate achieve exclusive access critical section using shared variables process expressed interest enter order prove mutual exclusion property peterson algorithm must consider boolean conditions loops control locations property established one conditions false every execution program must lines allowed use sequential verification method construct proof given trace paper comment performance feasibility approach due lack implementation second contribution paper implementation form tool prooftrapar compare 
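A minimal Python sketch of the two-process mutual-exclusion protocol discussed above, with the per-process interest flags and the shared turn variable; the busy-wait loops play the role of the assume conditions in the traces. It only illustrates the structure of the interleaved executions (CPython happens to give a sequentially consistent interleaving for this toy run), and the counter used to check mutual exclusion is an addition of ours, not part of the protocol.

```python
import threading

flag = [False, False]
turn = 0
in_critical = 0          # used only to check the mutual-exclusion assertion

def process(i):
    global turn, in_critical
    j = 1 - i
    for _ in range(200):
        flag[i] = True                      # announce interest
        turn = j                            # yield priority to the other process
        while flag[j] and turn == j:        # the assume(...) conditions of the trace
            pass
        in_critical += 1                    # critical section
        assert in_critical == 1, "mutual exclusion violated"
        in_critical -= 1
        flag[i] = False                     # leave the critical section

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
```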
implementation tools domain threader lazycseq winners concurrency category software verification competitions held prooftrapar average performed order magnitude better threader times better paper organized follows section covers notations definitions programming model used paper section iii presents approach help example convey overall idea describes detail algorithms constructing proposed alternating finite automaton along correctness proofs section ends overall verification algorithm proof soundness completeness finite state concurrent programs section presents experimental results comparison existing tools namely threader section presents related work section concludes possible future directions init true abc bac abc bac acb cab bca cba fig comparison must proof correctness every iteration approach trace picked set traces yet covered idfg idfg constructed proof process repeated traces either covered idfg found intervening step involved idfg converted alternating finite automaton afa explain afa later sections suffices understand stage language accepted afa set traces captured corresponding idfg reason conversion leverage use operations subtraction complement set traces though goal paper verification concurrent programs work work crucial differences afa constructed directly proof trace without requiring idfg construction verification procedure built directly constructed afa shown sound complete used obtain proof correctness trace iii best knowledge provide first implementation proof strategy discussed example trace figure highlights key difference idfg afa conversion direct approach presented work note three events data independent hence every resulting trace permuting events abc also satisfies set hoare triple abc figure shows set traces admitted afa obtained idfg shown figure first iteration computed set clearly represent every permutation abc consequently iterations required converge afa represents traces admissible set contrast afa constructed directly approach hoare triple abc admits set traces shown figure hence example strategy terminates single iteration summarize contributions work follows reliminaries program model consider concurrent programs composed fixed number deterministic sequential processes finite set shared variables concurrent program quadruple finite set processes set automata one process specifying behaviour finite set constants appearing syntax processes function variables initial values process disjoint set local variables lvp let expp bexpp denote set expressions boolean expressions ranged exp constructed using shared variables local variables standard mathematical operators specification automaton quadruple qpinit assrnp finite set control states qpinit initial state assrnp relation specifying assertions must hold control state transition form opp opp assume lock evaluates exp current state assigns value lvp assume blocking operation suspends execution boolean expression evaluates false otherwise acts nop instruction used encode control path conditions program lock blocking operation suspends execution value equal otherwise assigns operation unlock achieved assigning shared variable operations deterministic nature present novel algorithm directly construct afa proof sequential trace finite state possibly cyclic concurrent program construction used give sound complete verification procedure along assume assume assume assume assume def def def assert def def lock def assume def operation terminates resulting program state satisfies given formula variable 
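The operations listed above (assignment, a blocking assume over a boolean expression, and lock/unlock encoded through a shared variable) can be mimicked by a tiny interpreter over a state dictionary. This is only a sketch of the intended semantics, not the tool's actual input format; the tuple encoding of operations is invented for illustration.

```python
def step(state, op):
    """Apply one operation to a program state (a dict of variable values).
    Returns the new state, or None when a blocking operation cannot fire."""
    kind = op[0]
    if kind == "assign":                 # ('assign', var, expr), expr a function of the state
        _, var, expr = op
        new = dict(state)
        new[var] = expr(state)
        return new
    if kind == "assume":                 # ('assume', cond): blocks while cond(state) is false
        _, cond = op
        return dict(state) if cond(state) else None
    if kind == "lock":                   # ('lock', lk): assume lk == 0, then set lk = 1
        _, lk = op
        if state[lk] != 0:
            return None
        new = dict(state)
        new[lk] = 1
        return new
    if kind == "unlock":                 # ('unlock', lk): reset the lock variable
        _, lk = op
        new = dict(state)
        new[lk] = 0
        return new
    raise ValueError(kind)

# Example: acquire a lock, raise a flag, release the lock.
s = {"lk": 0, "flag": 0}
for op in [("lock", "lk"), ("assign", "flag", lambda st: 1), ("unlock", "lk")]:
    s = step(s, op)
print(s)   # {'lk': 0, 'flag': 1}
```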
expression let denote formula obtained substituting free occurrences assume equality operator formulae represents syntactic equality every formula assumed normalized conjunctive normal form cnf use true false syntactically represent logically valid unsatisfiable formula weakest precondition axioms different program statements shown figure empty sequence statements denote skip following properties weakest preconditions property note property holds deterministic operation true programming model property let formulas logically implies every operation formula logically implies say formula stable respect statement logically equivalent paper use weakest preconditions check correctness trace respect safety assertion trace reaching safety assertion safe execution starting initial state either blocks terminate satisfying path conditions terminates resulting state satisfies following lemmas clearly define conditions using weakest precondition axioms declaring trace either safe unsafe detailed proofs given appendix denote trace obtained replacing every instruction form assume assert lemma trace initial program state safety property unsatisfiable execution starting either terminate terminates state satisfying lemma trace initial program state safety property satisfiable execution starting terminates state satisfying def fig weakest precondition axioms def skip def true turn true turn fig specification peterson algorithm execution two operations states always give behaviour examples paper use symbolic labels succinctly represent program operations example figure shows specification two processes peterson algorithm labels denote operations program variable res introduced specify mutual exclusion property safety property process sets variable inside critical section assertions assert res checked leaving critical section assertions hold every execution two processes mutual exclusion property holds assertions shown figure need checked state respectively tuple say elements represented function returns element tuple given function denotes another function except returns parallel composition memory model given concurrent program consisting processes define automaton init assrn represent parallel composition memory model qpn set states ranged init qpinit qpinit initial state transition relation models interleaving semantics formally opj iff exists qpj qpj opj state let assrnpi empty assrn conjunction assertions set relation assrn captures assertions need checked interleaved traces interest lies analyzing traces reach control points assertions specified mark states relation assrn defined accepting states every word accepted represents one execution leading control location least one assertion checked alternating finite automata afa weakest precondition alternating finite automata generalization nondeterministic finite automata nfa nfa five tuple set states ranged initial state set accepting states transition function state nfa given operation postcondition formula weakest precondition respect denoted weakest formula starting program state satisfies execution set words accepted inductively defined acc acc acc existential quantifier represents fact exist least one outgoing transition along gets accepted afa six tuple denoting alphabet initial state set accepting states respectively set states ranged transition function set words accepted state afa depends whether state existential state set universal state set existential state set accepted words inductively defined way nfa universal state set accepted words acc 
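The lemmas above reduce the safety of a single trace to a satisfiability question about a weakest precondition computed backwards over the trace: substitution for assignments and an implication for assume. Below is a small sketch with the z3 SMT solver of one standard way to run that check; the concrete trace and the assertion are invented for illustration and are not the paper's encoding verbatim.

```python
from z3 import Int, Implies, Not, Solver, substitute, sat

x, y = Int("x"), Int("y")

def wp(op, post):
    """Weakest precondition of one operation with respect to a postcondition."""
    kind = op[0]
    if kind == "assign":              # wp(v := e, Q) = Q[v/e]
        _, v, e = op
        return substitute(post, (v, e))
    if kind == "assume":              # wp(assume(b), Q) = b -> Q
        _, b = op
        return Implies(b, post)
    raise ValueError(kind)

def wp_trace(trace, post):
    for op in reversed(trace):        # scan the trace from the end, as in the AFA construction
        post = wp(op, post)
    return post

# Toy trace: assume(x >= 0); y := x + 1; safety assertion y > 0.
trace = [("assume", x >= 0), ("assign", y, x + 1)]
pre = wp_trace(trace, y > 0)

s = Solver()
s.add(Not(pre))                       # a violating initial state exists iff this is satisfiable
print("assertion can fail" if s.check() == sat else "trace is safe")
```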
acc acc notice change quantifier diagrams afa used paper annotate universal states symbol existential states symbol state let succ set automaton let language accepted initial state automaton denote length rev denote reverse amap existential state rmap longest sequence amap amap iteral ssn amap existential state iteral elf ssn amap rmap rmap ompound ssn otherwise fig transition function used definition alphabet ranged set instructions used program symbol acts identity element concatenation largest set states ranged every state annotated formula prefix denoted amap rmap respectively state initial state amap rmap iff either following two conditions hold amap amap rmap rmap largest suffix rmap formula amap stable respect amap amap rmap rmap amap state existential state universal state iff amap literal compound formula set accepting states iff rmap amap amap amap stable respect rmap function defined figure following point state added either annotated smaller rmap smaller formula compared states already present every formula trace finite length hence set states finite point construction state amap compound formula always universal state irrespective whether amap conjunction disjunction clauses reason behind decision clear shortly use afa inductively construct weakest precondition note assume every formula normalized cnf figure shows example trace abapqprcs peterson algorithm trace picked peterson iii pproach overall approach paper described following steps given concurrent program construct interleaved traces represented automaton defined subsection pick trace safety property say prove trace iii prove correct respect using lemma lemma generate set traces also provably correct let call set remove set set traces represented repeat step either traces proved correct erroneous trace found step iii procedure correctness achieved checking unsatisfiability however interested checking correctness also constructing set traces similar reasoning therefore instead computing directly weakest precondition axioms figure construct afa step achieved applying automatatheoretic operations complementation subtraction afa notion universal existential states afa helps finding set sufficient dependencies used weakest precondition computation trace satisfying dependencies gets captured afa subsequent subsections covers construction properties use afa detail constructing afa trace formula definition afa constructed trace program formula amap abapqprcs true abapqpr false abapq false turn false false res abapqprc abapq false turn abapq turn false false turn turn false false abap fig afa trace given figure false assume turn false assume turn fig trace peterson algorithm hmap specification figure prove correct respect def safety formula first construct later help derive afa shown figure state amap written inside rectangle representing state rmap written inside ellipse next state show steps illustrating construction definition amap rmap abapqprcs initial state transition created rule iteral ssn state annotated weakest precondition operation taken rmap respect amap operation picked way amap stable respect every operation present rmap transitions capture inductive construction weakest precondition given trace transition figure created rule amap amap rmap rmap transition created rule ompound ssn say states annotated subformulae amap example transitions transition follows rule iteral ssn note rmap empty hence point definition accepting state following reasoning states also set accepting states rule iteral elf ssn adds self 
transition state symbol amap stable respect example transitions following lemma relates rmap state set words accepted afa lemma given let afa satisfying definition every state afa condition rev rmap acc holds detailed proof lemma given appendix lemma uses reverse rmap statement amap hmap hmap hmap base case amap amap onj case amap amap isj case case fig rules hmap construction weakest precondition sequence constructed scanning end seen transition rule iteral ssn corollary rev also accepted afa definition rmap constructing weakest precondition constructing rules given figure used inductively construct assign formula hmap every state figure shows afa figure states annotated formula hmap formula shown ellipse beside every state better readability show rmap figure following rule base case hmap set false whereas hmap set false rule case hmap also set false applying rule isj case transition hmap set false similarly using rule onj case get hmap false finally hmap also set false hmap constructed inductively manner satisfies following property lemma let afa constructed trace post condition definition every state afa every word accepted state hmap logically equivalent rev amap present proof outline detailed proof given appendix first consider accepting states example states figure following definition accepting state adding transition rule iteral elf ssn false res false true false false false false turn false turn false false false false false false false turn turn false false false turn false false false false false false algorithm converting universal existential states preserving lemma data input afa amap result modified afa let state afa hmap unsatisfiable amap amap let unsatcore unsatcore iff hmap hmap minimal unsat core hmap create empty set foreach unsatcore create new universal state add set set amap amap set hmap hmap add transition setting end remove transition convert existential state add transition setting set universal states created one element unsatcore fig hmap construction running example enlarging set words accepted ery word accepted accepting state satisfies rev amap amap therefore setting hmap amap accepting states done rule base case completes proof accepting states converting universal states existential states figure shows example trace abcde obtained parallel composition program figure shows afa constructed lemma get false note unsatisfiable two ways derive unsatisfiability one due operation due operation followed operation example word enforces either two ways derive false weakest precondition example sequence adcbe accepted afa figure condition rev false follows false already captured afa figure note states figure annotated unsatisfiable hmap assertion seems sufficient take one branches argue unsatisfiability hmap hmap definition conjunction hmap hmap therefore convert universal state existential state modified afa accept adcbe let look algorithm see steps involved transformation algorithm picks universal state amap conjunction clauses subset successors sufficient make hmap unsatisfiable state figure one state minimal subsets successors algorithm creates universal state shown line algorithm easy see hmap also unsatisfiable adding transition afa algorithm sets amap amap construction every word accepted must accepted states satisfy lemma hence lemma continues hold newly created universal states well consider newly created transition line state amap logically implies amap represents subset original successors consider state transition created using rule ompound ssn let word 
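The conversion algorithm above needs, for a universal state, a minimal subset of successor formulas whose conjunction with the state's annotation is unsatisfiable. An unsat core from an SMT solver is the natural way to obtain such a subset; note that z3's cores are not guaranteed to be minimal, so a further minimization pass may be required in practice. The formulas below are placeholders.

```python
from z3 import Bool, Not, Solver, unsat

p, q = Bool("p"), Bool("q")
amap = p                                              # annotation of the universal state
succ_hmaps = {"s1": Not(p), "s2": q, "s3": Not(q)}    # hmap formula of each successor

s = Solver()
s.set(unsat_core=True)
s.assert_and_track(amap, "amap")
for name, f in succ_hmaps.items():
    s.assert_and_track(f, name)                       # track each successor formula separately

assert s.check() == unsat
core = {str(c) for c in s.unsat_core()}
chosen = [n for n in succ_hmaps if n in core]         # e.g. ['s1']: unsat already with s1 alone
print(chosen)
```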
accepted construction must universal state hence must accepted well using lemma inductively successor states induction formula size get amap hmap apply property depending whether amap conjunction disjunction amap replacing amap amap amap hmap hmap hmap completes proof note making universal state amap either conjunction disjunction allowed use property proof otherwise make existential state amap disjunction formulae prove lemma states hmap constructed using rule isj case lemma serves two purposes first checks correctness trace safety property afa constructed hmap unsatisfiable peterson example trace declared correct second guarantees every trace accepted afa present set traces also safe hence skip proving correctness altogether removing traces equivalent subtracting language afa language representing set traces natural question ask increase set accepted words afa preserving lemma false false false false false false false fig example trace false false true false false false res fig afa given figure iff false false false false false turn false false false false turn false false false false false turn fig afa figure modification algorithm algorithm check safety assertions concurrent program input concurrent program safety property map assrn result yes program safe else counterexample let bet automaton represents set executions defined section set tmp tmp empty let tmp safety assertion checked let afa constructed hmap satisfiable valid counterexample violating return else let afa modified proposed transformations tmp tmp rev rev rev end end return yes hmap hmap unsatisfiable literal amap amap rule nsat hmap hmap valid literal amap rule fig rules adding edges viz existential state word accepted say accepted least one state say using lemma hmap logically equivalent rev amap using unsatisfiability hmap hmap monotonicity property weakest precondition property get hmap logically equivalent rev amap transformation formally proved correct appendix adding transitions using monotonicity property weakest precondition modify adding transitions two states amap amap literals hmap hmap unsatisfiable exists symbol well amap logically implies amap edge labeled added transformation also preserves lemma following monotonicity property property used previous transformation similar argument holds hmap hmap valid amap amap holds rules adding edges shown figure figure shows afa figure modified transformations rule rule nsat adds edge symbol hmap hmap unsatisfiable amap logically implies amap rule also adds self loop operation self loop operation transformation algorithm removes transition states reachable consider trace rev abpqparcs accepted modified afa figure accepted original afa figure note abpqparcs unsatisfiable direct consequence lemma transformations presented need reason trace separately transformation formally proved correct appendix putting things together safety verification algorithm steps combined check executions concurrent program satisfy safety properties specified assertions proof following theorem given appendix theorem let finite state program without loops associated assertion maps assrnpi assertions program hold iff algorithm returns yes algorithm returns word least one assertion fails execution program prooftrapar threader handle larger number interleavings optimizations also selectively check representative set traces among set interleavings por based methods traditionally used bug finding recently extended efficiently using abstraction interpolants proving programs correct technique 
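The edge-adding rules above fire when one state's annotation logically implies another's, combined with an unsatisfiability or validity check on the hmap formulas; the implication itself is just one more unsatisfiability query, as in this small sketch.

```python
from z3 import Bool, And, Not, Solver, unsat

def entails(a, b):
    """Check a |= b as unsatisfiability of a AND NOT b."""
    s = Solver()
    s.add(And(a, Not(b)))
    return s.check() == unsat

p, q = Bool("p"), Bool("q")
# An extra edge from a state annotated with p AND q to one annotated with p is
# justified: the stronger annotation entails the weaker one, and monotonicity of
# the weakest precondition then keeps the lemma intact.
print(entails(And(p, q), p))    # True
print(entails(p, And(p, q)))    # False
```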
presented paper using afa possibly used keep track partial orders por based methods formalism called concurrent trace program ctp defined capture set interleavings corresponding concurrent trace ctp captures partial orders encoded trace corresponding ctp formula defined satisfiable iff feasible linearization partial orders encoded ctp violates given property afa also constructed trace unlike ctp captures different interleavings guarantee proof outline recently formalism called proposed capture set relations set executions relation used multiple tasks synchronization synthesis bug summarization predicate refinement since afa constructed algorithm also represented boolean formula universal states correspond conjunction existential states correspond disjunction encodes ordering relations among participating events interesting explore usages afa along lines fig comparison threader time seconds xperimental valuation implemented approach prototype tool prooftrapar tool reads input program written custom format future plan use parsers cil llvm remove dependency individual processes represented using finite state automata use automata library libfaudes carry operations automata library provide operations afa mainly complementation intersection implemented tool constructing afa trace first remove transitions afa followed adding additional edges afa using proposed transformations instead reversing afa line algorithm subtract nfa represents reversed language set traces avoids need reversing afa note convert afa nfa rather carry intersection complementation operations needed language subtraction operation directly afa tool uses theorem prover check validity formulae afa construction prooftrapar accessed repository https figure tabulates result verifying pthreadatomic category benchmarks using tool threader tools winners concurrency category software verification competition threader dash denotes tool finish analysis within minutes numbers bold text denote best time experiment versions programs labeled except lock unsafe version qrcu quick read copy update tool performed better two tools unsafe versions approach took time find erroneous trace compared exploration presence bugs shallow depth seem possible reason behind performance difference introducing priorities picking traces order make approach efficient left open future work onclusion uture ork presented trace partitioning based approach verifying safety properties concurrent program end introduced novel construction alternating finite automaton capture proof correctness trace program also presented implementation algorithm compared competitively existing tools plan extend approach parameterized programs programs relaxed memory models also plan investigate use interpolants weakest precondition axioms incorporate abstraction handling infinite state programs eferences brzozowski ernst equations regular languages finite automata sequential networks tcs clarke henzinger radhakrishna ryzhyk samanta tarrach preemptive scheduling using synchronization synthesis cav chandra kozen stockmeyer alternation acm january moura efficient smt solver tacas pages bernd opitz event system library farzan kincaid podelski inductive data flow graphs popl pages flanagan godefroid dynamic reduction model checking software popl pages godefroid methods verification concurrent systems approach problem springer gupta henzinger radhakrishna samanta tarrach succinct representation concurrent trace sets popl gupta popeea rybalchenko threader verifier programs cav pages elated ork 
verifying safety properties concurrent program well studied area automated verification tools use model checking based approaches employ optimizations partial order reductions por ppendix inverso tomasco fischer torre parlato bounded model checking programs via lazy sequentialization cav volume lncs pages springer lamport make multiprocessor computer correctly executes multiprocess programs ieee trans september peled one one model checking using representatives cav pages wachter kroening ouaknine verifying software impact fmcad pages ieee wang kundu ganai gupta symbolic predictive analysis concurrent programs ana cavalcanti dennisr dams editors formal methods volume lncs pages springer berlin heidelberg proof lemma prove induction base case unsatisfiable satisfies hence proved induction step let unsatisfiable following cases happen based unsatisfiable also unsatisfiable substituting get unsatisfiable using implies executing resultant state either terminate terminates state satisfying terminate execuction starting terminates state satisfying definition weakest precondition execution state satisfy hence proved assume unsatisfiable also unsatisfiable substituting get unsatisfiable using implies executing resultant state either terminate terminates state satisfying terminate execution terminate well terminates state satisfying execution blocks hence execution terminate terminates state satisfying hold must hold execution assume acts nop instruction resultant state satisfies hence proved lock weakest precondition lock obtained weakest precondition assignment assume instruction hence similar reasoning works case proof lemma proof let prove induction length base case length satisfiable satisfy hence proved induction step let following case happen based type satisfiable also satisfiable substituting get satisfiable execution terminates state satisfying definition weakest precondition state reached executing state satisfy hence proved assume satisfiable assume also satisfiable substituting assume get satisfiable execution terminates state satisfying words holds state reached executing therefore executing assume resultant state satisfies hence proved lock combination two cases accepting state successor state transition transition rule iteral ssn rmap rmap amap amap transition rule iteral elf ssn self loop transitions symbols applying gives rev rmap acc transition acc along gives rmap acc rearranging using get rev rmap acc equivalently rev rmap acc hence proved proof lemma proof use induction proof previous proof let use following ordering states two states lengths amap sub formula amap two states related order put order make total order clear smallest state total order must one accepting state ready proceed induction using total order base case definition accepting state afa construction point definition self loop transition rule rule iteral elf ssn know every word acc amap amap rule base case figure sets hmap amap states hence statement lemma follows accepting states induction step pick state one following holds universal state construction states transition let word accepted definition accepting set words universal states must accepted induction ordering smaller hence apply get rev amap hmap two cases arise based whether amap conjunction amap following rule onj case set hmap hmap rev amap hmap follows property using conjunction weakest precondition amap disjunction amap following rule onj case set hmap hmap rev amap hmap follows property using disjunction weakest precondition proof lemma proof use 
induction proof let use following ordering states two states lengths amap sub formula amap two states related order put order make total order clear smallest state total order must one accepting state ready proceed induction using total order base case every accepting state point definition condition amap amap holds every rmap transition rule iteral elf ssn afa self transition must rmap hence condition rev rmap acc holds transitions taken order construct required word induction step following possibilities exist state universal state construction states transition induction ordering smaller hence apply get rev rmap acc however transition rule ompound ssn rmap rmap rmap hence rev rmap acc definition acc universal state acc intersection sets acc hence get required result viz rev rmap acc existential state accepting state base case holds consider case existential state accepting state argument used base case holds accepting state outgoing transition form rule iteral ssn consider word acc must form amap amap self transitions constructed rule iteral elf ssn acc therefore rev amap rev amap rev amap rev amap using rev amap using weakest precondition definition rev amap using transition rule iteral ssn acc hmap applying hmap hmap done rule case prove case well accepted state hmap logically equivalent rev amap proof result adding edges transformation use ordering among states done earlier proofs transition guarantee states set smaller hence possible apply directly therefore proof apply induction length accepted state induction step let acc either acc exists state acc amap amap based transition following added transformation virtue one following conditions hmap hmap unsatisfiable amap amap rule rule nsat rev amap logically equivalent hmap using property conjunction part assumption amap amap get rev amap unsatisfiable hmap using rev amap unsatisfiable hmap replacing get required proof hmap hmap valid amap amap rule rule rev amap logically equivalent hmap using property disjunction part assumption amap amap get rev amap valid hmap using replacing get required result hence proved transition already use reasoning used proof lemma show rev amap logically equivalent hmap similar argument goes proof lemma new transition gets added states result transformation proof correctness lemma let automaton constructed trace post condition defined definition modified algorithm every state afa every word accepted state hmap logically equivalent rev amap proof proof lemma similar proof lemma given appendix highlight changes proof note transformation converts universal states existential states let one state converted universal existential state let original transition afa got modified sun sui newly created universal states line algorithm construction hmap sui unsatisfiable sun let word accepted converting existential state acceptance conditions must accepted least one state say sum set sun sum get amap sum hmap sum construction amap implies amap sum fact along monotonicity property weakest precondition property get amap unsatisfiable hence hmap proof correctness lemma let trace post condition modified adding every state automaton constructed defined definition edges discussed afa every word proof theorem proof let first prove algorithm terminates finite state programs finite state programs number possible assertions used construction afa finite hence finite number different afa possible implies termination algorithm following lemma fact amap every word accepted afa equivalently written acc satisfies rev hmap lemma fact 
rmap get rev acc combining get rev rev hmap equivalently hmap hmap satisfiable line satisfiable well following lemma got valid error trace returned line hmap unsatisfiable lemma trace provably correct apply transformations section afa increase set words accepted final afa reversed subtracted set executions seen far lemma ensures words condition holds therefore none violate starting initial state therefore every iteration correct set executions removed set executions therefore loop terminates executions proved correct | 6 |
results expansions apr ulyanov aoshima fujikoshi abstract get computable error bounds generalized expansions quantiles statistics provided computable error bounds type expansions distributions statistics known results illustrated examples introduction main results statistical inference fundamental importance obtain sampling distribution statistics however often encounter situations exact distribution obtained closed form even obtained might little use complexity one practical way getting around problem provide reasonable approximations distribution function quantiles along extra information possible errors made help type expansions recently interest type expansions stirred intensive study var value risk models financial mathematics financial risk management see mainly studied asymptotic behavior expansions mentioned means accuracy approximation distribution statistics quantiles given form order respect parameter usually number observations dimension observations paper construct error bounds words computable error bounds type expansions error approximation prove upper bounds dependence perhaps moment characteristics observations get bounds condition similar nonasymptotic results already known accuracy approximation distributions statistics type expansions let univariate random variable continuous distribution function exists key words phrases computable bounds results cornishfisher expansions work supported rscf grant ulyanov aoshima fujikoshi called lower point strictly increasing inverse function well defined point uniquely determined also speak quantiles without reference particular values meaning values given even general case necessarily continuous strictly increasing define inverse function formula inf nondecreasing function defined interval let sequence distribution functions let admit type expansion ece powers density function limiting distribution function important approach problem approximating quantiles use asymptotic relation let corresponding quantiles respectively write denote solutions terms terms respectively use ece obtain formal solutions form cornish fisher obtained first terms expansions standard normal distribution function called expansions cfe concerning cfe random variables obeying limit laws family pearson distributions see hill davis gave general algorithm obtaining term cfe analytic function usually cfe applied following form results expansions known see find explicit expressions soon taylor expansions obtain provided smooth enough functions following theorems show could expressed terms moreover show kind bounds get soon bounds theorem suppose distribution function statistic remainder term exists constant let upper points respectively density function limiting distribution min theorem notation theorem assume remainder term exists constant let monotone increasing transform let upper points respectively min ulyanov aoshima fujikoshi theorem use notation theorem let function inverse max moreover remark main assumption theorems distributions statistics distributions transformed statistics approximations computable error bounds many papers kind results requires technique different asymptotic results methods series papers got results wide class statistics including multivariate scale mixtures manova tests considered well case high dimensions case dimension observations sample size comparable results included book see also remark results theorems could extended whole range follows fact expansion converge uniformly see corresponding example section remark theorem 
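The formal solutions x_n(u) mentioned above correct the quantile of the limiting law with polynomial terms built from the expansion; for a standard normal limit, the classical first-order Cornish–Fisher correction adds a skewness term. A small numerical sketch follows (it assumes scipy is available); the skewness value is made up, and the formula shown is the textbook one-term version rather than the general expansion treated in the theorems.

```python
from scipy.stats import norm

def cornish_fisher_quantile(alpha, skew):
    """First-order Cornish-Fisher approximation of the alpha upper quantile of a
    standardized statistic with the given skewness: u + (u**2 - 1) * skew / 6."""
    u = norm.ppf(1.0 - alpha)                 # quantile of the limiting normal law
    return u + (u**2 - 1.0) * skew / 6.0

# Example: 5% upper quantile of a slightly right-skewed statistic.
print(norm.ppf(0.95))                          # about 1.645, the normal approximation
print(cornish_fisher_quantile(0.05, 0.3))      # shifted upwards by the skewness term
```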
required existence monotone increasing transform distribution transformed statistic approximated limit distribution better way distribution original statistic call transformation bartlett type correction see corresponding examples section remark according function theorem could considered asymptotic expansion order proofs main results proof theorem mean value theorem min definition get results expansions therefore hand follows implies similarly therefore proved theorem follows theorem min min thus using get statement theorem proof theorem easy see sufficient apply theorem transformed statistic proof theorem obtain using mean value theorem point interval min max theorem therefore get min max since properties derivatives inverse functions relations imply representation follows examples gave sufficient conditions transformation bartlett type correction see remark wide class statistics allowing following represantion distribution function chisquared distribution degrees freedom coefficients satisfy relation examples statistic ulyanov aoshima fujikoshi follows likelihood ratio test statistic trace criterion trace criterion test statistics multivariate linear hypothesis normality score test statistic hotelling statistic nonnormality results extended interested null distribution hotelling generalized statistic defined trsh independently distributed wishart distributions identity operator respectively theorem proved computable error bound constant gave expicit formula dependence therefore according take case bartlett type correction clear invertable apply theorem examples numerical calculations comparisons approximation accuracy see one example connected sample correlation coefficient two vectors let normal distribution zero mean identity covariance matrix sample correlation coefficient ppn proved supx results expansions easy see take bartlett type correction form inverse function defined formula apply theorem references bol shev asymptotically pearson transformations theor probab christoph ulyanov fujikoshi accurate approximation correlation coefficients short expansion statistical applications springer proceedings mathematics statistics cornish fisher moments cumulants specification distributions rev inst internat enoki aoshima transformations improved approximations proc res inst math kyoto enoki aoshima transformations improved asymptotic approximations accuracy sut journal mathematics fisher cornish percentile points distributions known cumulants amer statist fujikoshi ulyanov error bounds asymptotic expansions wilks lambda distribution journal multivariate analysis fujikoshi ulyanov accuracy approximations location scale mixtures journal mathematical sciences fujikoshi ulyanov shimizu multivariate statistics highdimensional approximations wiley series probability statistics john wiley sons hoboken fujikoshi ulyanov shimizu error bounds asymptotic expansions multivariate scale mixtures applications hotelling generalized journal multivariate analysis fujikoshi ulyanov shimizu error bounds asymptotic expansions distribution multivariate scale mixture hiroshima mathematical journal hall bootstrap edgeworth expansion new york hill davis generalized asymptotic expansions cornishfisher type ann math jaschke context approximations risk ulyanov aoshima fujikoshi ulyanov expansions international encyclopedia statistical science ulyanov fujikoshi approximations transformed distributions statistical applications siberian mathematical journal ulyanov fujikoshi accuracy improved georgian mathematical 
journal ulyanov fujikoshi shimizu nonuniform error bounds asymptotic expansions scale mixtures mild moment conditions journal mathematical sciences ulyanov wakaki fujikoshi bound high dimensional asymptotic approximation wilks lambda distribution statistics probability letters wakaki fujikoshi ulyanov asymptotic expansions distributions manova test statistics dimension large hiroshima mathematical journal ulyanov faculty computational mathematics cybernetics moscow state university moscow russia national research university higher school economics hse moscow russia address vulyanov aoshima institute mathematics university tsukuba tsukuba ibaraki japan address aoshima fujikoshi department mathematics hiroshima university japan address fujikoshi | 10 |
confidence score neural network classifiers sep amit mandelbaum school computer science engineering hebrew university jerusalem israel daphna weinshall school computer science engineering hebrew university jerusalem israel abstract scores used proxies example margin svm classifiers reliable measurement confidence classifiers predictions important many applications therefore important part classifier design yet although deep learning received tremendous attention recent years much progress made quantifying prediction confidence neural network classifiers bayesian models offer mathematically grounded framework reason model uncertainty usually come prohibitive computational costs paper propose simple scalable method achieve reliable confidence score based data embedding derived penultimate layer network investigate two ways achieve desirable embeddings using either loss adversarial training test benefits method used classification error prediction weighting ensemble classifiers novelty detection tasks show significant improvement traditional commonly used confidence scores trying evaluate confidence neural network classifiers number scores commonly used one strength activated output unit followed softmax normalization closely related ratio activities strongest second strongest units another negative entropy output units minimal units equally probable often however scores provide reliable measure confidence introduction classification confidence scores designed measure accuracy model predicting class assignment rather uncertainty inherent data generative classification models probabilistic nature therefore provide confidence scores directly discriminative models hand direct access probability prediction instead related important reliably measure prediction confidence various contexts medical diagnosis decision support systems important know prediction confidence order decide act upon example confidence certain prediction low involvement human expert decision process may called another important aspect real world applications ability recognize samples belong known classes also improved reliable confidence score even irrespective application context reliable prediction confidence used boost classifier performance via methods ensemble classification context better confidence score improve final performance classifier derivation good confidence score therefore part classifier design important component classifiers design order derive reliable confidence score classifiers focus attention empirical observation concerning neural networks trained classification shown demonstrate parallel useful embedding properties specifically common practice days treat one upstream layers network representation embedding layer layer activation used representing similar objects train simpler classifiers svm shallower nns perform different tasks related identical original task network trained confidence score neural network classifiers computer vision embeddings commonly obtained training deep network recognition large database typically imagenet deng embeddings shown provide better semantic representations images compared traditional image features number related tasks including classification small datasets sharif razavian image annotation donahue structured predictions given semantic representation one compute natural probability distribution described section estimating local density embedding space estimated density used assign confidence score test point using likelihood belong assigned class note however 
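The abstract above lists the usual scores: the strength of the strongest softmax output, the gap (or ratio) between the two strongest units, and the negative entropy of the output distribution. A numpy sketch of these three quantities for a single logit vector; the example logits are arbitrary, and the margin is taken here as a difference rather than a ratio.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def baseline_scores(logits):
    p = softmax(np.asarray(logits, dtype=float))
    top2 = np.sort(p)[-2:]
    return {
        "max_prob": top2[1],                                   # strength of the strongest unit
        "margin": top2[1] - top2[0],                           # gap between the two strongest units
        "neg_entropy": float(np.sum(p * np.log(p + 1e-12))),   # closest to zero when one class dominates
    }

print(baseline_scores([2.0, 0.5, 0.1]))   # confident prediction
print(baseline_scores([0.7, 0.6, 0.5]))   # much less confident
```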
commonly used embedding discussed associated network trained classification may impede suitability measure confidence reliably fact training neural networks metric learning often used achieve desirable embeddings weston schroff hoffer ailon tadmor since goal improve probabilistic interpretation embedding essentially based local point density estimation distance points may wish modify loss function add term penalizes violation pairwise constraints hadsell experiments show modified network indeed produces better confidence score comparable classification performance surprisingly directly designed purpose show networks trained adversarial examples following adversarial training paradigm szegedy goodfellow also provide suitable embedding new confidence score first contribution therefore new prediction confidence score based local density estimation embedding space neural network score computed every network order score achieve superior performance necessary slightly change training procedure second contribution show suitable embedding achieved either augmenting loss function trained network term penalizes similarity loss using adversarial training importance latter contribution two fold firstly first show density image embeddings improved indirect adversarial training perturbations addition improved word embedding quality shown miyato direct adversarial training perturbations secondly show section adversarial training improves results imposing much lighter burden hyperparameters tune compared loss new confidence score evaluated comparison scores using following tasks performance binary classification task identifying class prediction correct incorrect see section training ensemble classifiers classifier prediction weighted new confidence score see section iii novelty detection confidence used predict whether test point belongs one known classes train set see section empirical evaluation method described section using datasets different network architectures used previous work using specific datasets method achieves significant improvement tasks compared recent method shown improve traditional measures classification confidence dropout gal ghahramani score achieves better results also maintaining lower computational costs prior work bayesian approach seeks compute posterior distribution parameters neural network used estimate prediction uncertainty mackay neal however bayesian neural networks always practical implement computational cost involved typically high accordance method referred gal ghahramani proposed use dropout test time bayesian approximation neural network providing cheap proxy bayesian neural networks lakshminarayanan proposed use adversarial training improve uncertainty measure entropy score neural network still basic one common confidence scores neural networks derived strength activated output unit rather normalized version also called softmax output max margin confidence score handles better situation one class probable negative entropy normalized network output zaragoza buc compared scores well complex ones tibshirani demonstrating somewhat surprisingly empirical superiority two basic methods described previous paragraph amit mandelbaum daphna weinshall ensembles models used improve overall performance final classifier see reviews dietterich many ways train ensemble boosting bagging also many ways integrate predictions classifiers ensemble including average prediction voting discussed bauer kohavi ensemble methods use confidence score either weight predictions different 
classifiers average weighting confidence voting novelty detection task determine whether test point belongs known class label another problem becomes relevant ever increasing availability large datasets see reviews markou singh pimentel recent work vinokurov weinshall task also highly relevant real world applications classifier usually exposed many samples belong known class note novelty detection quite different learning classes examples zero shot learning palatucci new confidence score propose next new confidence score discuss used boost classification performance ensemble methods dealing novelty detection new confidence score neural network classifiers confidence score based estimation local density induced network points represented using effective embedding created trained network one upstream layers local density point estimated based euclidean distance embedded space point nearest neighbors training set specifically let denote embedding defined trained neural network classifier let xjtrain denote set neighbors training set based euclidean distance embedded space let denote corresponding class labels points probability space constructed customary assuming likelihood two points belong class proportional exponential negative euclidean distance accordance local probability point belongs class proportional probability belongs class subset points belong class based local probability confidence score assignment point class defined follows xjtrain xjtrain score monotonically related local density similarly labeled train points neighborhood henceforth referred distance note intuitively might beneficial add scaling factor distance mean distance found deteriorating effect line related work salakhutdinov hinton two ways achieve effective embedding mentioned section order achieve effective embedding helps modify training procedure neural network classifier simplest modification augments network loss function training additional term resulting loss function linear combination two terms one classification denoted lclass another pairwise loss embedding denoted ldist defined follows lclass ldist ldist ldist defined max desirable embedding also achieved adversarial training using fast gradient method suggested goodfellow method given input target neural network parameters adversarial examples generated using sign lclass step adversarial example generated point batch current parameters network classification loss minimized regular adversarial examples although originally designed improve robustness method related measures density count correct neighbors inverse distance behave similarly perform comparably confidence score neural network classifiers seems improve network embedding purpose density estimation possibly along way increases distance pairs adjacent points different labels implementation details ldist defined pairs points denoted training minibatch set sampled replacement training points minibatch half many pairs size minibatch experiments lclass regular cross entropy loss note also tried loss functions limit distance points class exactly hoffer ailon tadmor however functions produced worse results especially dataset many classes finally note tried using loss adversarial training together training network also produced worse results alternative confidence scores given trained network two measure usually used evaluate classification confidence max margin maximal activation normalization output layer network entropy negative entropy activations output layer network noted empirical study 
zaragoza buc showed two measures typically good existing method evaluation classification confidence two recent methods shown improve reliability confidence score based entropy mcdropout gal ghahramani adversarial training lakshminarayanan goodfellow terms computational cost adversarial training increase sometimes double training time due computation additional gradients addition adversarial examples training set hand change training time increases test time orders magnitude typically methods complementary approach focus modifications actual computation network either train test time done evaluate confidence using entropy score show experiments adversarial training combined proposed confidence score improves final results significantly method computational analysis unlike two methods described adversarial training confidence score takes existing network computes new confidence score network embedding output activation use network without adversarial training dropout loss function network suitably augmented see discussion empirical results section show score always improves results entropy score given network train test computational complexity considering loss tadmor showed computing distances training neural networks negligible effect training time alternatively using adversarial training additional computational cost incurred mentioned hand fewer hyper parameters left tuning test time method requires carrying embeddings training data also computation nearest neighbors sample nearest neighbor classification studied extensively past years consequently many methods perform either precise approximate reduced time space complexity see gunadi recent empirical comparison main methods experiments using either condensed nearest neighbours hart density preserving sampling budka gabrys able reduce memory requirements train set original size without affecting performance point additional storage required nearest neighbor step much smaller size networks used classification increase space complexity became insignificant regards time complexity recent studies shown modern gpu used speed nearest neighbor computation orders magnitude garcia arefin also showed approximation recall accomplished times faster compared precise combining reductions space time note even large dataset including example images embedded dimensional space computation complexity nearest neighbors test sample requires operations comparable even much faster single forward run test sample modern relatively small resnets parameters thus method scales amit mandelbaum daphna weinshall well even large datasets ensembles classifiers many ways define ensembles classifiers different ways put together focus ensembles obtained using different training parameters single training method specifically means train several neural networks using random initialization network parameters along random shuffling train points henceforth regular networks refer networks trained classification regular loss distance networks refer networks trained loss function defined networks refer networks trained adversarial examples defined ensemble methods differ weigh predictions different classifiers ensemble number options common use see recent review accordance used comparison experimental evaluation section softmax average simple voting weighted softmax average softmax vector multiplied related prediction confidence score confidence voting confident network gets votes dictator voting decision confident network prevails evaluate methods weights defined either entropy 
score distance score defined novelty detection novelty detection seeks identify points test set belong classes present train set evaluate performance task train network known benchmark dataset augmenting test set test points another dataset includes different classes confidence score used differentiate known unknown samples binary classification task therefore evaluate performance using roc curves experimental evaluation version svhn netzer cases commonly done data preprocessed using global contrast normalization zca whitening method data augmentation used svhn svhn also use additional labeled hand cropping flipping used check robustness method heavy data augmentation experiments networks used elu clevert activation used network suggested clevert following architecture denotes convolution layer kernels size stride denotes layer window size stride denotes fully connected layer output units last layer replaced training applied dropout srivastava max pooling layer excluding first last convolution corresponding drop probabilities svhn dataset used following architecture networks trained distance loss batch randomly picked pairs points least batch included pairs points class margin set cases parameter set rest training parameters found supplementary material distance score observed number nearest neighbors could set maximum value number samples class train data also observed smaller numbers even often worked section empirically evaluate benefits proposed approach comparing performance new confidence score alternative existing scores different tasks described experimental settings evaluation used data sets krizhevsky hinton coates note reported results denoted datasets often involve heavy augmentation study order able exhaustive comparisons described opted scenario flexible yet informative enough purpose comparison different methods therefore numerical results compared empirical studies used similar settings specifically selected commonly used architectures achieve good performance close results modern resnets yet flexible enough extensive evaluations confidence score neural network classifiers table auc results correct classification conf score margin entropy distance classifier acccuracy reg dist mcd classifier acccuracy reg dist mcd svhn accuracy reg dist table legend leftmost column margin entropy denote commonly used confidence scores described section distance denotes proposed method described section second line reg denotes networks trained entropy loss dist denotes networks trained distance loss defined denotes networks trained adversarial training defined mcd denotes applied networks normally trained entropy loss since network trained svhn trained without dropout mcd applicable table auc results correct classification ensemble networks confidence score max margin entropy distance distance reg dist reg dist svhn reg dist table legend notations similar described legend table one distinction distance denotes regular architecture distance score computed independently network pair using embedding distance denotes hybrid architecture one network pair fixed distance network embedding used compute distance score prediction second network pair well general results reported sensitive specific values listed observed minor changes changing values margin proposed gal ghahramani used dropout following manner trained network usual computed predictions using dropout test repeated times test example average activation delivered output adversarial training used following goodfellow fixing experiments 
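A minimal sketch of the distance-based confidence score discussed in the entry above may help fix ideas. This is not the authors' implementation: `embed` is a hypothetical callable standing in for the network's penultimate-layer activations, the nearest-neighbour search is brute force (the entry itself notes that condensed sets, approximate search or GPU k-NN would be used at scale), and the returned value is simply the fraction of the k nearest training embeddings whose label agrees with the predicted class.

```python
import numpy as np

def knn_confidence(embed, train_x, train_y, test_x, test_pred, k=50):
    """Confidence of each prediction = fraction of the k nearest training
    points in embedding space that carry the predicted label.

    embed     : hypothetical callable mapping inputs to an (n, d) embedding
    train_x   : training inputs;  train_y  : their labels (1-D int array)
    test_x    : test inputs;      test_pred: labels predicted by the classifier
    """
    train_e = embed(train_x)                                   # (N, d)
    test_e = embed(test_x)                                     # (M, d)
    # Brute-force squared Euclidean distances, shape (M, N); a sketch only.
    d2 = ((test_e[:, None, :] - train_e[None, :, :]) ** 2).sum(axis=-1)
    nn = np.argsort(d2, axis=1)[:, :k]                         # k nearest per test point
    agree = train_y[nn] == test_pred[:, None]                  # (M, k) booleans
    return agree.mean(axis=1)                                  # confidence in [0, 1]
```

Capping k at the per-class training-set size, or smoothing the agreement ratio, are the kind of variants the entry reports as working comparably well.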
error prediction labels first compare performance confidence score binary task evaluating whether network predicted classification label correct results independent actual accuracy note accuracy comparable achieved resnets using augmentation using regular training data svhn see huang example performance binary task evaluated using roc curves computed separately confidence score results three datasets seen table cases proposed distance score computed suitably trained network achieves significant improvement alternative scores even enhanced using either adversarial training test distance score evaluate performance ensemble two networks results shown table distance score achieves significant improvement methods also note difference distance score computed distance networks entropy score computed adversarially trained networks much higher compared difference using one network show section adversarial training typically leads decreased performance using ensemble networks relying entropy score probably due decrease variance among classifiers observation supports added value proposed confidence score final note also used hybrid architecture using matched pair one classification network kind second distance network embedding defined distance network used compute distance score predictions first classification network surprisingly method achieves amit mandelbaum daphna weinshall figure accuracy using ensemble networks top left top right svnh bottom denotes number networks ensemble absolute accuracy marked left shown successful ensemble methods among methods evaluated blue yellow solid lines see text methods use distance score including best performing method set red dotted line denoted baseline differences accuracy two top performers top baseline method shown using bar plot marked right standard deviation difference least repetitions best results svhn comparable best result method used later section improve accuracy running ensemble networks investigation phenomenon lies beyond scope current study ensemble methods order evaluate improvement performance using confidence score direct integration classifiers ensemble used common ways define integration procedure ways construct ensemble comparisons number networks ensemble remained fixed experiments included following ensemble compositions regular networks distance networks adversarially trained networks networks networks belong one kind networks regular distance remaining networks belong another kind spanning combinations described section predictions classifiers ensemble integrated using different criteria general found methods use distance score including methods used confidence score prediction weighting performed less well simple average softmax activation method section otherwise best performance obtained using weighted average method section weights defined distance score variants also checked two options obtaining distance score network defined confidence score light advantage demonstrated hybrid networks shown section pair networks different kinds distance score computed using embedding one networks pair mcdropout used section due high computational cost experiments included variants weighting options cases shown following description results order improve confidence score neural network classifiers readability combination achieving best performance combination achieving best performance using adversarial training entails additional computational load train time ensemble variant achieving best performance without using distance score baseline 
ensemble average using adversarial training without distance score additional results conditions tested found supplementary material gain better statistical significance experiment repeated least times overlap networks fig shows ensemble accuracy methods mentioned using datasets clearly seen weighting predictions based distance score improves results significantly best results achieved combining distance networks adversarial networks significant improvement ensemble one kind networks shown graph still note importantly distance score used weight kind networks since adversarial training always applicable due computational cost train time show combination distance networks regular networks also lead significant improvement performance using distance score hybrid architecture described section finally note adversarial networks alone achieve poor results using original ensemble average demonstrating value distance score improving performance ensemble adversarial networks alone svhn results dataset also shown significant datasets partly due high initial accuracy still consistent demonstrating power robustness distance score novelty detection finally compare performance different confidence scores task novelty detection task confidence score used decide another binary classification problem test example belong set classes networks trained rather unknown class performance binary classification task evaluated using corresponding roc curve confidence score used two contrived datasets evaluate performance task following experimental construction suggested lakshminarayanan first experiment trained network dataset tested svhn test sets second experiment switched tween datasets changed trained network making svhn known dataset novel one task requires discriminate known novel datasets comparison computed novelty one often svm classifier using embeddings novelty thus computed showed much poorer performance possibly dataset involves many classes one class svm typically used single class therefore results included table auc results novelty detection confide score max margin entropy distance reg dist reg dist table legend left known svhn novel right svhn known novel results shown table adversarial training designed handle sort challenge surprisingly best performer nevertheless see proposed confidence score improves results even demonstrating added value conclusions proposed new confidence score neural network classifiers method proposed compute score scalable simple implement fit kind neural network method different commonly used methods based measuring point density effective embedding space network thus providing coherent statistical measure distribution network predictions also showed suitable embeddings achieved using either loss somewhat unexpectedly adversarial training demonstrated superiority new score number tasks tasks evaluated using number different datasets network architectures tasks proposed method achieved best results compared traditional confidence scores references arefin ahmed shamsul riveros carlos berretta regina moscato pablo amit mandelbaum daphna weinshall ware tool fast scalable computation using gpus plos one bauer eric kohavi ron empirical comparison voting classification algorithms bagging boosting variants machine learning budka marcin gabrys bogdan densitypreserving sampling robust efficient alternative error estimation ieee transactions neural networks learning systems clevert unterthiner thomas hochreiter sepp fast accurate deep network learning exponential linear units elus 
arxiv preprint coates adam lee honglak andrew analysis networks unsupervised feature learning ann arbor deng dong socher imagenet hierarchical image database dietterich thomas ensemble methods machine learning international workshop multiple classifier systems springer donahue jeffrey anne hendricks lisa guadarrama sergio rohrbach marcus venugopalan subhashini saenko kate darrell trevor recurrent convolutional networks visual recognition description proceedings ieee conference computer vision pattern recognition gal yarin ghahramani zoubin dropout bayesian approximation representing model uncertainty deep learning arxiv preprint garcia vincent debreuve eric barlaud michel fast nearest neighbor search using gpu computer vision pattern recognition workshops cvprw ieee computer society conference ieee goodfellow ian shlens jonathon szegedy christian explaining harnessing adversarial examples arxiv preprint gunadi hendra comparing nearest neighbor algorithms space hadsell raia chopra sumit lecun yann dimensionality reduction learning invariant mapping computer vision pattern recognition ieee computer society conference volume ieee hart peter condensed nearest neighbor rule ieee transactions information theory kaiming zhang xiangyu ren shaoqing sun jian deep residual learning image recognition proceedings ieee conference computer vision pattern recognition hoffer elad ailon nir deep metric learning using triplet network international workshop pattern recognition springer hexiang zhou deng zhiwei liao zicheng mori greg learning structured inference neural networks label relations proceedings ieee conference computer vision pattern recognition huang gao sun liu zhuang sedra daniel weinberger kilian deep networks stochastic depth european conference computer vision springer ville teemu tasoulis sotiris elias tuomainen risto wang liang corander jukka roos teemu fast search arxiv preprint krizhevsky alex hinton geoffrey learning multiple layers features tiny images lakshminarayanan balaji pritzel alexander blundell charles simple scalable predictive uncertainty estimation using deep ensembles arxiv preprint hui wang xuesong ding shifei research development neural network ensembles survey artificial intelligence review mackay david bayesian methods adaptive models phd thesis california institute technology markou markos singh sameer novelty detection neural network based approaches miyato takeru dai andrew goodfellow ian adversarial training methods text classification arxiv preprint neal radford bayesian learning neural networks volume springer science business media netzer yuval wang tao coates adam bissacco alessandro andrew reading digits natural images unsupervised feature learning nips workshop deep learning unsupervised feature learning volume confidence score neural network classifiers palatucci mark pomerleau dean hinton geoffrey mitchell tom learning semantic output codes advances neural information processing systems pimentel marco clifton david clifton lei tarassenko lionel review novelty detection signal processing salakhutdinov ruslan hinton geoffrey learning nonlinear embedding preserving class neighbourhood structure aistats volume schroff florian kalenichenko dmitry philbin james facenet unified embedding face recognition clustering proceedings ieee conference computer vision pattern recognition sharif razavian ali azizpour hossein sullivan josephine carlsson stefan cnn features astounding baseline recognition proceedings ieee conference computer vision pattern recognition workshops 
srivastava nitish hinton geoffrey krizhevsky alex sutskever ilya salakhutdinov ruslan dropout simple way prevent neural networks overfitting journal machine learning research szegedy christian zaremba wojciech sutskever ilya bruna joan erhan dumitru goodfellow ian fergus rob intriguing properties neural networks arxiv preprint tadmor oren rosenwein tal shai wexler yonatan shashua amnon learning metric embedding face recognition using multibatch method advances neural information processing systems tibshirani robert comparison error estimates neural network models neural computation vinokurov nomi weinshall daphna novelty detection multiclass scenarios incomplete set class labels arxiv preprint weston jason ratle mobahi hossein collobert ronan deep learning via embedding neural networks tricks trade springer zaragoza hugo buc florence confidence measures neural network classifiers proceedings seventh int conf information processing management uncertainty knowlegde based systems | 2 |
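As a companion to the entry above (confidence scores for neural-network classifiers), here is a minimal sketch of two of the ensemble-combination rules it compares: plain majority voting and the weighted softmax average, in which each network's softmax vector is scaled by its confidence in the given sample. The array names and shapes are illustrative, not taken from the paper.

```python
import numpy as np

def weighted_softmax_average(softmaxes, confidences):
    """softmaxes   : (n_nets, n_samples, n_classes) per-network softmax outputs
    confidences : (n_nets, n_samples) per-network confidence scores
                  (e.g. a distance score or a negative-entropy score)
    Each softmax vector is scaled by the corresponding confidence, the scaled
    vectors are averaged over networks, and the argmax is returned."""
    weighted = softmaxes * confidences[:, :, None]
    return weighted.mean(axis=0).argmax(axis=1)

def simple_voting(softmaxes):
    """Majority vote over the per-network argmax predictions."""
    votes = softmaxes.argmax(axis=2)                  # (n_nets, n_samples)
    n_classes = softmaxes.shape[2]
    counts = np.stack([(votes == c).sum(axis=0) for c in range(n_classes)])
    return counts.argmax(axis=0)
```

For the correct-vs-incorrect ROC analysis reported in the tables, the same confidence array can be scored against the indicator of a correct prediction, e.g. sklearn.metrics.roc_auc_score((pred == label).astype(int), conf).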
may discriminating power tests resource january abstract since discovery differential linear logic dll inspired numerous domains denotational semantics categorical models dll commune simplest one rel category sets relations proof theory naturally gave birth differential proof nets full complete dll turn tools naturally translated intuitionistic counterpart taking category associated comonad rel becomes mrel model contains notion differentiation proof nets used naturally extend lambda calculus resources calculus contains notions linearity differentiations course mrel model resources proved adequate fully abstract strong conjecture bucciarelli carraro ehrhard manzonetto however paper exhibit moreover give intuition essence look generality use extension resource also introduced bucciarelli fully abstract tests introduction first extension resources boudol introducing special resource sensitive application may involve multisets affine arguments one used one time natural way export resource sensitiveness functional setting however gathering known interesting properties confluence linearity fully explored later ehrhard regnier working functional interpretation differential proof nets discovered calculus similar boudol one named differential adding derivative operation syntactically corresponds linear substitution recovers done translation linear arguments non linear one semantical view even allow generalisation operation recover excellent semantical properties confluence taylor expansion adopt syntax improvements differential boudol calculus call category rel set relations known model linear logic despite high degree degeneration relop rel natural construction indeed appeared degeneration reality natural choice preserves proofs interpretation function proof mrel injective isomorphism principal interest category models differential linear logic known category simplest natural every categorical model linear logic interpretation induced comonad comonad construct category case rel new category mrel corresponds category sets morphisms relations finite multisets model construction natural mrel priori one natural models even non well pointed natural question depth link among reflexive elements mrel precisely among mrel canonical reflexive element knew adequate two terms carrying interpretations mrel behave way contexts know anything counterpart named full abstraction question thoroughly studied however since proved resp fully abstract principal namely usual kfoury linear calculus also extension tests denoted therefore bucciarelli emit strong conjecture full abstraction however purpose counter example found order exhibit counter example take unusual shortcut using full abstraction result indeed prove slightly general theorem failure full abstraction model fully abstract due generalization introduce full description core article available annexes additionally considerably easier intuitive direct usual method way proceeding part larger study full abstraction indeed looking mechanical way tackle full abstraction problems two steps first extend calculus well chosen semantical objects order reach definability compact elements study full abstraction question indirectly via link operational equivalence original calculus artificial extension reduces mix semantic syntactic question purely syntactic one allowing use powerful syntactic constructions tests introduced full abstraction theorem boudol resources later principle improved implementing semantic objects syntax order get full abstraction following idea 
extension context compared basic exception mechanism term raising exception test absorbing non applications exception catching exception annihilating important scope act infinite application notations used specified integer denotes identity background explained article directly following reason need introduce tests notion linearity capital term linear position never suffer duplication erasing regardless reduction strategy linear subterms subterms either first subterm lambda abstraction linear position left side application linear position linear part right side last case real improvement asks arguments separated linear non linear arguments therefore right side applications replaced new kind expression different terms bags bags multisets containing linear non banged arguments exactly one non linear banged argument terms bags syntax modulo macro convenience finite sums denoted different neutral elements different sums demonic sum implemented since want calculus resource sensitive confluent thus choice considere sum possible outcomes sums distribute linear context minn minn application linear argument replace one one occurrence variable thus need two kinds substitutions usual one denoted linear one denoted last act like derivation enables describe words differential proof nets tensor par added freely sense still natural interpretation mrel operations translated calculus exception mechanism one side raises exception test burning applicative context whenever applications linear component otherwise diverges side catch exceptions burning abstraction context whenever abstraction dummy main difference usual exception system divergence catch exception raised introduce new operators new kind expression play role exception tests terms test new operator immediately imply new distribution rules sum linear substitution corresponding operational semantics intuition operator take test boolean value compute returns infinite occurrence abstracted variables test taking term returns successful test term converging context consists infinite empty application observational order full abstraction order ask full abstraction one specify reduction strategy natural choice would head reduction would make normal form applicative instantiation allow convergence term therefore reduction strategy considering headreduction reduction reduce subterms linear position subterms head positions corresponding normal forms terms tests form every must normal forms sum kind definition observationally context cln whenever clm observationally equivalent moreover observationally particular case easily restrict contexts contexts whose output tests applied systematically simplification denote observational order equivalence bucciarelli carraro ehrhard manzonetto able prove strong theorem relating model calculus theorem fully abstract resources tests closed terms resources ans tests order exhibit use following property fact let calculus let model fully abstract fully abstract iff operational equivalences equal domain intersection context means order prove non full abstraction sufficient find two terms separated context separated context makes research proof quite easier terms involved complex context firstly exhibiting term observationally identity observational order turing fix point combinator term seems quite complex modulo reduces exactly elements following sum thus think equivalent due following property lemma proof simple reduction unfolding absence tests term comportment similar sense converges applicative context provided 
applications carry linear components particular converges often identity lemma context clim converges clam converges proof let context converge context lemma since neither free variables assume thus lemma clam clm converges presence real tests comportment appeared different sense diverges particular observationally identity lemma diverges converges proof diverges since non convergence comes hypothesis first term trivial second hence broken conjecture concerning equality observational denotational orders let break whole conjecture theorem fully abstract resources works first diligent reader remark critical use demonic sum powerful calculus even diligent one remark arbitrary choice made concerning sum could differentiate terms reduced terms remove sums original syntax appear reductions terms choice made corresponds one carries understandable claim another equivalent arises case limited sum however little complicated make necessary rework material even everything works exactly way translated related cases particular prove non full abstraction scott angelic demonic sums conjectured calculus extension tests exists fully abstract trivial modification tests using general demonic angelic sums framework term plays exactly role example output end unique object dll exhibit two natural constructions one semantical world syntactical one appeared respect full abstraction one would say natural natural one may found would easy state art known natural construction misunderstanding comes concept naturality seems syntactic idea convergence really correspond equivalent semantical word one lowest fix point second largest one difference appears working demonic sum allow check convergence unbounded applicative context finally presented tests general tool whose importance role gave result interesting important presents tests useful tools verify full abstraction fails remains negative result justify alone real interest works focus presenting positive proofs full abstractions using tests following way already submitted revisited proof full abstraction scott usual references gerard boudol multiplicities inria research report boudol curien lavatelli semantics lambda calculi resources mathematical structures comput sci mscs vol flavien breuvart new proof full abstraction theorem submited antonio bucciarelli alberto carraro thomas ehrhard giulio manzonetto full abstraction resource calculus tests marc bezem editor computer science logic csl international annual conference eacsl leibniz international proceedings informatics lipics schloss fuer informatik dagstuhl germany antonio bucciarelli alberto carraro thomas ehrhard giulio manzonetto full abstraction resource lambda calculus tests throught taylor expansion accepted antonio bucciarelli thomas ehrhard giulio manzonetto enough points enough jacques duparc thomas henzinger editors csl proceedings computer science logic lecture notes computer science springer antonio bucciarelli thomas ehrhard giulio manzonetto relational model parallel sergei anil nerode editors logical foundations computer science international symposium lfcs lecture notes computer science daniel carvalho lorenzo tortora falco relational model injective multiplicative exponential linear logic without weakenings corr ehrhard laurent interpreting finitary differential interaction nets inf comput thomas ehrhard laurent regnier differential lambdacalculus theoretical computer science elsevier kfoury linearization consequences log comput available http giulio manzonetto general class models mathematical 
foundations computer science mfcs lecture notes computer science springer pagani tranquilli parallel reduction resource lambdacalculus aplas lncs michele pagani simona ronchi della rocca linearity nondeterminism solvability fundamenta informaticae available http model categorical model category rel sets relations known model linear logic seely category giving interpretation let comutative diagrams reader since comutations trivial like monoidal tensor functor given arbitrary unit symetric monoidal close take evaluation rel star autonomus dualising object trvial duality give interpretation multiplicatives category cartesian catesian product projections product morphisms terminal object give interpretation additives add comonade functor define deriliction digging give interpretation exponentials seely category model linear logic since isomorphismes trivial defined even seen categorical model differential linear logic defining natural transforamtion fixing contraction weakening define derivative derivatives taylor two morphisms finally exponential acept since ida ida isomorphism detail models dll see every categorical model linear logic exponential comonade induced cokleisly rel rel whose objects set whose morphisms relations identities relations digp composition algebraic model order algebraic model need reflexive object triplet app abs object mrel app aps app abs object priory found taking lower fix point leads trivial empty model resolve complicated fix point exponent represent infinit tensor product lower fix point called way see fixpoint say equal set quazi everywhere empty lists finite substets element recursively defined either list empty elements coresponding app abs arise imediatly functoriality app abs order understandable presenting interpretation terms via type system types living usual presentation interpretation recoverd type system type system following | 6 |
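The annex of the entry above builds its model inside MRel, the Kleisli category of the finite-multiset comonad on Rel. In the standard presentation (as in the Bucciarelli–Ehrhard–Manzonetto papers cited there), and only restating what the annex describes, this reads:

```latex
!A \;=\; \mathcal{M}_{\mathrm{fin}}(A)
   \qquad\text{(finite multisets over the set } A\text{)},\\
\mathbf{MRel}(A,B) \;=\; \mathbf{Rel}(!A,\,B)
   \;=\; \mathcal{P}\big(\mathcal{M}_{\mathrm{fin}}(A)\times B\big),\\
\mathrm{id}_A \;=\; \{([a],a) \;:\; a\in A\},\\
t\circ s \;=\; \{(m_1\uplus\dots\uplus m_k,\;c) \;:\;
   ([b_1,\dots,b_k],c)\in t \text{ and } (m_i,b_i)\in s \text{ for each } i\}.
```

The reflexive object interpreting the calculus is then the non-trivial fixpoint the annex sketches, whose elements can be seen as quasi-everywhere-empty sequences of finite multisets rather than the empty set produced by the least fixpoint.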
dec pliability whitney extension theorem curves carnot groups nicolas juillet mario sigalotti abstract whitney extension theorem classical result analysis giving necessary sufficient condition function defined closed set extendable whole space given class regularity adapted several settings among one carnot groups however target space generally assumed equal focus extendability problem general ordered pairs analyze particular case characterize groups whitney extension property holds terms newly introduced notion call pliability pliability happens related rigidity defined bryant hsu exploit relation order provide examples carnot groups carnot groups whitney extension property hold use geometric control theory results accessibility control affine systems order test pliability carnot group particular recover recent results donne speight zimmermann lusin approximation carnot groups step whitney extension heisenberg groups extend results pliable carnot groups show latter may arbitrarily large step introduction extending functions basic fundamental tool analysis fundamental particular extension theorem established whitney guarantees existence extension function defined closed set vector space function class provided minimal obstruction imposed taylor series satisfied whitney extension theorem plays significative part study ideals differentiable functions see variants still active research topic classical analysis see instance analysis carnot groups homogeneous distance like distance presented folland stein monograph nowadays classical topic carnot groups provide generalization vector spaces close original model radically different carnot groups provide wonderful field investigation many branches mathematics setting elegant rich natural crossroad different fields mathematics instance analysis pdes geometric control theory see instance contemporary account mathematics subject classification key words phrases whitney extension theorem carnot group rigid curve horizontal curve nicolas juillet mario sigalotti therefore natural recast whitney extension theorem context carnot groups far know first generalization whitney extension theorem carnot groups found giorgi result sets finite perimeter adapted first heisenberg group carnot group step generalization used authors stress difference intrinsic regular hypersurfaces classical hypersurfaces heisenberg group recent paper gives final statement whitney extension theorem functions carnot groups natural generalization one imagine holds full strength details see section study whitney extension property carnot groups however closed following suggestion serra cassano one might consider maps carnot groups instead solely functions carnot groups new question presents richer geometrical features echoes classical topics metric geometry think particular classification lipschitz embeddings metric spaces related question extension lipschitz maps metric spaces refer corresponding results usual carnot groups abelian groups heisenberg groups topological dimension view theorem lipschitz maps see theorem directly related whitney extension problem one horizontal maps class defined carnot groups framework paper simple pieces argument show whitney extension theorem generalize every ordered pair carnot groups basic facts contact geometry suggest extension hold maps actually known local algebraic constraints first order make maximal dimension legendrian submanifold contact manifold dimension fact derivative differentiable map range kernel contact form range map dimension map 
horizontal derivatives derivatives take value kernel canonical contact form particular defined nowhere maximal rank moreover consequence theorem lipschitz map derivable almost every point horizontal derivatives maximal rank order contradict extendability lipschitz maps enough define function subset whose topological constraints force possible extension maximal rank point let sketch concrete example provides constraint lipschitz extension problem known isometrically embedded exponential map euclidean distances one also consider two parallel copies mapped parallel images second obtained first vertical translation aiming contradiction suppose exists extending lipschitz map provides lipschitz homotopy using definition lipschitz map topology topological dimension range least measure positive possible dimensional constraints explained see rigorous proof using different set domain function extended proof formulated terms index theory whitney theorem curves carnot groups purely latter property means measure range lipschitz map zero probably construction ideas works lipschitz extension problem adapted whitney extension problem really concern present article list similarities two problems rather exhibit class ordered pairs carnot groups validity whitney extension problem depends geometry groups note different type counterexample whitney extension theorem involving groups neither euclidean spaces heisenberg groups obtained khozhevnikov described example work motivated serra cassano suggestion paris lecture notes institut henri proposes choose general carnot groups target space look curves maps horizontal derivatives see problem different lipschitz extension problem whitney extension problem indeed problems solved every answer extendibility question asked serra cassano depends choice precisely provide geometric characterization extension problem always solved say case pair extension property examples target carnot groups extendibility possible identified zimmerman proved every pair extension property main component characterization carnot groups extension property notion pliable horizontal vector horizontal vector identified vector field pliable every every neighborhood horizontal layer support curves derivative starting direction form neighborhood integral curve starting details see definition proposition notion close equivalent property integral curves rigid sense introduced bryant hsu illustrate example example say carnot group pliable horizontal vectors pliable since rigid integral curve horizontal vector pliable hard show exist carnot groups dimension larger step larger see example hand give criteria ensuring pliability carnot group notably fact step theorem also prove existence pliable groups positive step proposition main theorem following theorem pair extension property pliable paper organized follows section recall basic facts carnot groups present condition light theorem section introduce notion pliability discuss relation rigidity show pliability necessary extension property hold theorem proof result goes assuming horizontal vector nicolas juillet mario sigalotti exists using provide explicit construction map defined closed subset extended section devoted proving pliability also sufficient condition theorem section use result extend theorem proved recently speight heisenberg groups see also alternative proof precisely proved absolutely continuous curve group step coincides set arbitrarily small complement curve show case pliable carnot groups proposition finally section give criteria 
testing pliability carnot group first show zero horizontal vector always pliable proposition applying results control theory providing criteria endpoint mapping open show pliable step equal whitney condition carnot groups nilpotent lie group said carnot group stratified sense lie algebra admits direct sum decomposition called stratification every recall denotes linear space spanned subspace called horizontal layer also denoted say step group product two elements denoted given write adx operator defined adx lie algebra identified family vector fields exponential application maps vector time integral curve vector field starting identity denoted exp also denote etx flow vector field time notice etx exp integral curves vector fields said straight curves lie group diffeomorphic dim usual way identify global system coordinates exp group structure expressed formula way exp becomes mapping onto simply identity introduce dilation uniquely characterized using decomposition holds also define dilation exp whitney theorem curves carnot groups given absolutely continuous curve velocity exists almost every identified element whose associated vector field evaluated equal absolutely continuous curve said horizontal almost every interval denote space curves every assume horizontal layer algebra endowed quadratic norm kgh distance two points defined minimal length horizontal curve connecting inf kgh horizontal note known provides topology usual one moreover homogeneous observe distance depends norm kgh considered however distances fact metrically equivalent even equivalent homogeneous distance similar way norms vector space equivalent notice seen value function optimal control problem min kgh basis finally space horizontal curves class endowed natural metric associated kgh follows distance two curves max sup sup kgh following write denote quantity kgh whitney condition homogeneous homomorphism two carnot groups group morphism moreover homogeneous homomorphism homogeneous lie algebra morphism particular linear map identified first layer nicolas juillet mario sigalotti mapped first layer homogeneous homomorphism form proposition theorem let locally lipschitz map open subset almost every exists homogeneous homomorphism tends uniformly every compact set goes zero note proposition map uniquely determined called pansu derivative denoted dfp denote space functions holds every point dfp continuous usual topology coincides definition given earlier following proposition taylor expansion let carnot groups let compact exists function dfp dfp pansu derivative proof direct consequence mean value inequality magnani contained theorem proposition hints suitable formulation condition carnot groups generalization already appeared literature paper vodop yanov pupyshev definition condition let compact subset consider map associates homogeneous group homomorphism say condition holds continuous exists function let closed set continuous say condition holds compact set holds restriction course according proposition restriction closed satisfies condition whitney theorem curves carnot groups paper focus case condition compact set reads exp sup every one exp every slight abuse terminology say condition holds classical setting whitney condition equivalent existence map respectively restrictions property usually known extension theorem simply whitney extension theorem instance even though original theorem whitney general particular includes higher order extensions considers extension linear operator theorem broad use analysis still 
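The Carnot-group material compressed in the passage above rests on a handful of standard definitions; a cleaned-up rendering, whose notation only approximates the paper's, is the following.

```latex
\mathfrak g \;=\; V_1 \oplus \dots \oplus V_s,
\qquad [V_1, V_j] = V_{j+1}, \qquad V_{s+1}=\{0\}
\quad\text{(stratification of step } s\text{, horizontal layer } V_1\text{)},\\[2pt]
\delta_\lambda(v) \;=\; \lambda^{j} v \ \text{ for } v\in V_j,
\qquad \delta_\lambda(\exp v) \;=\; \exp(\delta_\lambda v)
\quad\text{(dilations on } \mathfrak g \text{ and on the group)},\\[2pt]
d(p,q) \;=\; \inf\Big\{\textstyle\int_0^1 \|\dot\gamma(t)\|\,dt \;:\;
   \gamma \text{ horizontal},\ \gamma(0)=p,\ \gamma(1)=q\Big\},
\qquad d(\delta_\lambda p,\delta_\lambda q) \;=\; \lambda\, d(p,q).
```

Here a curve is horizontal when its derivative lies in the horizontal layer for almost every time, and the norm is the fixed quadratic norm on that layer; different choices of norm give metrically equivalent distances, as the passage notes.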
subject dedicated research see instance references therein definition say pair extension property every satisfying condition closed set exists extends every state theorem franchi serapioni serra cassano proved theorem generalised vodop yanov pupyshev form closer original whitney result including higher order extensions linearity operator theorem franchi serapioni serra cassano carnot group pair extension property proof proposed franchi serapioni serra cassano established carnot groups step two identical general carnot groups inspired proof corresponds special case let mention example literature remarkable fact explained kozhevnikov example ultrarigid carnot groups dimension respectively presented analysed lemma one construct example satisfying condition compact without possible sion one exploits rarity maps maximal rank ultrarigid carnot groups definition ultrarigid definition quasimorphisms carnot similitudes composition dilations use directly definition ultrarigid groups result stated lemma concretely let set nicolas juillet mario sigalotti let map constantly equal constant projection lemma applied point implies possible extension projection map vanishes contain remains prove whitney condition holds fact two points look distance one side side first one multiplicative constant goes zero second one constant proves condition present paper provide examples ordered pairs extension property hold depending geometry address problem whitney extensions orders larger preliminary step considering extensions would provide suitable taylor expansion spirit recalled proposition extension property holds let conclude section assuming ordered pair carnot groups showing deduce pairs describe three possible implications let homogeneous subgroup admits complementary group sense section homogeneous lie groups intersection reduced assume moreover carnot group normal one define canonically projection homogeneous homomorphism moreover lipschitz continuous see proposition rest section say appropriate carnot subgroup easily proved extension property particular according example every dim vector space appropriate carnot subgroup therefore extension property assume appropriate carnot subgroup using lipschitz continuity projection one easily deduces definition condition extension property finally assume extension property carnot group one checks without difficulty true consequence theorem use three implications infer pliability statements namely carnot group pliable extension property carnot group positive dimension appropriate carnot subgroup pliable carnot group iii product two pliable carnot groups rigidity necessary condition extension property let first adapt case horizontal curves carnot groups notion rigid curve introduced bryant hsu show following existence rigid whitney theorem curves carnot groups curves carnot group used identify obstructions validity extension property definition bryant hsu let say rigid exists neighborhood space reparametrization vector said rigid curve exp rigid celebrated existence result rigid curves general manifolds obtained bryant hsu improved examples carnot groups rigid curves illustrated extended shown exists carnot group topological dimension rigid curves nevertheless curves need straight actually construction proposed produces curves necessarily straight following see also focusing rigid straight curves carnot groups formulate theorem order state let canonical projection recall curve said abnormal path horizontal curve every moreover every almost every theorem let assume 
abnormal path exp exp rigid every every moreover denoting quadratic form defined every conversely every every every exp rigid example example carnot structure rigid straight curves standard engel structure case dim dim dim one pick two generators horizontal distribution whose nontrivial bracket relations span respectively let illustrate existence rigid straight curves deduced theorem one could also prove rigidity direct computations type example one immediately checks abnormal path exp rigidity exp follows theorem thanks relation extension previous construction used exhibit every carnot group topological dimension step straight rigid curves suffices nicolas juillet mario sigalotti consider carnot group goursat distribution group dim dim exist two generators whose nontrivial bracket relations span following definition introduces notion pliable horizontal curve contrast rigid one definition say curve pliable every neighborhood set neighborhood vector said pliable curve exp pliable say pliable every vector pliable metric equivalence distances follows pliability horizontal vector depend norm kgh considered notice definition pliability every neighborhood pliable curve exists curve shows pliable curves rigid noticed however converse true general discussed example example show exist horizontal straight curves neither rigid pliable example consider carnot algebra step spaned basis except permutations brackets different ones zero according chapter group structure coordinates isomorphic corresponding carnot group vectors leftinvariant vector fields consider straight curve exp first notice pliable since horizontal curves small enough neighbourhood component derivative along positive implies coordinate nondecreasing endpoint horizontal curve starting belonging small enough neighbourhood negative component let show rigid either consider solution notice component identically equal zero consequence true components components respectively order disprove rigidity sufficient take nontrivial continuous whitney theorem curves carnot groups let list useful manipulations transform horizontal curves horizontal curves let horizontal curve defined every curve horizontal velocity time every curve horizontal velocity time curve defined horizontal starts finishes velocity time one composes commuting transformations one obtains curve derivative time possible define concatenation two curves starting follows concatenated curve satisfies velocity velocity consequence invariance lie algebra consequence rigid rigid every similarly pliable pliable every proposition gives characterization pliable horizontal vectors terms condition apriori easier check one appearing definition proving proposition let give technical lemma write denote ball center radius distance similarly bgh denote ball center radius norm kgh lemma exists satisfy proof assume contradiction every exist equivalently however lim leading contradiction proposition vector pliable every neighborhood curve exp space set neighborhood exp nicolas juillet mario sigalotti proof let denote canonical projection one direction equivalence trivial let take assume neighbourhood exp prove neighborhood exp step intermediate step first prove exists exp contained let real parameter using transformations among horizontal curves described earlier section let define map associating curve concatenation transformation obtained transformation curve defined follows consider curve defined see transformation derivative time derivative time hence continuous derivative limit times map 
moreover derivative times derivative time set derivatives particular notice construction endpoint curve function actually equal see let exp exp curves derivative constantly equal prove close enough differential invertible let use coordinate identification every limits tends respectively converge respectively one check see proposition inverse function derivative finally left right translations global diffeomorphisms collecting informations applying chain rule get tends invertible operator goes hence great enough local diffeomorphism know assumption endpoints curves form neighborhood shown also case replace close curves moreover derivative time thus proved every exists contained step let prove neighborhood let curve consider every bgh every curve defined follows transformation whitney theorem curves carnot groups linear interpolation notice let endpoint time curve starting whose derivative linear interpolation depends curve moreover tends goes uniformly respect bgh lemma implies sufficiently close every bgh holds proved bgh concluding proof proposition main result section following theorem constitutes necessity part characterization extendability stated theorem theorem let carnot group extension property pliable proof suppose contradiction exists pliable going prove extension property let exp since pliable follows proposition exist neighborhood space sequence converging every curve satisfies particular exists neighborhood every every since assume without loss generality every max exp exp homogeneity deduce every every every every define every follows max exp exp introduce sequence defined recursively notice cauchy sequence denote limit construction every every proof nicolas juillet mario sigalotti extension property concluded show condition holds defined let exp exp prove every exists triangular inequality max exp exp notice exp exp exp exp exp exp last equality follows invariance thanks one concludes exp exp pmax hence concludes proof theorem sufficient condition extension property seen previous section differently classical case general carnot group suitable whitney condition sufficient existence extension precisely follows theorem horizontal vectors pliable exist triples condition holds next section prove converse result showing extension property holds horizontal vectors pliable pliable start introducing notion locally uniformly pliable horizontal vector whitney theorem curves carnot groups definition horizontal vector called locally uniformly pliable exists neighborhood every exists every exp bgh remark happens pliability locally uniformly pliable every locally uniformly pliable going see following remark pliability local uniform pliability equivalent properties following proposition however establishes equivalence pliability local uniform pliability horizontal vectors proposition pliable horizontal vectors locally uniformly pliable proof assume pliable every denote positive constant exp bgh going show exists every bgh bgh exp proof local uniform pliability horizontal vector concluded simple compactness arguments taking compact neighborhood using notation definition first fix way exp exp every bgh every every every curve define follows particular kgh kgh nicolas juillet mario sigalotti since depends conclude every bgh bgh exp notice max kgh kgh thanks lemma sufficiently small exp exp exp whenever bgh similarly bgh bgh exp provided kgh proof concluded taking min ready prove converse theorem concluding proof theorem theorem let pliable carnot group extension property proof proposition 
assume vectors locally uniformly pliable note moreover enough prove extension maps defined compact sets generalisation closed sets immediate source carnot group let satisfy condition compact define complementary open set countable disjoint union open intervals unbounded components simply define curve constant speed min max finite components proceed follows consider let smallest number contains every consider extension definition condition exists function tending whitney theorem curves carnot groups smaller kgh since equal exp conclude exp using corresponding estimates deduce exp construction extends interior prove boundary clear order conclude proof left pick let tend must show tend respectively continuous assume without loss generality assume every connected component containing either constant large case length goes zero first case simply notice construction second case assume goes zero continuous converge respectively inequality guarantees exp local uniform pliability implies goes zero follows kgh zero proving tend respectively situation infinitely many handled similarly replacing application lusin approximation absolutely continuous curve recent paper donne speight prove following result theorem proposition let carnot group step consider horizontal curve exist case equal heisenberg group result already proved theorem see also corollary speight also identifies horizontal curve engel group statement proposition satisfied theorem name lusin approximation property stated proposition comes use classical theorem lusin proof let sketch proof replaced vector space derivative absolutely continuous curve integrable function lusin theorem states coincides continuous function set measure arbitrarily close thanks inner continuity lebesgue measure one assume compact moreover nicolas juillet mario sigalotti chosen whitney condition satisfied consequence mean value inequality depends usual arguments measure theory inequality made uniform respect one slightly reduces measure classical whitney extension theorem provides defined proof also follows scheme one sketched show scheme adapted pliable carnot group fact carnot groups step pliable pliable carnot groups step proved next section theorem proposition paper actually provides nontrivial generalization proposition novelty approach respect replace classical rademacher differentiablility theorem lipschitz absolutely continuous curves adapted theorem proposition lusin approximation horizontal curve let pliable carnot group horizontal curve exist curve class curves coincide proof going prove exists compact set three following conditions satisfied exists horizontal vector every uniformly continuous every exists every holds exp conditions condition holds since pliable according theorem extension property holds yielding statement proposition case lipschitz continuous let lipschitz curve rademacher theorem proposition states exists full measure curve admits derivative holds exp goes zero let positive lusin theorem one restrict compact set uniformly continuous moreover classical arguments measure theory functions exp bounded function goes zero uniformly compact set words every exists holds exp whitney theorem curves carnot groups three conditions listed hold true case general horizontal curve let absolutely continuous admits pathlength parametrisation exists lipschitz continuous curve function absolutely continuous moreover norm almost every time absolutely continuous every exists measurable inequality implies let positive let number corresponding previous 
sentence applying scheme proof sketched proposition exists compact set differentiable continuous derivative bound mean value inequality uniform lipschitz curve every case provides compact set listed properties place let compact note holds exp zero uniformly respect also know exist continuous simple exercise compose two taylor expansions obtain wanted conditions note derivative continuous remark set said rectifiable exists countable family lipschitz curves usual lusin approximation curves permits one replace lipschitz classical definition rectifiability replaced pliable carnot group two definitions still make sense according proposition still equivalent rectifiability metric spaces carnot groups active research topic geometric measure theory see references conditions ensuring pliability goal section identify conditions ensuring pliable let first focus pliability zero vector proposition every carnot group vector pliable proof according proposition prove every set nicolas juillet mario sigalotti neighborhood recall exist map rank equal dim satisfies see notice every function also rank equal dim satisfies hence replacing small enough assume kvj kgh let neighborhood every notice neighborhood complete proof proposition constructing every curve every let exhibit curve kxkgh curve constructed imposing extending convex interpolation also possible reverse curve transformation connect segment curve respecting moreover kxkgh finally concatenating transformation curves type possible every connect curve max kxkgh kgh construct follows fix impose define concatenation following continuous curves first take constant equal time constant equal time finally constant equal time construction satisfies remark let show consequence previous proposition pliability local uniform pliability equivalent properties albeit know proposition pliability horizontal vectors equivalent local uniform pliability horizontal vectors recall local uniform pliability horizontal vector implies pliability horizontal vectors neighborhood definition therefore locally uniformly pliable carnot group every horizontal vector pliable remark hence whitney theorem curves carnot groups locally uniformly pliable pliable remark concluded recalling carnot groups exist see examples let carnot group let orthonormal basis let consider control system given control vary let rewrite denotes canonical basis system rewritten every let endpoint map time system initial condition notice solution initial condition corresponding control state following criterium pliability proposition map open horizontal vector pliable consequence restriction open topology considered pliable deduce following property straight pmcurve pliable admits abnormal lift indeed horizontal vector pliable differential must singular hence see instance section proposition exist exp nicolas juillet mario sigalotti follows equation implies every every moreover must different zero comparing follows abnormal path control literature proposes several criteria testing openness endpoint map type test presented taken generalizes previous criteria obtained theorem bianchini stefani corollary let manifold vector fields assume family vector fields lie bracket generating denote iterated brackets elements recall length element sum number times elements appears expression assume every element whose expression vector fields appears even number times equal every linear combination elements smaller length evaluated fix neighborhood let set controls solution defined time denote endpoint solution neighborhood 
following two results show apply theorem guarantee carnot group pliable hence extension property theorem let carnot group step pliable extension property proof order prove every horizontal vector going apply theorem endpoint map open zero notice moreover every lie bracket elements zero since step particular lie brackets elements vector fields appears even number times zero according theorem left prove lie bracket generating clearly true since span whitney theorem curves carnot groups equal every conclude paper showing construct pliable carnot groups arbitrarily large step proposition every exists pliable carnot group step proof fix consider free nilpotent stratified lie algebra step generated elements every denote ideal generated ideal also ideal factor algebra nilpotent inherits stratification denote carnot group generated let elements obtained projecting construction every bracket least one appears zero moreover step since different zero let apply theorem order prove every thependpoint map open zero following computations proof theorem adx particular family lie bracket generating moreover every lie bracket elements least one elements appears zero consider lie bracket elements let number times elements appears let prove induction linear combination brackets elements appears times consider case bracket type linear combination brackets elements appear easily proved induction thanks jacobi identity induction step also follows directly jacobi identity therefore conclude every lie bracket elements least one elements appears zero implies particular hypotheses theorem satisfied concluding proof pliable acknowledgment warmly thank chapoton massuyeau suggestions leading proposition also grateful artem kozhevnikov dario prandi luca rizzi andrei agrachev many stimulating discussions work initiated ihp trimester geometry analysis dynamics manifolds wish thank institut henri fondation sciences paris welcoming working conditions nicolas juillet mario sigalotti second author supported european research council erc stg gecomethods contract number grant anr fmjh program gaspard monge optimization operation research references agrachev sachkov control theory geometric viewpoint volume encyclopaedia mathematical sciences berlin control theory optimization agrachev sarychev abnormal geodesics morse index rigidity ann inst anal non balogh rectifiability lipschitz extensions heisenberg group math balogh lang pansu lipschitz extensions maps heisenberg groups ann inst fourier grenoble barilari boscain sigalotti editors dynamics geometry analysis manifolds volumes ems series lectures mathematics european mathematical society ems bianchini stefani graded approximations controllability along trajectory siam control bonfiglioli lanconelli uguzzoni stratified lie groups potential theory sublaplacians springer monographs mathematics springer berlin brudnyi shvartsman generalizations whitney extension theorem int math res bryant hsu rigidity integral curves rank distributions invent evans gariepy measure theory fine properties functions textbooks mathematics crc press boca raton revised edition fefferman sharp form whitney extension theorem ann math fefferman israel luli sobolev extension linear operators amer math folland stein hardy spaces homogeneous groups volume mathematical notes princeton university press princeton franchi serapioni serra cassano rectifiability perimeter heisenberg group math franchi serapioni serra cassano structure finite perimeter sets step carnot groups geom karidi note carnot geodesics 
nilpotent lie groups dynam control systems hermes control systems generate decomposable lie algebras differential equations special issue dedicated lasalle huang yang extremals classes carnot groups sci china kirchheim serra cassano rectifiability parameterization intrinsic regular surfaces heisenberg group ann norm super pisa sci kozhevnikov metric properties level sets differentiable maps carnot groups doctoral thesis paris sud paris may whitney theorem curves carnot groups donne ottazzi warhurst ultrarigid tangents nilpotent groups ann inst fourier grenoble donne speight lusin approximation horizontal curves step carnot groups calc var partial differential equations liu sussman shortest paths metrics distributions mem amer math lusin sur les des fonctions mesurables acad paris magnani towards differential calculus stratified groups aust math malgrange ideals differentiable functions tata institute fundamental research studies mathematics tata institute fundamental research bombay oxford university press london montgomery survey singular curves geometry dynam control systems rigot wenger lipschitz theorems jet space carnot groups int math res imrn serra cassano topics geometric measure theory carnot groups dynamics geometry analysis manifolds volume ems series lectures mathematics european mathematical society ems speight lusin approximation horizontal curves carnot groups appear revista matematica iberoamericana sussmann properties vector field systems altered small perturbations differential equations sussmann general theorem local controllability siam control optimal concrete mathematics vuibert paris applications theory applications vodop yanov pupyshev theorems extension functions carnot groups sibirsk mat vodop yanov pupyshev theorems extension functions carnot group dokl akad nauk wenger young lipschitz extensions jet space carnot groups math res whitney analytic extensions differentiable functions defined closed sets trans amer math whitney differentiable functions defined closed sets trans amer math zimmerman whitney extension theorem horizontal curves geom appear institut recherche umr strasbourg cnrs rue descartes strasbourg france address inria team geco cmap polytechnique cnrs palaiseau france address | 4 |
oct generalized traveling salesman problem solved ant algorithms pintea pop camelia chira north university baia mare university romania cmpintea pop petrica cchira abstract well known problem called generalized traveling salesman problem gtsp considered gtsp nodes complete undirected graph partitioned clusters objective find minimum cost tour passing exactly one node cluster exact exponential time algorithm effective algorithm problem presented proposed modified ant colony system acs algorithm called reinforcing ant colony system racs introduces new correction rules acs algorithm computational results reported many standard test problems proposed algorithm competitive already proposed heuristics gtsp solution quality computational time introduction many combinatorial optimization problems theory reduced hopes problems solved within polynomial bounded computation times nevertheless solutions sometimes easy find consequently much interest approximation heuristic algorithms find near optimal solutions within reasonable running time heuristic algorithms typically among best strategies terms efficiency solution quality problems realistic size complexity contrast individual heuristic algorithms designed solve specific problem strategic problem solving frameworks adapted solve wide variety problems algorithms widely recognized one practical approaches combinatorial optimization problems representative include genetic algorithms simulated annealing tabu search ant colony useful references regarding methods found generalized traveling salesman problem gtsp introduced gtsp several applications location telecommunication problems information problems applications found several approaches considered solving gtsp algorithm symmetric gtsp described analyzed given approach asymmetric gtsp described genetic algorithm gtsp proposed efficient composite heuristic symmetric gtsp etc aim paper provide exact algorithm gtsp well effective algorithm problem proposed modified version ant colony system acs introduced ant system heuristic algorithm inspired observation real ant colonies acs used solve hard combinatorial optimization problems including traveling salesman problem tsp definition complexity gtsp let undirected graph whose edges associated nonnegative costs assume complete graph edge two nodes add infinite cost let partition subsets called clusters denote cost edge cij generalized traveling salesman problem gtsp asks finding tour spanning subset nodes contains exactly one node cluster problem involves two related decisions choosing node subset finding minimum cost hamiltonian cycle subgraph induced cycle called hamiltonian tour gtsp called symmetric equality holds every cost function associated edges exact algorithm gtsp section present algorithm finds exact solution gtsp given sequence vkp clusters visited want find best feasible hamiltonian tour cost minimization visiting clusters according given sequence done polynomial time solving shortest path problems described construct layered network denoted layers corresponding clusters vkp addition duplicate cluster layered network contains nodes plus extra nodes arc vkl cost cij arc vkl cost cih moreover arc vkp cost cij given consider paths visits exactly two nodes cluster vkp hence gives feasible hamiltonian tour conversely every hamiltonian tour visiting clusters according sequence vkp corresponds path layered network certain node therefore best cost minimization hamiltonian tour visiting clusters given sequence found determining shortest paths property 
visits exactly one node cluster overall time complexity log nlogn worst case reduce time choosing cluster minimum cardinality noted procedure leads nlogn time exact algorithm gtsp obtained trying possible cluster sequences therefore established following result procedure provides exact solution gstp nlogn time number nodes number edges number clusters input graph clearly algorithm presented exponential time algorithm unless number clusters fixed ant colony system ant system proposed approach used various combinatorial optimization problems algorithms inspired observation real ant colonies ant find shortest paths food sources nest walking food sources nest vice versa ants deposit ground substance called pheromone forming pheromone trail ants smell pheromone choosing way tend choose paths marked stronger pheromone concentrations shown pheromone trail following behavior employed colony ants lead emergence shortest paths obstacle breaks path ants try get around obstacle randomly choosing either way two paths encircling obstacle different length ants pass shorter route continuous pendulum motion nest points particular time interval ant keeps marking way pheromone shorter route attracts pheromone concentrations consequently ants choose route feedback finally leads stage entire ant colony uses shortest path many variations ant colony optimization applied various classical problems ant system make use simple agents called ants iterative construct candidate solution combinatorial optimization problem ants solution construction guided pheromone trails problem dependent heuristic information individual ant constructs candidate solutions starting empty solution iterative adding solution components complete candidate solution generated point ant decide solution component add current partial solution called choice point solution construction completed ants give feedback solutions constructed depositing pheromone solution components used solution solution components part better solutions used many ants receive higher amount pheromone hence likely used ants future iterations algorithm avoid search getting stuck typically pheromone trails get reinforced pheromone trails decreased factor ant colony system acs developed improve ant system making efficient robust ant colony system works follows ants initially positioned nodes chosen according initialization rule example randomly ant builds tour repeatedly applying stochastic greedy rule state transition rule constructing tour ant also modifies amount pheromone visited edges applying local updating rule ants terminated tour amount pheromone edges modified applying global updating rule case ant system ants guided building tours heuristic information pheromone information edge high amount pheromone desirable choice pheromone updating rules designed tend give pheromone edges visited ants ants solutions guaranteed optimal respect local changes hence may improved using local search methods based observation best performance obtained using hybrid algorithms combining probabilistic solution construction colony ants local search algorithms opt etc hybrid algorithms ants seen guiding local search constructing promising initial solutions ants preferably use solution components earlier search contained good locally optimal solutions reinforcing ant colony system gtsp ant colony system gtsp introduced order enforces construction valid solution used acs new algorithm called reinforcing ant colony system racs elaborated new pheromone rule pheromone evaporation technique let 
denote node cluster racs algorithm gtsp works follows initially ants placed nodes graph choosing randomly clusters also random node chosen cluster iteration every ant moves new node unvisited cluster parameters controlling algorithm updated edge labeled trail intensity let represent trail intensity edge time ant decides node next move probability based distance node cost edge amount trail intensity connecting edge inverse distance node next node known visibility time unit evaporation takes place stop intensity trails increasing unbounded rate evaporation denoted value order stop ants visiting cluster tour tabu list maintained prevents ants visiting clusters previously visited ant tabu list cleared completed tour favor selection edge high pheromone value high visibility value probability function considered unvisited neighbors node ant node unvisited cluster probability function defined follows parameter used tuning relative importance edge cost selecting next node probability choosing next node current node next node chosen follows random variable uniformly distributed parameter similar temperature simulated annealing transition trail intensity updated using correction rule cost best tour ant colony system ant generate best tour allowed globally update pheromone global update rule applied edges belonging best tour correction rule inverse cost best tour order avoid stagnation used pheromone evaporation technique introduced pheromone trail upper bound pheromone trail pheromone evaporation used global pheromone update rule racs algorithm computes given time timemax solution optimal solution possible stated follows description representation computational results graphic representation reinforcing ant colony system solving gtsp show fig beginning ants nest start search food specific area assuming cluster specific food ants capable recognize choose time different cluster pheromone trails guide ants shorter path solution gtsp fig evaluate performance proposed algorithm racs compared basic acs algorithm gtsp furthermore heuristics literature nearest neighbor composite heuristic random algorithm numerical experiments compare racs heuristics used problems tsp library tsplib provides optimal objective values problems several problems euclidean distances considered exact algorithm proposed section clearly outperformed heuristics including racs running time exponential heuristics including racs polynomial time algorithms provide good solution reasonable sizes problem divide set nodes subsets used procedure proposed procedure sets number clusters identifies farthest nodes called centers assigns remaining node nearest center obviously real world problems may different cluster structures solution procedure presented paper able handle cluster structure initial value pheromone trails lnn lnn result nearest neighbor algorithm algorithm rule always next nearest location corresponding tour traverses nodes constructed order pheromone evaporation phase let denote upper bound lnn decimal values treated parameters changed necessary parameters algorithm critical ant systems figure reinforcing ant colony system racs figure graphic representation generalized traveling salesman problem gtsp solved heuristic called reinforcing ant colony system racs illustrated first picture shows ant starting nest find food going cluster returning nest ways initialized pheromone quantity several iterations performed ant nest solution visible second picture shows solution generalized traveling salesman problem gtsp represented largest 
pheromone trail thick lines pheromone evaporating trails gray lines currently mathematical analysis developed give optimal parameter situation acs racs algorithm values parameters chosen follows table figure compare computational results solving gtsp using acs racs algorithm computational results obtained using random algorithm mentioned columns table figure follows name test problem first digits give number clusters last ones give number nodes optimal objective value problem acs racs objective value returned acs racs genetic algorithm best solutions table bold format solutions acs racs average five successively run algorithm problem termination criteria acs racs given timemax maximal computing time set user case ten minutes table shows reinforcing ant colony system performed well finding optimal solution many cases results racs better results acs racs algorithm generalized traveling salesman problem improved appropriate values parameters used also efficient combination algorithms potentially improve results figure reinforcing ant colony system racs versus acs conclusion basic idea acs simulating behavior set agents cooperate solve optimization problem means simple communications algorithm introduced solve generalized traveling salesman problem called reinforcing ant colony system algorithm new correction rules computational results proposed racs algorithm good competitive solution quality computational time existing heuristics literature racs results improved considering better values parameters combining racs algorithm optimization algorithms disadvantages also identified refer multiple parameters used algorithm high hardware resources requirements references colorni dorigo maniezzo distributed optimization ant colonies proc conf artif life paris france elsevier publishing dorigo optimization learning natural algorithms italian thesis dipart elettronica politecnico milano italy glover kochenberger handbook metaheuristics kluwer fischetti gonzales toth algorithm symmetric generalized travelling salesman problem oper res fischetti gonzales toth generalized traveling salesman orienteering problem kluwer laporte nobert generalized traveling salesman problem sets nodes integer programming approach infor noon bean lagrangian based approach asymmetric generalized traveling salesman problem oper res pintea dumitrescu improving ant systems using local updating proceedings ieee computer society international symposium symbolic numeric algorithms scientific computing synasc bixby reinelt library travelling salesman related problem instances http renaud boctor efficient composite heuristic symmetric generalized traveling salesman problem euro snyder daskin genetic algorithm generalized traveling salesman problem informs san antonio hoos ant system local search traveling salesman problem proc int conf evol ieee press piscataway | 9 |
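Because the transition-probability and pheromone-update formulas above were lost in extraction, here is a minimal runnable sketch of the two RACS ingredients the text describes: the ACS-style pseudo-random proportional choice of the next node, restricted to nodes of unvisited clusters, and the bounded global pheromone update along the best tour. Parameter names (q0, beta, rho, tau0, tau_max) and the data structures are illustrative assumptions, not the authors' notation.

```python
import random

def racs_next_node(current, visited_clusters, clusters, cost, tau,
                   beta=2.0, q0=0.9):
    """Pick the next node among unvisited clusters (ACS transition rule):
    exploit the best pheromone * visibility**beta edge with probability q0,
    otherwise sample a node proportionally to that score."""
    candidates = [v for c, nodes in clusters.items()
                  if c not in visited_clusters for v in nodes]

    def score(v):
        return tau[(current, v)] * (1.0 / cost[(current, v)]) ** beta

    if random.random() < q0:                      # exploitation
        return max(candidates, key=score)
    total = sum(score(v) for v in candidates)     # biased exploration
    r, acc = random.random() * total, 0.0
    for v in candidates:
        acc += score(v)
        if acc >= r:
            return v
    return candidates[-1]

def racs_global_update(tau, best_tour_edges, best_cost,
                       rho=0.1, tau0=1e-3, tau_max=1.0):
    """Reinforce only the edges of the best tour with 1/best_cost and keep
    every trail inside [tau0, tau_max], i.e. the bounded correction rule
    with pheromone evaporation described above."""
    for e in best_tour_edges:
        tau[e] = (1.0 - rho) * tau[e] + rho * (1.0 / best_cost)
        tau[e] = min(max(tau[e], tau0), tau_max)
```

A complete RACS run would wrap these two functions in the usual ACS loop: place the ants, let each build a tour cluster by cluster while maintaining its tabu list of visited clusters, then apply the global update, repeating until the time budget timemax set by the user is exhausted.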
notes pure dataflow matrix machines oct programming matrix transformations michael bukatin steve matthews andrey radul north america llc burlington massachusetts usa bukatin department computer science university warwick coventry project fluid cambridge massachusetts usa abstract streams associated neuron outputs using matrix controlling dmm computation linear potentially quite global neuron output net contribute neuron input net dmms described literature heavily typed one normally defines finite collection allowed kinds linear streams finite collection allowed types neurons two collections called dmm signature one considers particular fixed signature one assumes address space accommodating countable number neurons type dmm determined matrix connectivity weights one normally assumes finite number weights given moment time particular dmms equipped powerful reflection facilities include signature kind streams matrices shaped fashion capable describing dmm signature designate particular neuron self working accumulator matrices shape agree recent output neuron used movement step matrix controlling calculations neuron inputs neuron outputs dataflow matrix machines generalized recurrent neural nets mechanism provided via stream matrices defining connectivity weights network question natural question play role untyped programming architecture proposed answer discipline programming one kind streams namely streams appropriately shaped matrices yields pure dataflow matrix machines networks transformers streams matrices capable defining pure dataflow matrix machine categories subject descriptors guages general terms keywords software programming dataflow continuous deformation software introduction purpose notes contribute theoretical understanding dataflow matrix machines dataflow matrix machines dmms arise context synchronous dataflow programming linear streams streams equipped operation taking linear combinations several streams new programming architecture interesting properties one properties large classes programs parametrized matrices numbers aspect dmms similar recurrent neural nets fact considered powerful generalization recurrent neural nets like recurrent neural nets dmms essentially twostroke engines movement neuron transformations compute next elements streams associated neuron outputs streams associated neuron inputs computation local neuron question generally nonlinear movement next elements streams associated neuron inputs computed pure dataflow matrix machines version one kind streams dmms seem powerful programming platform particular convenient manually write software dmms time options automatically synthesize dmms synthesizing matrices question available however dmms bit unwieldy theoretical investigation theoretical viewpoint inconvenient many kinds streams also inconvenient one needs fix signature parametrization matrices valid fixed signature question naturally arises would equivalent untyped dataflow matrix machines one principles untyped one data type enough namely type programs data expressed programs equivalent principle dmms would one kind streams streams matrices matrix shaped able define dmm would network transformers streams matrices see section details instead string rewriting number streams matrices unfolding time approach data expressed matrices numbers approach see section like data must expressed untyped one signature order constructions continuous particular making spaces programs continuous denotationally continuous domains representing meaning programs common 
operationally tend fall back onto discrete schemas dataflow matrix machines seeking change provide programming facilities using continuous programs continuous deformations programs level operational semantics implementation done discrete time discrete index spaces matrices computational elements potentially continuous time continuous index spaces computational elements oldest electronic continuous platform electronic analog computers analog program however discrete kind machine number sockets every pair sockets option connect via patch cord connect among dataflow architectures oriented towards handling streams continuous data one might mention labview pure data cases programs quite discrete computational platform discussed details context recurrent neural networks turing universality recurrent neural networks known least years however together many useful elegant turinguniversal computational systems recurrent neural networks constitute convenient programming platform belong class esoteric programming languages see detailed discussion interestingly enough whether recurrent neural networks understood programs discrete continuous depends one approaches representation network topology one treats network connectivity graph thinks graph discrete data structure recurrent neural networks discrete one states instead network connectivity always complete graph topology defined weights zeros recurrent neural networks continuous frequent case borderline one considers recurrent neural net defined matrix weights therefore continuous however auxiliary discrete structures matrix weights often sparse matrix dictionary nonzero weights comes play also language used describing network implementation comes play auxiliary discrete structure dataflow matrix machines belong borderline case particular use sparse matrices inevitable matrices question matrices finite number nonzero elements choosing fixed selection types neurons seems difficult moment time would like retain ability add arbitrary types neurons dmms instead selecting fixed canonical signature assume underlying language allowing describe countable collection neuron types fashion neuron types interest expressed language assume neuron types described neuron type expressions underlying language signature assume address space structured way accommodate countable number neurons type neurons see section since countable collection expressions describing neuron types overall collection neurons still countable matrix describing rules recompute neuron inputs neuron outputs still countable parametrization countable matrices numbers across dmms across dmms particular fixed signature accumulators revised notion accumulator plays key role number dmm constructions including reflection facility self standard version neuron performing identity transform vector input vector output kind one sets weight recurrent connection neuron accumulates contributions neurons connected nonzero weights step accumulator neuron effect performs operation however somewhat abuse system kinds streams consider belonging space see evidence suboptimal convention later paper first equip accumulator neuron another input collected body neuron computes sum instead performing identity transform see section details situations one multiple kinds linear streams one would often want assign different kinds although situations one would still use kind effectively considering structure paper section discuss continuous models computation aspects section juxtapose string rewriting approaches programming 
section discuss language indexes network matrix accommodate countable number neuron types within one signature section discuss representation constants vectors matrices section provides two examples natural split accumulator input one example comes neuron self controlling network matrix another example section involved requires revisit domain theory context linear models computation bitopological setting specifically domains allowing monotonic inference setting approximations spaces tend become embedded vector spaces connection linear models computation comes play programming string rewriting approach several approaches programming popular approach starts standard higherorder functional programming focuses integrating streambased programming standard paradigm theoretical underpinning approach string rewriting dataflow community produced purely approaches programming one approaches mentioned approach based multidimensional streams continuous models computation history continuous models computation actually quite long progress limited making pure dataflow matrix machines version field name iti concatenation name corresponding neuron input field name otj concatenation name corresponding neuron output every pair indices matrix element matrices consideration summarize approach class pure dataflow matrix machines implicitly parametrized sufficiently universal language describing types neurons taken potential interest together associated stream transformations details dmm functioning see sections approach adopt paper based notion streams programs early work mentioned connection approach argument favor approach programming linear streams presented section among recent papers exploring various aspects approach based notion streams programs one goals present paper show approach play role synchronous dataflow programming linear streams comparable role played untyped functional programming dmm address space language indices constants vectors matrices one matrix often convenient index rows columns finite strings fixed finite alphabet numbers principal difference choice discourages focusing arbitrary chosen order encourages semantically meaningful names indices explain construction promised section works implement program outlined section one needs express important linear streams streams numbers scalars streams matrix rows streams matrix columns frequently used streams vectors streams matrices indicated one key uses scalars also matrix rows columns use multiplicative masks ability use scalars multiplicative masks needs preserved scalars represented matrices example neuron takes input stream scalars input stream matrices produces output stream matrices still need able reproduce functionality scalars represented matrices shape matrix straightforward way neuron takes two input streams matrices performs multiplication hadamard product sometimes also called schur product chose hadamard product main bilinear operation matrices scalar must represented matrix elements equal neuron types define notion type neurons following outline presented section multiple kinds linear streams one kind linear streams present paper definition simplified neuron type consists integer input arity positive integer output arity transformation describing map input streams matrices output streams matrices namely associate neuron type question transformation taking inputs streams length producing outputs streams length integer time require obvious prefix condition applied streams length first elements output streams length elements 
produces applied prefixes length input streams typical situation elements output streams produced solely basis elements number input streams definition also allows neurons accumulate unlimited history necessary matrices admitting finite descriptions one particular feature approach longer limit matrices containing finite number elements also need least infinite matrices admitting finite descriptions means one needs convention done case incorrect operations taking scalar product two infinite vectors ones adding matrix consisting ones self seems likely technically easiest convention cases would output zeros reset network matrix zeros hand interest consider study limits sequences finitely describable matrices network might computing limit language section going use several alphabets assume following special symbols belong alphabets assume language alphabet finite strings ltt describe neuron types interest call string name neuron type defines worried uniqueness names type assume input arity type question output arity type question every integer associate field name iti every integer associate field name otj implies implies also assume alphabet one letter finite string valid simple name representing matrix rows columns matrices streams matrix rows streams matrix columns also play important roles represent element row corresponding matrix column elements equal represent element column corresponding matrix row elements equal hence rows represented matrices equal values along column columns represented matrices equal values along row given matrix row denote representation matrix given matrix column denote representation matrix given scalar denote representation matrix respecting matlab convention denote hadamard product denote hadamard product two matrices language indices following convention describes address space countable number neurons countable number neuron types interest indexes expressed strings alphabet name neuron type ltt simple name concatenation name neuron pure dataflow matrix machines version omitting infix matrix multiplication note matrix rows correspond neuron inputs matrix columns correspond neuron outputs one always think matrices rectangular square matrices transposition always needed performing standard matrix multiplication matrices standard matrix update operation generalized several natural examples proposed given row two columns constraint finite number nonzero elements matrix updated formula aij aij akj terms matrix representations gets added work matrix section matrix rows columns used subgraph selection consider subset neurons take row values positions corresponding neuron outputs subset question zeros elsewhere take column values positions corresponding neuron inputs subset question zeros elsewhere denote matrix maximum overall connectivity subgraph question internal connecpressed matrix tivity subgraph partial inconsistency landscape warmus numbers another example natural separate inputs accumulator comes considering scheme computation warmus numbers explain first warmus numbers considering particular scheme computation question natural context partial inconsistency vector semantics presence partial inconsistency approximation spaces tends become embedded vector spaces one example phenomenon one allows negative values probabilities probabilistic powerdomain embedded space signed measures natural setting denotational semantics probabilistic programs warmus numbers another example involves algebraic extension interval numbers respect addition interval numbers form 
group respect addition however one extend pseudosegments contradictory property example pseudosegment expressing interval number contradictory constraint time extended space interval numbers group vector space reals first discovery construction known made warmus since rediscovered many times rather extensive bibliography related rediscoveries see vectors matrices straightforward way represent vectors vectors finite number nonzero elements setup represent matrix rows well means reserving finite countable number appropriately typed neurons represent coordinates example describe vectors representing characters encoding standard neural nets one would need reserve neurons represent letters alphabet question partial inconsistency landscape number common motives appear multiple times various studies partial inconsistency particular bilattices bitopology bicontinuous domains facilities nonmonotonic inference involutions etc together motives serve focal elements field study named partial inconsistency landscape particular following situation typical context bitopological groups two topologies group dual group inverse induces bijection respective systems open sets antimonotonic group inverse involution bicontinuous map bitopological dual approximation domains tend become embedded vector spaces context setting bicontinuous domains equipped two scott topologies tend group dual seems natural semantic studies computations linear streams accumulators revised continue line thought started section give couple examples illustrating natural separate inputs accumulator main example neuron self producing matrix controlling network output taking additive updates matrix input matrix finite number nonzero elements represented sparse matrix via dictionary nonzero elements typical situation additive update time step small compared matrix specifically update typically small sense number affected matrix elements small compared overall number nonzero matrix elements make much sense actually copy output self input self perform additive update done definition accumulator one input taken literally done instead additive updates added together input self movement self add sum updates matrix accumulates instead hiding logic implementation details makes sense split inputs self output self connected weight nothing else connected weight copying output self accumulating additive updates self pure dataflow matrix machines version computing warmus numbers section provides detailed overview partial inconsistency landscape including bitopological bilattice properties warmus numbers turns warmus numbers play fundamental role mathematics partial inconsistency particular section paper proposes schema computation via monotonic evolution punctuated involutive steps computations warmus extension interval numbers via monotonic evolution punctuated involutive steps good example accumulators asymmetry accumulator neuron accumulate monotonically evolving warmus number accepting additive updates number arbitrary warmus number must pseudosegment case allowed given constraint kind natural want accumulate contributions separate input movement let accumulator enforce constraint movement ignoring requests updates yet another input might added trigger involutive steps involutive step context transforms alternatively requests updates might trigger involutions normally involution would triggered accumulated number already pseudosegment case involution step karpathy unreasonable effectiveness recurrent neural networks http keimel bicontinuous domains 
old problems domain theory electronic notes theoretical computer science kozen semantics probabilistic programs journal computer system sciences krishnaswami reactive programming without spacetime leaks acm sigplan notices lawson stably compact spaces mathematical structures computer science matthews adding second order functions kahn data flow technical report research report university warwick http pollack connectionist models natural language processing phd thesis university illinois chapter available http open problem bicontinuous reflexive domains despite impressive progress studies bicontinuity bitopology context partial inconsistency landscape issues related reflexive domains solutions recursive domain equation context bicontinuous domains vector semantics seem well understood given dataflow matrix machines equipped facilities work directly level vector spaces one would hope gap operational denotational descriptions would narrow case traditional situations untyped popova arithmetic proper improper intervals repository literature interval algebraic extensions http siegelmann sontag computational power neural nets journal computer system sciences wadge lucid jagannathan editor proceedings international symposium lucid intensional programming pages http conclusion dataflow matrix machines work arbitrary linear streams paper focus case pure dataflow matrix machines work single kind linear streams namely streams matrices defining connectivity patterns weights pure dmms allows pinpoint key difference pure dmms recurrent neural networks instead working streams numbers pure dataflow matrix machines work streams programs programs represented network connectivity matrices warmus calculus approximations bull acad pol iii http zhou zhang zhou minimal gated unit recurrent neural networks http monotonic evolution additions warmus numbers conventional interval numbers consider sequence elements monotonically increasing obtained additive corrections previous elements sequence andima kopperman nickolas asymmetric ellis conventional interval numbers situation theorem topology applications possible trivial case addition bukatin matthews linear models computation reduce degree imprecision conprogram learning gottlob editors gcai easychair ventional interval numbers possible perform nontrivial proceedings computing vol pages http monotonic evolution conventional interval numbers adding interval numbers previous elements sequence bukatin matthews radul dataflow matrix machines question programmable dynamically expandable generalized warmus numbers monotonic evolution additive correcrecurrent neural networks http tions possible provided every additive correction summand bukatin matthews radul programming patterns zero dataflow matrix machines generalized recurrent neural nets http references farnell designing sound mit press rectifiers fluid project fluid github repository https rectified linear unit relu neuron activation function max recent years relu became popular neuron context deep networks whether equally good recurrent networks remains seen activation function max integral heaviside step function lack smoothness seem interfere gradient methods used neural net training interestingly enough standard reals associated upper lower topologies reals closely related relu goodman mansinghka roy bonawitz tenenbaum church language generative models proc uncertainty artificial intelligence http johnston hanna millar advances dataflow programming languages acm computing surveys jung moshier bitopological 
nature stone duality technical report school computer science university birmingham http pure dataflow matrix machines version linear bilinear neurons lstm gated recurrent unit networks bers vector matrices accounts factoring dimension various schemas recurrent networks gates memory found useful overcoming problem vanishing gradients training recurrent neural networks starting lstm including variety schemas convenient compact overview lstm gated recurrent units networks related schemas see section standard way describe lstm gated recurrent unit networks think networks sigmoid neurons augmented external memory gating mechanisms however long understood used present paper neurons linear activation functions used accumulators implement memory also known least years bilinear neurons neurons multiplying two inputs inputs accumulating linear combinations output signals neurons used modulate signals via multiplicative masks gates implement conditional constructions fashion see also section looking formulas ltsm gated recurrent unit networks table one observe instead thinking networks networks sigmoid neurons augmented external memory gating mechanisms one describe simply recurrent neural networks built sigmoid neurons linear neurons bilinear neurons without external mechanisms ltsm gated recurrent unit networks built recurrent neural networks sigmoid neurons linear neurons bilinear neurons weights variable subject training weights fixed zeros ones establish particular network topology software prototypes prototyped lightweight pure dmms processing lightweight pure dmms directory project fluid open source project dedicated experiments computational architectures based linear streams simplicity used numbers index rows columns matrices instead using semantically meaningful strings recommend use indices work particular demonstrated experiments enough consider set several constant update matrices together network update mechanism described present paper create oscillations network weights waves network connectivity patterns aug experiment directory assume neuron self adds matrices movement obtain matrix assume starting moment assume constant matrix network starts movement first movement becomes copy becomes copy first movement time changes sign second movement becomes minus second movement time changes sign etc obtained simple oscillation network weight network matrix given moment time lightweight pure dataflow matrix machines aug experiment directory pure dataflow matrix machines networks finite part network active given moment time process streams matrices finite number elements sometimes convenient consider case networks finite size fixed number inputs fixed number outputs still would like networks process streams matrices describing network weights topology matrices would finite rectangular matrices call resulting class networks lightweight pure dmms work reals limited precision consider fixed values resulting class memory space finite however often useful consider class didactic purposes theoretical constructions software prototypes tend simpler case many computational effects already illustrated generality instead take collection constant update matrices like previous example make sure first rows indexed matrices second rows indexed take rest elements second rows matrices start matrix first row second row containing element one easily see verify downloading running processing open source software lightweight pure experiment directory project fluid moment element second row moment element second 
row moment wave network tivity pattern loops back continues looping indefinitely states dimension network operators network outputs matrix hence overall dimension output space network inputs matrix hence overall dimension input space overall dimension space possible linear operators outputs inputs could potentially used movement however model actually uses matrices dimension movement subspace dimension overall space possible linear operators dimension allowed matrix applied vector numbers vector matrices yields vector pure dataflow matrix machines version final remarks actual implementation self prototype enforces constraint making update matrices dynamically dependent upon input symbols one could embed arbitrary deterministic finite automaton control mechanism fashion | 6 |
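To make the two-stroke engine and the revised Self accumulator concrete, the following is a small illustrative sketch of one lightweight pure DMM step in Python with numpy. It uses a fixed number of integer-indexed ports rather than the string-valued indices and countable address space recommended above, and the neuron set, shapes and names are illustrative assumptions, not the Processing prototype from the Fluid repository.

```python
import numpy as np

# All streams carry k-by-k matrices, and the network matrix W (rows = input
# ports, columns = output ports) is itself such a matrix; it is held by the
# accumulator neuron "self", whose inputs are split as argued above:
# port 0 is a self-loop copying its own output, port 1 collects additive updates.
k = 4                                    # number of input ports = output ports
neurons = {
    "self": {"ins": [0, 1], "out": 0, "f": lambda acc, upd: acc + upd},
    "mul":  {"ins": [2, 3], "out": 1, "f": lambda a, b: a * b},  # Hadamard product
}
outputs = [np.zeros((k, k)) for _ in range(k)]   # unused output ports stay zero
outputs[0] = np.eye(k)   # start W as the identity: row 0 is then the self-loop

for step in range(10):
    W = outputs[neurons["self"]["out"]]          # current network matrix
    # "down" movement: every input is a linear combination of all outputs
    inputs = [sum(W[i, j] * outputs[j] for j in range(k)) for i in range(k)]
    # "up" movement: each neuron transforms its inputs into its output
    for n in neurons.values():
        outputs[n["out"]] = n["f"](*(inputs[i] for i in n["ins"]))
```

Seeding nonzero weights into row 1 of W (the row feeding Self's update port) makes the network matrix evolve from step to step, which is how the oscillating-weight and looping connectivity-wave experiments described above arise.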
accepted workshop contribution iclr earning onger emory ecurrent eural etworks apr tomas mikolov armand joulin sumit chopra michael mathieu marc aurelio ranzato facebook artificial intelligence research broadway new york city usa tmikolov ajoulin spchopra myrhev ranzato bstract recurrent neural network powerful model learns temporal patterns sequential data long time believed recurrent networks difficult train using simple optimizers stochastic gradient descent due vanishing gradient problem paper show learning longer term patterns real data natural language perfectly possible using gradient descent achieved using slight structural modification simple recurrent neural network architecture encourage hidden units change state slowly making part recurrent weight matrix close identity thus forming kind longer term memory evaluate model language modeling tasks benchmark datasets obtain similar performance much complex long short term memory lstm networks hochreiter schmidhuber ntroduction models sequential data natural language speech video core many machine learning applications widely studied past approaches taking roots variety fields goodman young koehn particular models based neural networks successful recently obtaining performances automatic speech recognition dahl language modeling mikolov video classification simonyan zisserman models mostly based two families neural networks feedforward neural networks recurrent neural networks feedforward architectures neural networks usually represent time explicitly window recent history rumelhart type models work well practice fixing window size makes dependency harder learn done cost linear increase number parameters recurrent architectures hand represent time recursively example simple recurrent network srn elman state hidden layer given time conditioned previous state recursion allows model store complex signals arbitrarily long time periods state hidden layer seen memory model theory architecture could even encode perfect memory simply copying state hidden layer time theoretically powerful recurrent models widely considered hard train due vanishing exploding gradient problems hochreiter bengio mikolov showed avoid exploding gradient problem using simple yet efficient strategy gradient clipping allowed efficiently train models large datasets using simple tools stochastic gradient descent time williams zipser werbos nevertheless simple recurrent networks still suffer vanishing gradient problem gradients propagated back time magnitude almost always exponentially shrink makes memory srns focused short term patterns practically ignoring longer accepted workshop contribution iclr term dependencies two reasons happens first standard nonlinearities sigmoid function gradient close zero almost everywhere problem partially solved deep networks using rectified linear units relu nair hinton second gradient backpropagated time magnitude multiplied recurrent matrix eigenvalues matrix small less one gradient converge zero rapidly empirically gradients usually close zero steps backpropagation makes hard simple recurrent neural networks learn long term patterns many architectures proposed deal vanishing gradients among long short term memory lstm recurrent neural network hochreiter schmidhuber modified version simple recurrent network obtained promising results hand writing recognition graves schmidhuber phoneme classification graves schmidhuber lstm relies fairly sophisticated structure made gates control flow information hidden neurons allows network 
potentially remember information longer periods another interesting direction considered exploit structure hessian matrix respect parameters avoid vanishing gradients achieved using secondorder methods designed objective functions see section lecun unfortunately clear theoretical justification using hessian matrix would help best knowledge conclusive thorough empirical study topic paper propose simple modification srn partially solve vanishing gradient problem section demonstrate simply constraining part recurrent matrix close identity drive hidden units called context units behave like cache model capture long term information similar topic text kuhn mori section show model obtain competitive performance compared sequence prediction model lstm language modeling datasets odel imple recurrent network figure simple recurrent network recurrent network context features consider sequential data comes form discrete tokens characters words assume fixed dictionary containing tokens goal design model able predict next token sequence given past section describe simple recurrent network srn model popularized elman cornerstone work srn consists input layer hidden layer recurrent connection output layer see figure recurrent connection allows propagation time information state hidden layer given sequence tokens srn takes input encoding current token predicts probability next one current token representation prediction hidden layer units store additional information previous tokens seen sequence precisely time state accepted workshop contribution iclr hidden layer updated based previous state encoding current token according following equation axt exp sigmoid function applied coordinate wise token embedding matrix matrix recurrent weights given state hidden units network outputs probability vector next token according following equation function output matrix cases size dictionary significant tokens standard language modeling tasks computing normalization term function often type architecture common trick introduced goodman replace function hierarchical use simple hierarchy two levels binning tokens clusters cumulative word frequency reduces complexity computing cost lower performance around loss perplexity mention explicitly use approximation experiments model trained using stochastic gradient descent method time rumelhart williams zipser werbos use gradient renormalization avoid gradient explosion practice strategy equivalent gradient clipping since gradient explosions happen rarely reasonable used details implementation given experiment section generally believed using strong nonlinearity necessary capture complex patterns appearing data particular class mapping neural network learn input space output space depends directly nonlinearities along number hidden layers sizes however nonlinearities also introduce socalled vanishing gradient problem recurrent networks vanishing gradient problem states gradients get propagated back time magnitude quickly shrinks close zero makes learning longer term patterns difficult resulting models fail capture surrounding context next section propose simple extension srn circumvent problem yielding model retain information longer context ontext features section propose extension srn adding hidden layer specifically designed capture longer term dependencies design layer following two observations nonlinearity cause gradients vanish fully connected hidden layer changes state completely every time step srn uses fully connected recurrent matrix allows complex patterns 
propagated time suffers fact state hidden units changes rapidly every time step hand using recurrent matrix equal identity removing nonlinearity would keep state hidden layer constant every change state would come external inputs allow retain information longer period time precisely rule would bxt context embedding matrix solution leads model trained efficiently indeed gradient recurrent matrix would never vanish would require propagation gradients beginning training set many variations around type memory studied past see mozer overview existing models models based srn accepted workshop contribution iclr recurrent connections hidden units differ diagonal weights recurrent matrix constrained recently pachitariu sahani showed type architecture achieve performance similar full srn size dataset model small type architecture potentially retain information longer term statistics topic text scale well larger datasets pachitariu sahani besides argued purely linear srns learned weights perform similarly combination cache models different rates information decay kuhn mori cache models compute probability next token given unordered representation longer history well known perform strongly small datasets goodman mikolov zweig show using contextual features additional inputs hidden layer leads significant improvement performance regular srn however work contextual features using standard nlp techniques learned part recurrent model work propose model learns contextual features using stochastic gradient descent features state hidden layer associated diagonal recurrent matrix similar one presented mozer words model possesses fully connected recurrent matrix produce set quickly changing hidden units diagonal matrix encourages state context units change slowly see detailed model figure fast layer called hidden layer rest paper learn representations similar models slowly changing layer called context layer learn topic information similar cache models precisely denoting state context units time update rules model bxt axt parameter matrix note nonlinearity applied state context units contextual hidden units seen exponentially decaying bag words representation history exponential trace memory denoted mozer already proposed context simple recurrent networks jordan mozer close idea work use leaky integration neurons jaeger also forces neurons change state slowly however without structural constraint scrn evaluated dataset use penn treebank bengio interestingly results observed experiments show much bigger gains stronger baseline using model shown later alternative model interpretation consider context units additional hidden units activation function see model srn constrained recurrent matrix hidden context units identity matrix square matrix size sum number hidden context units reformulation shows explicitly structural modification elman srn elman constrain diagonal block recurrent matrix equal reweighed identity keep block equal reason call model structurally constrained recurrent network scrn adaptive context features fixing weight constant forces hidden units capture information time scale hand allow weight learned unit potentially capture context different time delays pachitariu sahani precisely denote recurrent matrix contextual hidden layer consider following update rule state contextual hidden layer accepted workshop contribution iclr bxt diagonal matrix diagonal elements suppose diagonal elements obtained applying sigmoid transformation parameter vector diag parametrization naturally forces diagonal 
weights stay strictly study following section situations learning weights help interestingly show learning weights seem important long one uses also standard hidden layer model xperiments evaluate model language modeling task two datasets first dataset penn treebank corpus consists words training set data division training validation test parts mikolov performance dataset achieved zaremba using combination many big regularized lstm recurrent neural network language models lstm networks first introduced language modeling sundermeyer second dataset moderately sized called composed version first million characters wikipedia dump split training part first characters development set last characters use report performance constructed vocabulary replaced words occur less times unk token resulting vocabulary size simplify reproducibility results released scrn code scripts construct datasets section compare performance proposed model standard srns lstm rnns becoming architecture choice modeling sequential data dependencies mplementation etails used torch library implemented proposed model following graph given figure note following alternative interpretation model recurrent matrix defined model could simply implemented modifying standard srn fix unless stated otherwise number backpropagation time bptt steps set model chosen parameter search validation set normal srn use bptt steps gradients vanish faster stochastic gradient descent every forward steps model trained batch gradient descent size learning rate divide learning rate training epoch validation error decrease esults enn reebank orpus first report results penn treebank corpus using small moderately sized models respect number hidden units table shows structurally constrained recurrent network scrn model achieve performance comparable lstm models small datasets relatively small numbers parameters noted lstm models significantly parameters size hidden layer making comparison somewhat unfair input forget output gates lstm parameters srn size hidden layer comparison leaky neurons also favor scrn bengio report perplexity reduction srn srn leaky neurons dataset observed much bigger improvement going perplexity srn scrn table also shows scrn outperforms srn architecture even much less parameters seen comparing performance scrn hidden contextual units test scrn code downloaded http accepted workshop contribution iclr perplexity versus srn hidden units perplexity suggests imposing structure recurrent matrix allows learning algorithm capture additional information obtain evidence additional information longer term character run experiments dataset contains various topics thus longer term information affects performance dataset much model ngram ngram cache srn srn srn lstm lstm lstm scrn scrn scrn scrn hidden context validation perplexity test perplexity table results penn treebank corpus baseline simple recurrent nets srn long short term memory rnns lstm structurally constrained recurrent nets scrn note lstm models parameters srns size hidden layer earning elf ecurrent eights evaluate influence learning diagonal weights recurrent matrix contextual layer following experiments used hierarchical classes penn treebank corpus speedup experiments table show size hidden layer small learning diagonal weights crucial result confirms findings pachitariu sahani however increase size model use sufficient number hidden units learning weights give significant improvement indicates learning weights contextual units allows units used representation history contextual 
units specialize recent history example close contextual units would part simple bigram language model various learned weights model seen combination cache bigram models number standard hidden units enough capture short term patterns learning weights seem crucial anymore keeping observation mind fixed diagonal weights working corpus model scrn scrn scrn scrn scrn scrn hidden context fixed weights adaptive weights table perplexity test set penn treebank corpus without learning weights contextual features note experiments used hierarchical esults ext next experiment involves corpus significantly larger penn treebank dataset contains various articles wikipedia longer term information current topic plays bigger role previous experiments illustrated gains cache added baseline model perplexity drops reduction accepted workshop contribution iclr report experiments range model configurations different number hidden units table show increasing capacity standard srns adding contextual features results better performance example add contextual units srn hidden units perplexity drops reduction model also much better srn hidden units perplexity model scrn scrn scrn hidden context context context context context table structurally constrained recurrent nets perplexity various sizes contextual layer reported development set dataset table see number hidden units small model better lstm despite lstm model hidden units larger scrn hidden contextual features achieves better performance hand size models increase see best lstm model slightly better best scrn perplexity versus perplexity gains lstm scrn srn much significant penn treebank experiments seems likely models actually model kind patterns language model srn srn srn lstm lstm lstm scrn scrn scrn hidden context perplexity development set table comparison various recurrent network architectures evaluated development set onclusion paper shown learning longer term patterns real data using recurrent networks perfectly doable using standard stochastic gradient descent introducing structural constraint recurrent weight matrix model interpreted quickly changing hidden layer focuses short term patterns slowly updating context layer retains longer term information empirical comparison scrn long short term memory lstm recurrent network shows similar behavior two language modeling tasks similar gains simple recurrent network models tuned best accuracy moreover scrn shines cases size models constrained similar number parameters often outperforms lstm large margin especially useful cases amount training data practically unlimited even models thousands hidden neurons severely underfit training datasets believe findings help researchers better understand problem learning longer term memory sequential data model greatly simplifies analysis implementation recurrent networks capable learning longer term patterns published code allows easily reproduce experiments described paper time noted none models capable learning truly long term memory different nature example would want build model accepted workshop contribution iclr store arbitrarily long sequences symbols reproduce later would become obvious doable models finite capacity possible solution use recurrent net controller external memory unlimited capacity example joulin mikolov memory used task however lot research needs done direction develop models successfully learn grow complexity size solving increasingly difficult tasks eferences bengio yoshua simard patrice frasconi paolo learning dependencies gradient descent 
difficult. Neural Networks, IEEE Transactions.
Bengio Yoshua, Nicolas, Pascanu Razvan. Advances optimizing recurrent networks. ICASSP.
Dahl George, Dong, Deng, Acero Alex. Deep neural networks speech recognition. Audio, Speech, Language Processing, IEEE Transactions.
Elman Jeffrey. Finding structure time. Cognitive Science.
Goodman Joshua. Classes fast maximum entropy training. Acoustics, Speech, Signal Processing (ICASSP), IEEE International Conference, volume. IEEE.
Goodman Joshua. Bit progress language modeling. Computer Speech Language.
Graves Alex, Schmidhuber Juergen. Offline handwriting recognition multidimensional recurrent neural networks. Advances Neural Information Processing Systems.
Graves Alex, Schmidhuber. Framewise phoneme classification bidirectional LSTM neural network architectures. Neural Networks.
Hochreiter Sepp. Vanishing gradient problem learning recurrent neural nets problem solutions. International Journal Uncertainty, Fuzziness Systems.
Hochreiter Sepp, Schmidhuber. Long memory. Neural Computation.
Jaeger Herbert, Mantas, Popovici Dan, Siewert Udo. Optimization applications echo state networks neurons. Neural Networks.
Jordan Michael. Attractor dynamics parallelism connectionist sequential machine. Proceedings Eighth Annual Conference Cognitive Science Society.
Joulin Armand, Mikolov Tomas. Inferring algorithmic patterns recurrent nets. arXiv preprint.
Koehn Philipp, Hoang Hieu, Birch Alexandra, Chris, Federico Marcello, Bertoldi Nicola, Cowan Brooke, Shen Wade, Moran Christine, Zens Richard. Moses: open source toolkit statistical machine translation. Proceedings Annual Meeting ACL, Interactive Poster Demonstration Sessions. Association Computational Linguistics.
Kuhn Roland, Mori Renato. Natural language model speech recognition. Pattern Analysis Machine Intelligence, IEEE Transactions.
LeCun Yann, Bottou Leon, Orr Genevieve, Klaus. Efficient backprop. Neural Networks: Tricks Trade.
Mikolov. Statistical language models based neural networks. PhD thesis, Brno University Technology.
Mikolov Tomas, Zweig Geoffrey. Context dependent recurrent neural network language model. SLT.
Mikolov Tomas, Kombrink Stefan, Burget Lukas, Cernocky, Khudanpur Sanjeev. Extensions recurrent neural network language model. Acoustics, Speech, Signal Processing (ICASSP), IEEE International Conference. IEEE.
Mozer Michael. Focused algorithm temporal pattern recognition. Complex Systems.
Mozer Michael. Neural net architectures temporal sequence processing. Santa Institute Studies Sciences Complexity, volume. Publishing.
Nair Vinod, Hinton Geoffrey. Rectified linear units improve restricted Boltzmann machines. Proceedings International Conference Machine Learning.
Pachitariu Marius, Sahani Maneesh. Regularization nonlinearities neural language models needed. arXiv preprint.
Rumelhart David, Hinton Geoffrey, Williams Ronald. Learning internal representations error propagation. Technical report, DTIC document.
Simonyan Karen, Zisserman Andrew. Convolutional networks action recognition videos. Advances Neural Information Processing Systems.
Sundermeyer Martin, Ralf, Ney Hermann. LSTM neural networks language modeling. Interspeech.
Werbos Paul. Generalization backpropagation application recurrent gas market model. Neural Networks.
Williams Ronald, Zipser David. Learning algorithms recurrent networks computational complexity. Theory, Architectures, Applications.
Young Steve, Evermann Gunnar, Gales Mark, Hain Thomas, Kershaw Dan, Liu Xunying, Moore Gareth, Odell Julian, Ollason Dave, Povey Dan. HTK book, volume. Entropic Cambridge Research Laboratory, Cambridge.
Zaremba Wojciech, Sutskever Ilya, Vinyals Oriol. Recurrent neural network regularization. arXiv preprint. | 9
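To make the architecture discussed in the entry above concrete, below is a minimal NumPy sketch of an SCRN forward pass. It assumes the usual structurally constrained formulation alluded to above: the contextual (slow) units are an exponentially decaying average of a projected input with diagonal weight alpha, while the hidden (fast) units behave as in a standard SRN. The matrix names A, B, P, R, U, V, the function name scrn_forward, and the fixed alpha = 0.95 are illustrative assumptions, not the released Torch implementation.

```python
import numpy as np

def scrn_forward(x_seq, params, alpha=0.95):
    """Forward pass of a structurally constrained recurrent network (SCRN) sketch.

    x_seq  : (T, d_in) sequence of input vectors (e.g., one-hot words)
    params : dict with A (hid x in), B (ctx x in), P (hid x ctx),
             R (hid x hid), U (out x hid), V (out x ctx)
    alpha  : fixed diagonal weight of the context layer; making it a
             learnable per-unit vector gives the adaptive-weights variant
    """
    A, B, P, R, U, V = (params[k] for k in "ABPRUV")
    h = np.zeros(A.shape[0])                      # fast hidden state
    s = np.zeros(B.shape[0])                      # slow context state
    outputs = []
    for x in x_seq:
        s = (1.0 - alpha) * (B @ x) + alpha * s   # slowly updating context layer
        h = 1.0 / (1.0 + np.exp(-(A @ x + P @ s + R @ h)))  # standard sigmoid hidden layer
        logits = U @ h + V @ s                    # both layers feed the output
        p = np.exp(logits - logits.max())
        outputs.append(p / p.sum())               # softmax over the vocabulary
    return np.array(outputs), h, s
```

Under this view the full recurrent matrix is block-structured, with a diagonal (identity-like) block on the context units, which is why the model can also be obtained by constraining a standard SRN, as noted above.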
convex regularization apr tensor regression garvesh ming han university abstract paper present general convex optimization approach solving highdimensional multiple response tensor regression problems structural assumptions consider using convex weakly decomposable regularizers assuming underlying tensor lies unknown subspace within framework derive general risk bounds resulting estimate fairly general dependence structure among covariates framework leads upper bounds terms two simple quantities gaussian width convex set tensor space intrinsic dimension tensor subspace best knowledge first general framework applies multiple response problems general bounds provide useful upper bounds rates convergence number fundamental statistical models interest including regression vector models tensor models pairwise interaction models moreover many settings prove resulting estimates minimax optimal also provide numerical study validates theoretical guarantees demonstrates breadth framework departments statistics computer science optimization group wisconsin institute discovery university university avenue madison research garvesh raskutti supported part nsf grant morgridge institute research department statistics university university avenue madison research ming yuan han chen supported part nsf frg grant nih grant introduction many modern scientific problems involve solving statistical problems sample size small relative ambient dimension underlying parameter estimated past decades large amount work solving problems imposing structure parameter interest particular sparsity subspace assumptions studied extensively terms development fast algorithms theoretical guarantees see buhlmann van geer hastie overview prior work focussed scenarios parameter interest vector matrix increasingly common practice however parameter object estimated naturally higher order tensor structure examples include hyperspectral image analysis computed tomography semerci radar signal processing sidiropoulos nion audio classification mesgarani text mining cohen collins among numerous others much less clear low dimensional structures inherent problems effectively accounted main purpose article fill void provide general unifying framework consider general tensor regression problem covariate tensors response tensors rdm related unknown parameter interest independent identically distributed noise tensors whose entries independent identically distributed centred normal random variables variance simplicity assume covariates gaussian fairly general dependence assumptions notation refer throughout paper standard inner product taken appropriate euclidean spaces hence usual inner product rdm entry given goal tensor regression estimate coefficient tensor based observations addition canonical example tensor regression scalar response many commonly encountered regression problems also special cases general tensor regression model regression see anderson vector autoregressive model see pairwise interaction tensor model see rendle notable examples article provide general treatment seemingly different problems main focus situations dimensionality large compared sample size many practical settings true regression coefficient tensor may certain types structure high ambient dimension regression coefficient tensor essential account structure estimating sparsity common examples low dimensional structures case tensors sparsity could occur level level level depending context leading different interpretations also multiple ways may present comes higher 
order tensors either original tensor level matricized tensor level article consider general class convex regularization techniques exploit either type structure particular consider standard convex regularization framework arg min regularizer norm tuning parameter hereafter tensor kakf derive general risk bounds family weakly decomposable regularizers fairly general dependence structure among covariates general upper bounds apply number concrete statistical inference problems including aforementioned regression vector models tensor models pairwise interaction tensors show typically optimal minimax sense developing general results make several contributions fast growing literature high dimensional tensor estimation first provide unified principled approach exploit low dimensional structure tensor problems incorporate extension notion decomposability originally introduced negahban vector matrix models weak decomposability previously introduced van geer allows handle delicate tensor models nuclear norm regularization tensor models moreover provide regularized least squared estimate given general risk bound easily interpretable condition design tensor risk bound derive presented terms merely two geometric quantities gaussian width depends choice regularization intrinsic dimension subspace tensor lies believe first general framework applies multiple responses general dependence structure covariate tensor finally general results lead novel upper bounds several important regression problems involving tensors regression models pairwise interaction models also prove resulting estimates minimiax rate optimal appropriate choices regularizers framework incorporates tensor structure multiple responses present number challenges compared previous approaches challenges manifest terms choice regularizer technical challenges proof main result firstly since notion generic tensors meaning number choices convex regularizer must satisfy form weak decomposability provide optimal rates multiple responses flexible dependence structure among covariates also present significant technical challenges proving restricted strong convexity key technical tool establishing rates convergence particular uniform law lemma required instead classical techniques developed negahban wainwright raskutti apply univariate responses remainder paper organized follows section introduce general framework using weakly decomposable regularizers exploiting structures high dimensional tensor regression section present general upper bound weakly decomposable regularizers discuss specific risk bounds commonly used sparsity regularizers tensors section apply general result three specific statistical problems namely regression multivariate autoregressive model pairwise interaction model show three examples appropriately chosen weakly decomposable regularizers leads minimax optimal estimation unknown parameters numerical experiments presented section demonstrate merits breadth approach proofs provided section methodology recall regularized estimate given arg min brevity assume implicitly hereafter minimizer left hand side uniquely defined development actually applies general case taken arbitrary element set minimizers particularly interest weakly decomposable convex regularizers extending similar concept introduced negahban vectors matrices let arbitrary linear subspace orthogonal complement call regularizer weakly decomposable respect pair exist constant particular holds say weakly decomposable respect general version concept first introduced van 
geer norm triangular inequality also many commonly used regularizers tensors weakly decomposable decomposable short definition decomposability naturally extends similar notion vectors matrices introduced negahban also allow general choices ensure wider applicability example shall see popular tensor nuclear norm regularizer decomposable respect appropriate linear subspaces decomposable described catalogue commonly used regularizers tensors argue decomposable respect appropriately chosen subspaces fix ideas shall focus follows estimating tensor although discussion straightforwardly extended tensors sparsity regularizers obvious way encourage sparsity impose vector penalty entries following idea lasso linear regression see tibshirani canonical example decomposable regularizers fixed write clear defined decomposable respect many applications sparsity arises structured fashion tensors example fiber slice tensor likely zero simultaneously fibers tensor collection vectors fibers defined fashion fix ideas focus fibers sparsity among fibers exploited using regularizer similar group lasso see yuan lin stands usual vector norm similar vector regularizer group regularizer also decomposable fixed write clear defined decomposable respect note defining regularizer instead vector norm norms could also used see turlach sparsity could also occur slice level slices tensor collection matrices let arbitrary norm matrices following group regularizer considered typical examples matrix norm used include frobenius norm nuclear norm among others case used decomposable regularizer respect consider case use matrix nuclear norm let two sequences projection matrices respectively let pinching inequality see bhatia derived decomposable respect regularizers addition sparsity one may also consider tensors multiple notions rank tensors see koldar bader recent review particular rank defined smallest number tensors needed represent tensor encourage low rank estimate consider nuclear norm regularization following yuan zhang define nuclear norm dual norm specifically let spectral norm given kaks max kuk kvk kwk nuclear norm defined max kbks shall consider regularizer show also weakly decomposable regularizer let projection matrix rdk define write lemma projection matrices rdk lemma direct consequence characterization tensor nuclear norm given yuan zhang viewed tensor version pinching inequality matrices write lemma defined weakly decomposable respect note counterexample also given yuan zhang shows tensor nuclear norm take another popular way define tensor rank tucker decomposition recall tucker decomposition tensor form orthogonal matrices core tensor two slices orthogonal triplet referred tucker ranks hard see holds tucker ranks equivalently interpreted dimensionality linear spaces spanned respectively following relationship holds rank tucker ranks max min convenient way encourage low tucker ranks tensor matricization let denote matricization tensor matrix whose column vectors fibers also defined fashion clear rank natural way encourage therefore nuclear norm regularization kmk pinching inequality matrices defined also decomposable respect risk bounds decomposable regularizers establish risk bounds general decomposable regularizers particular bounds given terms gaussian width suitable set tensors recall gaussian width set given supha tensor whose entries independent random variables see gordon details gaussian width note gaussian width geometric measure volume set related volumetric characterizations see pisier also define 
unit ball follows impose mild assumption kakf ensures regularizer encourages structure define quantity relates size norm frobenius norm kakf subspace following negahban subspace define compatibility constant kakf sup interpreted notion intrinsic dimensionality turn attention covariate tensor denote vec vectorized covariate ith sample slight abuse notation write vec concatenated covariates samples convenience let brevity assume gaussian design cov rndm technical work results may extended beyond gaussian designs note require sample tensors independent shall assume bounded eigenvalues later verify number statistical examples let represent smallest largest eigenvalues matrix respectively follows shall assume constants note particular covariates independent identically distributed block diagonal structure boils similar conditions cov however general applicable settings may dependent models shall discuss detail section position state main result risk bounds terms frobenius norm empirical norm tensor define kha main reason focus random gaussian design prove uniform law relates empirical norm defined frobenius norm tensor see lemma lemma analogous restricted strong convexity defined negahban since dealing multiple responses refined technique required prove lemma theorem suppose holds tensor linear subspace holds let defined regularizer decomposable respect linear subspace exists constant probability least exp max ktb ktb sufficiently large assuming right hand side converges zero increases stated theorem upper bound boils bounding two quantities purely geometric quantities provide intuition captures large norm relative norm captures low dimension subspace several technical remarks order note expressed expectation dual norm according see rockafellar details dual norm given sup supremum taken tensors dimensions straightforward see best knowledge first general result applies multiple responses mentioned earlier incorporating multiple responses presents technical challenge see lemma uniform law analogous restricted strong convexity theorem focusses gaussian design results extended random design using sophisticated techniques see mendelson zhou fixed design assuming covariates deterministically satisfy condition lemma since focus paper general dependence structure assume random gaussian design one important practical challenge typically unknown clearly influence choice common challenge statistical inference address issue paper practice typically chosen sophisticated choice based estimation constants remains open question another important open question choices upper bound optimal constant section provide specific examples provide minimax lower bounds match upper bounds constant however see tensor regression tensor regression discussed section aware convex regularizer matches minimax lower bound develop upper bounds quantities different scenarios previous section shall focus third order tensor rest section ease exposition sparsity regularizers first consider sparsity regularizers described previous section sparsity recall vectorized regularizer could used exploit sparsity clearly max shown lemma exists constant log let arbitrary write decomposable respect defined easy verify kbkf sup light theorem implies log sup max high probability taking log regularized least squares estimate defined using regularizer similar argument also applied sparsity fix ideas consider sparsity among fibers case use group lasso type regularizer max lemma exists constant let max log similar previous case arbitrary write 
decomposable respect defined easy verify kbkf sup light theorem implies max log sup max high probability taking max log regularized least squares estimate defined using regularizer comparing rates sparsity regularization see benefit using group lasso type regularizer sparsity likely occur fiber level specifically consider case total nonzero entries nonzero fibers regularization applied achieve risk bound log hand group regularization applied risk bound becomes max log nonzero entries clustered fibers may expect case enjoys performance superior since log larger max log sparsity structure consider sparsity structure fix ideas consider sparsity among slices discussed previous section two specific types regularizers could employed recall denotes nuclear norm matrix sum singular values note max following result lemma exists constant let max log arbitrary write decomposable respect defined easy verify kbkf sup based theorem implies max log sup max high probability taking max log regularized least squares estimate defined using regularizer alternatively max following lemma exists constant consider max log rank arbitrary denote projection onto row column space respectively clear defined addition recall decomposable respect defined hard see derive lemma kbkf sup light theorem implies max log sup max high probability taking max log regularized least squares estimate defined using regularizer comparing rates estimates regularizers see benefit using nonzero slices likely particular consider case nonzero slices nonzero slice rank applying leads risk bound max log whereas applying leads max log clear better estimator regularizers consider regularizers encourages low rank estimates begin tensor nuclear norm regularization recall kaks lemma exists constant let max arbitrary denote projection onto linear space spanned fibers respectively argued previous section weakly decomposable respect defined respectively lemma sup kbkf lemmas show sup max high probability taking regularized least squares estimate defined using regularizer next consider regularization via matricization hard see max lemma exists constant max hand lemma kbkf sup lemmas suggest max sup max high probability taking max regularized least squares estimate defined using regularizer comparing rates estimates regularizers see benefit using apply regularizer compared risk bound matricized regularization max obviously always outperform since min advantage typically rather significant since general min hand amenable computation upper bounds frobenius error novel results complement existing results tensor completion gandy yuan zhang neither minimax optimal remains interesting question whether exists convex regularization approach minimax optimal specific statistical problems section apply results several concrete examples attempting estimate tensor certain sparse low rank constraints show regularized least squares estimate typically minimiax rate optimal appropriate choices regularizers particular focus aspect general framework provide novel upper bounds matching minimax lower bounds regression large first example consider regression model represents index sample represents index response represents index feature regression problem represents total number responses represent total number parameters since setting large small number relevant define subspace furthermore assume entry corresponds feature response simplicity assume independent penalty function considering gaussian covariance corresponding dual function applied gaussian tensor max theorem 
regression model independent gause sian design max log converges zero increases exist constants probability least max ktb ktb sufficiently large regularized least squares estimate defined regularizer given addition max log min max kte constant probability least minimum taken estimators based data theorem shows taking max log regularized least squares estimate defined regularizer given achieves minimax optimal rate convergence parameter space alternatively settings effect covariates multiple tasks may low rank structure situation may consider rank appropriate penalty function case corresponding dual function applied max theorem regression model independent gause sian design max log converges zero increases exist constants probability least max ktb ktb sufficiently large regularized least squares estimate defined regularizer given addition max log min max kte constant probability least minimum taken estimators based data theorem shows taking max log regularized least squares estimate defined regularizer given achieves minimax optimal rate convergence parameter space comparing optimal rates estimating tensor one see benefit importance take advantage extra low rankness true coefficient tensor indeed far aware first results provide upper bounds matching minimax lower bounds regression sparse slices pointed earlier challenge going scalar multiple response proving lemma analog restricted strong convexity multivariate sparse models consider setting vector models case generative model represents time index represents lag index vector represents additive noise note parameter tensor tensor represents variable variable lag model studied basu michailidis relatively small avoid introducing dependence large main results allow general structure regularization schemes considered basu michailidis since assume number series large possible interactions series assume interactions total penalty function considering kak corresponding dual function applied max kgk challenge setting highly dependent use results developed basu michailidis prove satisfied prior presenting main results introduce concepts developed basu michailidis play role determining constants relate stability processes gaussian time series defined matrix function cov define spectral density function ensure spectral density bounded make following assumption ess sup define matrix polynomial denote matrices represents point complex plane note stable invertible process also define lower extremum spectral density ess inf note satisfy following bounds min max straightforward calculation fixed hence state main result models theorem vector model defined max log converges zero increases exist constants probability least max max sufficiently large regularized least squares estimators defined regularizer given addition max log min max constant probability least minimum taken estimators based data theorem provides best knowledge lower bound result multivariate time series upper bound also novel different proposition basu michailidis since impose sparsity large directions lags whereas basu michailidis impose sparsity vectorization note proposition basu michailidis follows directly lemma using sparsity regularizer basu michailidis vectorize problem prove restricted strong convexity whereas since leave problem problem requried refined technique used proving lemma pairwise interaction tensor models finally consider tensor regression follows pairwise interaction model specifically independent copies random couple pairwise interaction used originally rendle rendle 
schmidtthieme personalized tag recommendation later analyzed chen hoff briefly introduced single index additive model amongst tensor models pairwise interaction model regularizer consider hard see defined decomposable respect projection matrices let max rank simplicity assume gaussian design theorem pairwise interaction model max converges zero increases exist constants probability least min max sufficiently large regularized least squares estimate defined regularizer given addition max min max kte constant probability least minimum taken estimate based data settings theorem establishes minimax optimality regularized least squares estimate using appropriate convex decomposable regularizer since single response norm involves matricization result straightforward extension earlier results numerical experiments section provide series numerical experiments support theoretical results display flexibility general framework particular consider several different models including tensor regression scalar response section tensor regression section regression group sparsity regularizers section sparse autoregressive models section pairwise interaction models section perform simulations computationally tractable way adapt block coordinate descent approaches case developed simon developed qin univariate response settings capture group sparsity regularizers fix ideas numerical experiments covariate tensors independent standard gaussian ensembles except multivariate models noise random tensors elements following independently choice tuning parameter adopt grid search find one least estimation error terms mean squared error numerical examples tensor regression first consider tensor regression model regression coefficient tensor generated follows first slices standard normal ensembles remaining slices set zero naturally consider regularizer min ikf figure shows mean squared error estimate averaged runs standard mse mse mse figure mean squared error regularization third order tensor regression plot based simulation runs error bars panel represent one standard deviation deviation versus respectively left middle panels set whereas right panel fixed observe mean squared error increases approximately according agrees risk bound given lemma also considered setting specifically nonzero slices random matrices case lowrankness regularizer employed min performance estimate averaged simulation runs summarized figure mse mse mse figure mean squared error third order tensor regression slices tensor coefficients plot based simulation runs error bars panel represent one standard deviation left middle panels right panel results consistent theoretical results tensor regression although focused third order tensors brevity treatment applies higher order tensors well illustration consider fourth order models generate tensors impose low rank follows generate four independent groups independent random vectors unit length via performing svd gaussian random matrix two times keeping pairs leading singular vectors compute yielding tensor consider two different regularization schemes first impose structure matricization min secondly use square matricization follows min reshape fourth order tensor matrix collapsing first two indices last two indices respectively table shows average error rmse short approaches see approach appears superior approach also predicted theory snr rmse matricization rmse square matricization table tensor regression fourth order tensor covariates scale response based matricization rmse computed based 
simulations runs numbers parentheses standard errors regression general framework handle seamless fashion demonstration consider regression group sparsity regularizer specifically following model considered impose group sparsity first slices generated gaussian ensembles remaining slices set zero group sparsity regularizers used algorithm regression simon block coordinate descent nuclear norm penalty solutions mse mse mse figure matrix response regression sparse slices tensor coefficients plot based simulation runs error bars panel represent one standard deviation figure shows average standard deviation mean squared error runs versus parameter observe error increase approximately according log supports upper bound theorem also generated fashion figure plots average standard deviation mean squared error respectively results consistent main result theorem mse mse mse figure matrix response regression slices tensor coefficients plot based simulation runs error bars panel represent one standard deviation multivariate sparse models consider dependent covariates responses multivariate model recall generative model represents time index represents lag index vector represents additive noise consider four different structures choose entries sufficiently small ensure time series stable sparsity slices diagonal matrix diagonal elements constants zero slices sparse slices slices independent random matrix truncated matrix elements zero slices group sparsity lag sparse normal slices slices elements follow zero slices group sparsity coordinate sparse normal fibers vector normal elements following random sample size zero otherwise table shows average rmse runs case function general smaller larger harder recover coefficient findings consistent theoretical developments pairwise interaction tensor models finally consider pairwise interaction tensor models described section implement regularization scheme kept iterating among matrix slices updating one three time assuming two components fixed update conducted approximated projection onto subspace generalized gradient descent soft thresholding step step size gradient step gradient least square objective function singular space operator threshold approximated projection operator make given matrix zero row sums shifting rows zero column sums shifting columns simulated independent random matrix make zero column sums row sums table shows average standard deviation rmse different combinations runs general rmse estimating tensor coefficient increases increases diagonal slices diagonal slices slices slices gaussian slices gaussian slices gaussian fibers gaussian fibers vectorized sparsity slices group sparsity lag group sparsity coordinate snr rmse simulations runs numbers parentheses standard errors table multivariate model various rmse computed based coefficient tensor regularizer rmse snr table pairwise interaction model rmse computed based simulations runs numbers parentheses standard errors proofs section present proofs main results begin proof theorem proof theorem proof involves following main steps initial step use argument similar developed negahban exploit weak decomposability properties empirical risk minimizer convex duals upper bound ktb terms next use properties gaussian random variables supremum gaussian processes express lower bound terms gaussian width lemma final challenging step involves proving uniform law relating ktb ktb lemma analogous restricted strong convexity proof lemma uses novel truncation argument similar spirit lemma raskutti lemma 
necessary incorporate multiple responses existing results relating population norm dasgupta gupta raskutti van geer apply univariate functions throughout refers weakly decomposable regularizer tensor tensor shall write projections onto respect frobenius norm respectively since empirical minimizer substituting khx second inequality follows decomposability last one follows triangular inequality let tensor entry recall definition gaussian width simplicity let recall following lemma lemma probability least exp proof relies gaussian comparison inequalities concentration inequalities proof lemma recall set first show high probability using concentration lipschitz functions gaussian random variables see theorem appendix first prove function terms particular note sup sup arg maxa let sup sup sup sup sup kakf recall kakf implies second last inequality therefore function respect frobenius norm therefore applying theorem appendix sup sup therefore probability least exp exp complete proof use gaussian comparison inequality supremum process set recall sup recall rdm standard centered gaussian tensor entry variance vec gaussian vector covariance ndm ndm first condition indepedendent let standard normal gaussian tensors first condition indepedendent assuming using standard gaussian comparison inequality due lemma appendix proven earlier anderson condition get sup sup since cov vec ndm ndm define standard random vector conditioning dealing randomness kwj max standard normal tensor since standard normal upper bound max kwj using standard tail bounds since kwj random variable degrees freedom kwj exp using tail bounds provided appendix presented laurent massart taking union bound kwj max exp log provided log follows probability greater exp kwj therefore probability least exp apply slepian lemma slepian complete proof slepian lemma stated appendix applying slepian lemma lemma appendix sup sup substituting means completes proof light lemma remainder proof condition event event khx since get hence define cone know hence khx recall khx thus convenience remainder proof let split three cases max hand max hence case need consider iii follow similar proof technique proof theorem raskutti let define following set let define event let define alternative event claim suffices show holds probability least exp constant particular given arbitrary consider tensor since construction consequently sufficient prove holds high probability lemma assume exists exists exp proof lemma denote define random variable sup suffices show recall norm expand reacall define extension standard matricization rdm groups together first modes slight abuse notation follows vec rdm clearly vec rdm order complete proof make use truncation argument constant chosen later consider truncated quadratic function min define vec vec sign input tensor let pdn similarly pdn construction hence sup remainder proof consists showing suitable definition vec vec vec vec vec second last inequality follows inequality vec gaussian random nal inequality follows markov inequality since variable vec vec therefore setting summing implies implies prove high probability bound first upper bounding standard symmetrization argument see pollard shows vec sup rademacher random variables vec lipschitz function lipschitz constant since contrtaction inequality ledoux talagrand implies vec sup sup vec using standard comparisons rademacher guassian complexities see lemma bartlett mendelson exists vec sup vec cex sup independent standard normal random variables next upper bound 
gaussian complexity sup clearly definition earlier argument since therefore sup since sup finally need concentration bound show particular using talagrand theorem empirical processes talagrand construction vec var vec consequently talagrand inequality implies exp since claim follows setting finally return main proof event follows easily max completes proof theorem proof results section section present proofs main results section deferring technical parts appendix proof lemmas prove three lemmas together since proofs follow similar argument first let denote directions sparsity applied denote total dimension directions example lemma lemma lemma recall note represented variational form sup kvec kvkf express supremum gaussian process sup vec vec recall matricization involving either slice fiber remainder proof follows lemma appendix proof lemma recall max lemma appendix satisfies concentration inequality applying standard bounds maximum functions independent gaussian random variables max completes proof log proof lemma using standard nuclear norm upper bound matrix terms rank frobenius norm rank rank rank final inequality follows inequality finally note rank completes proof proof lemma note kgks directly apply lemma appendix proof lemma tucker decomposition clear find sets vectors addition hard see kuk kvk kwk hand shown yuan zhang kuk kvk kwk claim follows application inequality proof lemma recall considering regularizer max goal upper bound max kmk apply lemma appendix matricization implies max proof lemma hard see max completes proof proof results section section prove results section first provide general minimax lower result apply main results let arbitrary subspace tensors theorem assume holds exists finite set tensors log min max kte probability least proof use standard techniques developed ibragimov minskii extended yang barron let set let random variable uniformly distributed index set use standard argument allows provide minimax lower bound terms probability error multiple hypothesis testing problem see yang barron yields lower bound inf sup inf infimum taken estimators measurable functions let using fano inequality see cover thomas estimator log log taking expectations sides log log let denote condition distribution conditioned event dkl denote divergence convexity mutual information see cover thomas upper bound dkl given linear gaussian observation model nka dkl holds based construction exists set log holds conclude earlier bound due fano inequality log log guaranteed proof completed log log proof theorem proof upper bound follows directly lemma noting overall covariance ndm ndm since samples independent hence blocks prove lower bound use theorem construct suitable packing set way construct packing construct two separate packing sets select set higher packing number using similar argument used raskutti also uses two separate packing sets first packing set consider involves selecting slice consider vectorizing slice vec rsm hence order apply theorem define set slices isomorphic vector space rsm using lemma appendix exists packing set rsm log choose theorem implies lower bound min max kte probability greater second packing set construct slice since third direction packing number slice analogous packing number vectors ambient dimension letting need construct packing set kvk using lemma appendix exists discrete set log log setting log log min max kte probability greater taking maximum lower bounds involving packing sets completes proof lower bound theorem proof theorem upper bound 
follows directly lemma noting overall covariance ndm ndm since samples independent blocks prove lower bound use theorem construct suitable packing set construct two separate packings choose set leads larger minimax lower bound first packing set construct packing along one slice let assume rank let using lemma appendix exists set log crm set therefore using theorem min max kte probability greater second packing set involves packing space singular values since rank let singular values matrix rank constraint let vec note implies kvk using lemma exists set log log set log therefore using theorem log min max kte probability greater hence taking maximum bounds max log log max log min max kte probability greater proof theorem upper bound max log follows directly lemma satisfied according prove lower bound similar proof lower bound theorem use theorem construct two suitable packing sets first packing set consider involves selecting arbitrary subspace let vec comes vector space using lemma appendix exists packing set rsp log csp choose theorem implies lower bound min max kte probability greater second packing set construct slice since second third direction consider vector space kvk using standard standard hypercube construction lemma appendix exists discrete set log log setting log yields log min max probability greater taking maximum lower bounds involving packing sets completes proof lower bound proof theorem upper bound follows slight modification statement lemma particular since dual norm max hence following technique used lemma max max max also straightforward see prove lower bound construct three packing sets select one largest packing number recall max rank therefore three packings assuming rank focus packing since approach similar two cases using lemma appendix combination theorem min min max kte probability greater repeating process packings assuming rank taking maximum three bounds yields overall minimax lower bound max min max kte probability greater references agarwal negahban wainwright noisy matrix decomposition via convex relaxation optimal rates high dimensions annals statistics anderson integral symmetric convex set probability inequalities proc american mathematical society anderson introduction multivariate statistical analysis wiley series probability mathematical statistics wiley new york bartlett mendelson gaussian rademacher complexities risk bounds structural results journal machine learning research basu michailidis regularized estimation sparse time series models annals statistics bhatia matrix analysis springer new york buhlmann van geer statistical data springer series statistics springer new york chen lyu king exact stable recovery pairwise interaction tensors advances neural information processing systems cohen collins tensor decomposition fast parsing pcfgs advances neural information processing systems cover thomas elements information theory john wiley sons new york dasgupta gupta empirical processes bounded diameter geometric functional analysis gandy recht yamada tensor completion rank tensor recovery via convex optimization inverse problems gordon milmans inequality random subspaces escape mesh geometric aspects functional analysis israel seminar lecture notes hastie tibshirani wainwright statistical learning sparsity lasso generalizations monographs statistics applied probability crc press new york hoff multilinear tensor regression longitudinal relational data technical report department statistics university washington ibragimov minskii statistical estimation 
asymptotic theory springerverlag new york koldar bader tensor decompositions applications siam review laurent massart adaptive estimation quadratic functional model selection annals statistics ledoux concentration measure phenomenon mathematical surveys monographs american mathematical society providence ledoux talagrand probability banach spaces isoperimetry processes new york tensor completion compression hyperspectral images ieee international conference image processing icip pages new introduction multiple time series analysis springer new york massart concentration inequalties model selection ecole springer new york shahar mendelson upper bounds product multiplier empirical processes technical report technion mesgarani slaney shamma audio classification based multiscale features ieee transactions speech audio processing huang wright goldfarb square deal lower bounds improved relaxations tensor recovery international conference machine learning negahban wainwright estimation near matrices noise scaling annals statistics negahban wainwright restricted strong convexity weighted matrix completion jmlr negahban ravikumar wainwright unified framework highdimensional analysis decomposable regularizers statistical science pisier volume convex bodies banach space geometry volume cambridge tracts mathematics cambridge university press cambridge pollard convergence stochastic processes new york qin scheinberg goldfarb efficient descent algorithms group lasso math program raskutti wainwright restricted eigenvalue conditions correlated gaussian designs journal machine learning research raskutti wainwright minimax rates estimation linear regression ieee transactions information theory raskutti wainwright rates sparse additive models kernel classes via convex programming journal machine learning research rendle pairwise interaction tensor factorization personalized tag recommendation icdm rendle marinho nanopoulos learning optimal ranking tensor factorization tag recommendation sigkdd rockafellar convex analysis princeton university press princeton semerci hao kilmer miller tensor based formulation nuclear norm regularizatin multienergy computed tomography ieee transactions image processing sidiropoulos nion tensor algebra harmonic retrieval signal processing mimo radar ieee transactions signal processing simon friedman hastie blockwise coordinate descent algorithm penalized multiresponse grouped multinomial regression technical report georgia november slepian barrier problem gaussian noise bell system tech talagrand new concentration inequalities product spaces invent tibshirani regression shrinkage selection via lasso journal royal statistical society series turlach venables wright simultaneous variable selection technometrics van geer empirical processes cambridge university press van geer weakly decomposable regularization penalties structured sparsity scandivanian journal statistics theory applications yang barron determination minimax rates convergence annals statistics assouad fano cam research papers probability statistics festschrift honor lucien cam pages yuan lin model selection estimation regression grouped variables journal royal statistical society yuan zhang tensor completion via nuclear norm minimization foundation computational mathematics appear shuheng zhou restricted eigenvalue conditions subgaussian random matrices technical report eth zurich results gaussian random variables section provide standard concentration bounds use throughout paper first provide standard tail bounds 
due laurent massart laurent massart lemma let centralized random variable degrees freedom exp exp gaussian comparison inequalities first result classical result anderson lemma anderson comparison inequality let gaussian random vectors covariance respectively positive convex symmetric set following lemma slepian inequality slepian allows upper bound supremum one gaussian process supremum another gaussian process lemma slepian lemma let two centered gaussian processes defined index set suppose processes almost surely bounded sup sup finally require standard result concentration lipschitz functions gaussian random variables theorem theorem massart let gaussian random variable function lkx exp suprema gaussian tensors section provide important results suprema gaussian tensors different sets group norm let gaussian matrix define set kuk kvk using notation let define define random quantity sup following overall bound lemma log proof proof user similar ideas proof theorem raskutti need upper bound taking supremum gaussian process sup kuk kvk set apply slepian inwe construct second gaussian process equality see lemma appendix upper bound sup kuk kvk particular let define supremum second gaussian process process vectors standard normals also independent straightforward show straightforward show var show var end observe var kuv kuk kvk first note inequality kvk kuk therefore var consequently using lemma sup sup kuk kvk therefore sup sup kuk kvk sup sup kuk kvk kgk khk known results gaussian maxima see ledoux talagrand khk log kgk therefore log spectral norm tensors proof based extension proof techniques used proof proposition negahban wainwright lemma let random sample gaussian tensor ensemble kgks log proof recall definition kgks kgks sup since entry gaussian random variable kgks supremum gaussian process therefore concentration bound follows theorem ledoux use standard covering argument upper bound kgks let covering number sphere terms vector similarly therefore let covering number sphere ujn ujn taking supremum sides kgks max ujn kgks repeating argument directions kgks max ujnn construction variable gaussian variance standard bounds gaussian maxima kgks log log log exist log log completes proof hypercube packing sets section provide important results lower bound results one key concept hamming hamming distance two vectors defined lemma let exists discrete subset log constant proof let member hypercube recall definition hamming distance provided case amounts places either negative negative according lemma exists subset hypercube log clearly completes proof next provide hupercube packing set sparse subset vectors set kvk follows lemma raskutti state completeness lemma let exists discrete subset log log finally present packing set result lemma agarwal packs set matrices lemma let min let min exists set matrices cardinality log min constant | 10 |
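As a concrete illustration of the regularized least-squares program analysed in the entry above, the sketch below handles the scalar-response case with a group-sparsity penalty over mode-3 slices, solved by proximal gradient descent. This is a minimal reference implementation under stated assumptions, not the block coordinate descent solvers the numerical section adapts from Simon et al. and Qin et al.; the function names, the slice-level grouping, and the step-size rule are illustrative choices.

```python
import numpy as np

def prox_group_slices(A, tau):
    """Group soft-thresholding of each mode-3 slice A[:, :, k] at level tau."""
    out = np.zeros_like(A)
    for k in range(A.shape[2]):
        nrm = np.linalg.norm(A[:, :, k])
        if nrm > tau:
            out[:, :, k] = (1.0 - tau / nrm) * A[:, :, k]
    return out

def tensor_regression_pg(X, y, lam, n_iter=500):
    """Proximal gradient for
         min_A (1/2n) * sum_i (y_i - <X_i, A>)^2 + lam * sum_k ||A[:, :, k]||_F
       X: (n, d1, d2, d3) covariate tensors, y: (n,) scalar responses."""
    n = X.shape[0]
    Xmat = X.reshape(n, -1)                           # vectorized design matrix
    step = n / (np.linalg.norm(Xmat, 2) ** 2)         # 1 / Lipschitz constant of the smooth part
    A = np.zeros(X.shape[1:])
    for _ in range(n_iter):
        resid = Xmat @ A.ravel() - y                  # <X_i, A> - y_i
        grad = (Xmat.T @ resid / n).reshape(A.shape)  # gradient of the least-squares term
        A = prox_group_slices(A - step * grad, step * lam)
    return A
```

The same skeleton accommodates the other regularizers catalogued above by swapping the proximal operator: entrywise soft-thresholding for the vectorized l1 penalty, fiber-wise group thresholding for fiber sparsity, or singular-value soft-thresholding of a matricization for the nuclear-norm-based penalties.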
stochastic power system simulation using adomian decomposition method nan duan student member ieee kai sun senior member ieee abstract dynamic security assessment considering uncertainties grid operations paper proposes approach simulation power system stochastic loads proposed approach solves stochastic differential equation model power system way using adomian decomposition method approach generates solutions expressing deterministic stochastic variables explicitly symbolic variables embed stochastic processes directly solutions efficient simulation analysis proposed approach tested new england system different levels stochastic loads approach also benchmarked traditional stochastic simulation approach based eulermaruyama method results show new approach better time performance comparable accuracy index decomposition method stochastic differential equation stochastic load stochastic simulation introduction ncertainties exist operations power grids many factors random load consumptions unanticipated relay protection actions contribute randomness grid operations foreseen future power grid uncertainties stochastic behaviors system operations due increasing penetrations responsive loads intermittent renewable generations thus dynamic security assessment dsa power systems conducted deterministic stochastic manners however today power system simulation software tools still based solvers deterministic equations daes involve stochastic variables model uncertainties system operating conditions literature three major approaches modeling dynamic system stochastic effects shown fig master equation equation gillespie method master equation equation widely applied field computational biology focus evolution probability distribution gillespie method focuses individual stochastic trajectories first two approaches provide comprehensive understanding stochastic effects dynamic system require solving work supported nsf grant nan duan kai sun department eecs university tennessee knoxville nduan kaisun high dimensional partial differential equations computationally difficult applied simulations realistic power systems works using gillespie method power system simulation stochastic modeling master equation multiple runs gillespie algorithm fokkerplanck equation multiple runs eulermaruyama method adomian decomposition method fig stochastic modeling approaches recent years researchers contributed power system simulation manner reference proposed systematic method simulate system behaviors influence stochastic perturbations loads bus voltages rotor speeds approach introduces stochastic differential equations sdes represent stochastic perturbations solves equations ito calculus mean trajectory envelope trajectory variations yielded repeating simulations many times papers utilize similar approach study power system stability random effects analyze long term stability power system wind generation new sde model developed also applies singular perturbation theory investigate slow dynamics system stochastic wind generation however time performance approach based method hardly meet requirements online power system simulation especially penetration distributed energy resources ders reaches high level distribution network behaves stochastic manner seen transmission network hence large number sdes need included power system model significantly influence simulation speed also nature gillespie method requires large number simulations model yield mean trajectory well envelope variations therefore adding extra sde existing 
set sdes result multiplying computing time factor hundreds even thousands previous works new approach power system simulation proposed approach applies adomain decomposition method adm power system daes derive solution sas state variable explicit function symbolic variables including time initial system state selected parameters system condition function evaluated plugging values symbolic variables consecutive small time windows make desired simulation period obtain simulated trajectory state variable since form every sas summation finite terms approximation evaluation fast parallelized among terms thus compared traditional numerical integration based power system simulation approach decomposes computation offline derivation online evaluation sas better fit online power system simulation parallel computing environment fact approach also suggests viable alternative paradigm fast stochastic simulation example early works adomian utilized adm solve nonlinear sdes embedding explicitly stochastic processes terms sas power system simulation stochastic manner paper proposes approach extension adm based approach proposed utilizing nature sas yielded adm new approach embeds stochastic model stochastic load model sas evaluation sas stochastic model whose parameters represented symbolically increase many computational burdens compared evaluation sas deterministic simulation thus expected number simulation runs one single case achieved evaluating one sas number times rest paper organized follows section presents sde model power system integrates stochastic loads section iii gives approach solving power system sdes stochastic simulation section uses smib system compare fundamental difference admbased approach approach mathematics section introduces criterion defining stability general stochastic dynamical system also applied power systems section validates proposed approach using ieee system stochastic loads compares results time performance approach finally conclusions drawn section vii power system sde model stochastic loads synchronous generator modeling power system synchronous generators consider model model generator saliency ignored generators coupled nonlinear algebraic equations network pek fdk iqk sin cos sin cos def pek iqk idk iqk sin cos idk sin cos idk iqk rated angular frequency respectively rotor angle rotor speed inertia damping coefficient machine kth row reduced admittance matrix column vector generator electromotive forces emfs kth element pmk pek mechanical electric powers efdk internal field voltage iqk idk xqk xdk transient voltages stator currents opencircuit time constants synchronous reactances transient reactances respectively stochastic load modeling stochastic model built based analysis real data assumptions probabilistic characteristics stochastic variables traditionally uncertainties loads power system ignored simulation sake simplicity however stochastic behaviors wellrecognized taking stochastic loads consideration enable realistic power system stability assessment paper uses process model stochastic variations load sdes yql white noise vector whose dimension equals number load buses parameters drifting diffusion parameters sdes operator hadamard product multiplication ypl yql stochastic variations normal distributions stochastic dynamic load therefore modeled yql mean values active reactive loads respectively periodicities autocorrelations observed historical data loads daily basis however time frame seconds loads different substations much lower autocorrelations refer 
paper sets drifting parameter autocorrelations loads iii proposed approach solving power system sdes modeling stochastic variables consider stochastic variables could stochastic loads following different distributions transformed function normal distribution example load represented normal distribution certain mean value specifies normal distribution shifts around desired mean value like process utilized generate next step apply inverse laplace transform sides calculate order sas sas resulting sas stochastic variables appear explicitly symbolic variables comparison approach proposed approach section applies approach proposed approach smib system stochastic load shown fig illustrate fundamental difference two approaches jxd jxl solving sdes using adm consider nonlinear system modeled sde deterministic state variables state variables generators exciters speed governors stochastic variables solve procedure used first apply laplace transformation obtain use calculate adomian polynomials assumption recursive formulas derived matching terms stochastic load connected generator bus resistance reactance modeled stochastic variables thus whole system modeled des sdes cos sin jbl jbs jbr fig smib system constant impedance load generator bus since change stochastically treated constants variances depend values drifting parameters diffusion parameters respectively find sas system first step apply adm des sas system des derived sas sdes derived incorporated instance order sas rotor speed therefore infinite order sas dsi apply maclaurin expansion exponential function lemma solution becomes cos sin cos order sas derived using adm apply integration parts formula cos sin cos sin forms sdes analytical solution may exist incorporated des sas directly derive sas entire system example general expression sas terms written dsn close form solution found case symbolic variable replaced instead hand approach since deterministic model described permit close form solution sample trajectories numerically computed numerical scheme shown scheme also applies practice value dependent step size integration brownian motion starting origin similarly order sas derive sas entire system considering des sdes replace symbolic variables des sas representing stochastic variables sdes sas order sas system derived replacing symbolic variables sas stability stochastic systems variety definitions stability stochastic dynamical system literature definition asymptotic stability probability directly applied power system stochastic variables definition counterpart asymptotic lyapunov stability deterministic system definition stability probability equilibrium point said stable probability given exists xeq whenever definition asymptotic stability probability equilibrium point said asymptotic stable probability stable probability given exists lim whenever analyzed stability numerical simulation results paper modifies stability accessed using results finite time period simulations xeq predefined time instant small positive number case studies proposed approach tested ieee new england system shown fig selected loads assumed change stochastically generators represented deterministic models case study stochastic simulation result eulermaruyama approach used benchmark order sass used evaluated every value stochastic variable changed every case sample trajectories generated fault applied cases fault bus cleared tripping line simulations performed matlab desktop computer intel core cpu ram result approach result approach fig simulation results 
generator rotor angle loads connecting bus represented stochastic variable load variation simulation results deterministic system response indicated mean value asymptotically stable use stochastic system stability definition introduced section loads buses small variances system behaves similar deterministic system asymptotically stable probability fig ieee system stochastic loads low variances first case model loads buses system load process variances loads mean values results approach approach shown fig among generators generator shortest electrical distance bus hence rotor angle presented following results stochastic loads low variances second case extend stochastic loads buses variances equal mean values shown fig simulation results two approaches agree reveal less stable system response due increased uncertainties system loads stochastic system asymptotically stable probability compared first case two stochastic loads value probability system asymptotically stable reduces therefore percentage stochastic loads increases even though load uncertainties low equilibrium point system almost deterministic model asymptotic stability system probability downgrades justifies necessity using stochastic load models study stability power systems high penetration stochastic loads result approach result approach fig simulation results generator rotor angle loads represented stochastic variable load variation third case loads represented stochastic loads variances loads increased mean values case may represent scenario ders widely deployed distribution networks make aggregated bus load seen transmission subtransmission substation behave stochastically simulation results approach eulermaruyama approach shown fig approach agrees approach simulation results show system loses stability variance loads increases mean values instability due cumulative effect stochastic load variations confidence envelope utilized indicator system stability unlike fig confidence envelope fig bounded indicating probability system losing stability approach stochastic loads high variances approach fig simulation results bus voltage bus loads represented stochastic variable load variation result approach result approach fig simulation results generator rotor angle loads represented stochastic variable load variation bus voltages also reflect impact high load uncertainties shown fig voltage magnitude bus denoted loads high uncertainties system increased risk issues imbalance generation load magnified increased load uncertainties also indicates importance stochastic power system simulation penetration ders becomes high results stochastic power system simulation probability distribution function pdf system variable evolves time period estimated fit anticipated probability distribution analysis example assume follow normal distribution time instant mean value variance varying time fig shows evolutions pdf using simulation results approach approach comparison fig basically matches fig indicating accuracy proposed approach reflecting evaluation pdf time elapses pdf bus voltage shifts mean value also increases variance indicated increasing width shape information available deterministic power system simulation longer system subjected effect stochastic variables bigger variance larger uncertainty system dynamics fig mean value generator rotor angle case fig standard deviation generator rotor angle case loads modeled stochastic variance state variables grows accordingly mean value standard deviation rotor angle generator case shown fig fig 
case standard deviation reaches largest value first swing larger largest standard deviation case approach fig mean value generator rotor angle case approach fig evolution pdf voltage magnitude bus variances state variables compare accuracy numerical results approach approach mean value standard deviation trajectories compared case shown fig fig admbased approach achieves comparable accuracy eulermaruyama approach terms mean value standard deviation value fig standard deviation generator rotor angle case comparison time performances time performances cases admbased approach approach compared table approach takes less time cost approach advantage approach time performance prominent many simulation runs required discussed approach inherently suitable parallel implementation could help improve time performance parallel computers available table time performance comparison stochastic load cases time costs ito calculus single run ito calculus runs adm single run adm runs stochastic loads buses case stochastic loads buses case vii conclusion paper proposes alternative approach stochastic simulation power systems using sas derived adm stochastic effects load uncertainties taken considerations result proposed approach benchmarked approach since evaluation sass faster integration approach proposed approach obviously advantage time performance critical large number simulation runs need performed simulating stochastic behaviors future power grid high penetration ders simulation results different levels stochastic loads show level load uncertainty low deterministic simulation still trustworthy compared trajectory stochastic simulation level load uncertainty becomes high trajectory longer represents true behavior system viii references hiskens alseddiqui sensitivity approximation uncertainty power system dynamic simulation ieee trans power systems vol tatari dehghan razzaghi application adomian decomposition method equation math comput modelling vol mar spencer bergman numerical solution fokkerplanck equation nonlinear stochastic systems nonlinear dynamics vol saito mitsui simulation stochastic differential equations ann inst stat vol higham algorithmic introduction numerical simulation stochastic differential equations soc ind appl math review wang crow fokker planck equation application analysis simplified wind turbine model north american power symposium champaign milano systematic method model power systems stochastic differential algebraic equations ieee trans power systems vol wang stochastic model power system transient stability wind power ieee pes general meeting national harbor wang crow numerical simulation stochastic differential algebraic equations power system transient stability random loads ieee pes general meeting detroit yuan zhou zhang stochastic small signal stability power system wind power generation ieee trans power systems vol jul wang chiang wang liu wang stability analysis power systems wind power based stochastic differential equations model development foundations ieee trans sustainable energy vol duan sun application adomian decomposition method solutions power system differential algebraic equations ieee powertech eindhoven netherlands duan sun finding solutions power system equations fast transient stability simulation arxiv preprint duan sun power system simulation using multistage adomian decomposition method ieee trans power systems adomian nonlinear stochastic differential equations math anal vol sun kang optimal pmu placement power system dynamic state estimation 
using empirical observability gramian ieee trans power systems vol jul galiana handschin fiechter identification stochastic electric load models physical data ieee trans automat control vol sauer numerical solution stochastic differential equations finance handbook mathematical functions springer berlin heidelberg nouri study stochastic differential equations via modified adomian decomposition method sci series vol cao liu fan method stochastic differential delay equations appl math vol hutzenthaler jentzen kloeden strong convergence explicit numerical method sdes lipschitz continuous coefficients ann appl probability vol adomian review decomposition method applied mathematics math anal vol thygesen survey lyapunov techniques stochastic differential equations dept math modelling tech univ denmark lyngby denmark imm technical report mao stochastic differential equations applications edition chichester horwood burrage burrage mitsui numerical solutions stochastic differential equations implementation stability issues computational appl vol kozin survey stability stochastic systems automatica vol | 3 |
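The stochastic load model in the paper above is a mean-reverting diffusion around the nominal bus loads, and its Ito-calculus benchmark integrates that SDE with the Euler-Maruyama scheme. The sketch below shows such a load trajectory being produced numerically; the bus values, the drifting/diffusion parameters, the step size and the exact form of the diffusion term are illustrative assumptions rather than the paper's settings, and the semi-analytical (ADM) evaluation of the full DAE model is not reproduced here.

```python
import numpy as np

# Euler-Maruyama integration of a mean-reverting (Ornstein-Uhlenbeck-type) load
# process, i.e. the kind of stochastic-load trajectory the Ito-calculus benchmark
# produces. All numerical values below are illustrative assumptions.
rng = np.random.default_rng(0)

P_mean = np.array([1.2, 0.8, 1.5])   # mean active loads at the stochastic buses (p.u., assumed)
alpha = 2.0                          # drifting parameter: pull back toward the mean
beta = 0.1                           # diffusion parameter: white-noise strength
dt, T = 0.01, 10.0                   # integration step and horizon in seconds (assumed)
steps = int(T / dt)

P = np.empty((steps + 1, P_mean.size))
P[0] = P_mean
for k in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=P_mean.size)        # Brownian increments
    # dP = alpha*(P_mean - P) dt + beta*(P_mean o dW), with o the element-wise (Hadamard) product
    P[k + 1] = P[k] + alpha * (P_mean - P[k]) * dt + beta * P_mean * dW

# The stationary spread around the mean grows with beta and shrinks with alpha,
# which is what the low- and high-variance load cases vary.
print("sample std per bus:", P.std(axis=0))
```

A full stochastic power-system simulation would feed such load trajectories into the generator differential-algebraic equations; this fragment only illustrates the load side.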
magnifyme aiding cross resolution face recognition via identity aware synthesis maneet singh shruti nagpal richa singh mayank vatsa angshul majumdar india feb maneets shrutin rsingh mayank angshul abstract enhancing low resolution images via image synthesis face recognition well studied several image processing machine learning paradigms explored addressing research propose synthesis via deep sparse representation algorithm synthesizing high resolution face image low resolution input image proposed algorithm learns sparse representation high low resolution gallery images along identity aware dictionary transformation function two representations face identification scenarios low resolution test data input high resolution test image synthesized using identity aware dictionary transformation used face recognition performance proposed sdsr algorithm evaluated four databases including one real world dataset experimental results comparison existing seven algorithms demonstrate efficacy proposed algorithm terms face identification image quality measures low resolution face image bicubic interpolation low resolution face image high resolution gallery image bicubic interpolation figure images captured minutes boston marathon bombing suspect dzhokhar tsarnaev circled resolution circled image less interpolated covariate face recognition widespread applications several researchers shown performance sota algorithms reduces matching face images order overcome limitation intuitive approach generate high resolution image given low resolution input provided input face recognition engine figure shows sample real world image captured boston bombing since person interest distance face captured thus low resolution upon performing bicubic interpolation obtain high resolution image results image suffering blur poor quality ultimate aim high recognition performance generated high resolution image good quality preserving identity subject elaborated next subsection exist multiple synthesis super resolution techniques hypothesize utilizing domain model face synthesis result improved recognition performance especially recognition scenarios effect work presents novel domain specific identity aware synthesis via deep sparse coding algorithm synthesizing high resolution face image given low resolution input image introduction group images often captured distance order capture multiple people image cases resolution face image relatively smaller thereby resulting errors automated tagging similarly surveillance monitoring applications cameras often designed cover maximum field view often limits size face images captured especially individuals distance use images match high resolution images profile images social media mugshot images captured law enforcement resolution gap two may lead incorrect results task matching low resolution input image database high resolution images referred cross resolution face recognition challenging literature review literature different techniques proposed address problem cross resolution face recognition broadly divided transformation based techniques based techniques transformation based techniques address resolution difference images explicitly introducing transformation function either image feature level techniques propose resolution invariant features classifiers order address resolution variations wang present exhaustive review proposed techniques addressing cross resolution face recognition peleg elad propose statistical model uses minimum mean square error estimator high low 
resolution image pair patches prediction lam propose singular value decomposition based approach super resolving low resolution face images researchers also explored domain representation learning address problem cross resolution face recognition yang propose learning dictionaries low high resolution image patches jointly followed learning mapping two yang propose sparse classification approach face recognition hallucination constraints solved simultaneously propose convolutional sparse coding image divided patches filters learned decompose low resolution image features mapping learned predict high resolution feature maps low resolution features mundunuri biswas propose scaling stereo cost technique learn common transformation matrix addressing resolution variations parallel area research research focused obtaining high resolution image given low resolution image objective visual quality input significant advancement field past several years including recent representation learning architectures proposed important note techniques utilized addressing cross resolution face recognition however often explicitly trained face images providing results research contributions research focuses cross resolution face recognition proposing image synthesis algorithm capable handling large magnification factors propose deep sparse representation based transfer learning approach termed synthesis via deep sparse representation sdsr proposed identity aware thesis algorithm incorporated module prior existing face recognition engine enhance resolution given low resolution input order ensure synthesis proposed model trained using gallery database single image per subject results demonstrated four databases effectiveness evaluated terms image quality measure synthesized images face identification accuracies existing face recognition models synthesis via deep sparse representation dictionary learning algorithms inherent property representing given sample sparse combination basis functions property utilized proposed sdsr algorithm synthesize high resolution image given low resolution input proposed model learns transformation representations low high resolution images instead interpolating pixel values work focuses interpolating abstract representation motivated abstraction capabilities deep learning propose learn transformation deeper levels representation unlike traditional dictionary learning algorithms propose learn transformation deeper levels representation leads key contribution work synthesis via deep sparse representation sdsr transfer learning approach synthesizing high resolution image given low resolution input preliminaries let input training data samples dictionary learning algorithms learn dictionary sparse representations using data objective function dictionary learning written min sparse codes represents regularizing constant controls much weight given induce sparsity representations first term minimizes reconstruction error training samples second term regularization term sparse codes literature researchers proposed extending single level dictionary dictionary learn multiple levels representations given data deep dictionary learns dictionaries sparse coefficients given input min architecture deep dictionary inspired deep learning techniques deeper layers feature learning enhance level abstraction learned network thereby learning meaningful latent variables real world scenarios surveillance image tagging task match low resolution test images probe database high resolution images known gallery 
images without loss generality assume target comprises high resolution gallery images source domain consists low resolution images proposed model low resolution face images high resolution face images level deep dictionaries learned source gkl target domain gkh important note dictionaries generated using preacquired gallery images corresponding sparse representations akl akh also learned levels akh representations learnt corresponding high tion deep dictionary akl representations learnt level dictionary low resolution images proposed algorithm learns transformation akh akl optimization formulation synthesis via deep sparse representation sdsr deep dictionary written gkh min gkl regularization parameters control amount sparsity learned representations layer regularization constant learning transformation function correspond deep dictionaries learned high low resolution gallery images respectively sdsr algorithm learns multiple levels dictionaries corresponding representations low high resolution face images along transformation features learned deepest layer training sdsr algorithm without loss generality training proposed sdsr algorithm explained shown figure since number variables large even deeper dictionaries directly solving optimization problem may provide incorrect estimates lead overfitting therefore greedy layer layer training applied important note since regularizer coefficients first second layer dictionaries collapsed one dictionary order learn estimates split learning first level representation second level representation transformation optimization function two level deep dictionary follows min assuming intermediate variable equation modeled optimization following two equations min min sdsr algorithm two level deep dictionary written min deep dictionary two levels requires two steps learning upon extending formulation level deep dictionary would require exactly steps optimization proposed sdsr algorithm builds upon utilizes steps based greedy learning level deep dictionary steps learning representations using deep dictionary architecture step learning transformation final representations therefore solved using independent three step approach learn first level source low resolution target high resolution domain dictionaries learn second level low high resolution image dictionaries iii learn transformation final representations using concept first step two separate dictionaries learned given input data low resolution high resolution face images independently given training data consisting low high resolution face learn sparse representations learn sparse representations high resolution gallery images level dictionary level sparse representations level dictionary level sparse representations learn sparse representations learn sparse representations low resolution gallery images level dictionary level dictionary level sparse representations xltest learn transformation using transformation level sparse representations face recognition engine test test test test xhtest figure synthesis via deep sparse representation algorithm deep dictionary refers training model illustrates high resolution synthesis low resolution input images following minimization applied two domains respectively min min refer sparse codes learned low high resolution images respectively two equations optimized independently using alternating minimization dictionary learning technique dictionary representation step dictionaries representations obtained two varying resolution data second step deep dictionary 
created learning second level dictionaries using representations obtained first level two separate dictionaries one low resolution images one high resolution images learned using representations obtained first level input features equations step written follows min min final tation obtained low resolution images refers representation obtained high resolution images similar previous step equations solved independently using alternating minimization dictionary representations step obtained order synthesize one resolution another third step algorithm involves learning transformation deep representations two resolutions following minimization solved obtain transformation min equation least square problem closed form solution training dictionaries transformation function obtained used test time testing synthesizing high resolution face image low resolution image testing low resolution test image xtest input algorithm using trained gallery based dictionaries first second level representations test test obtained given image xtest test test test transformation function learned used obtain second level high resolution representation test test test table summarizing characteristics training testing partitions databases used experiments dataset cmu real world scenarios scface training subjects training images testing subjects testing images gallery resolution probe resolutions probe resolution original image bicubic interp dong kim dong peleg yang proposed sdsr cmu multipie scface table identification accuracies obtained using verilook cross resolution face recognition target resolution algorithms support required magnification factor presented using second level representation given image target domain synthesized output given image obtained first test calcuh lated help xtest obtained using synthesized image target domain test test test xtest important note synthesized high resolution image sparse combination basis functions learned high resolution dictionary order obtain good quality high resolution synthesis dictionary trained high resolution database ensures basis functions trained dictionaries span latent space images demonstrated via experiments well key highlight algorithm learn good quality representative dictionaries single sample per subject well high resolution synthesized output image xtest used face identification engine recognition databases experimental protocol effectiveness proposed sdsr algorithm demonstrated evaluating face recognition performance original synthesized images two face recognition systems cots verilook luxand used four different face databases verilook face quality confidence thresholds set minimum order reduce enrollment errors performance proposed algorithm compared six recently proposed synthesis techniques kim kernel ridge regression peleg sparse representation based statistical prediction model convolutional sparse coding yang dictionary learning dong deep convolutional networks dong deep convolutional networks along one popular technique bicubic interpolation results existing algorithms computed using models provided authors links provided footnotes noted algorithms support levels magnification instance algorithm proposed kim supports levels magnification whereas yang algorithm supports levels magnification face databases table summarizes statistics databases terms training testing partitions along resolutions details databases provided cmu dataset images pertaining https http http http http http subjects selected frontal pose uniform illumination neutral 
expression subjects used training remaining test set dataset consists face images subjects subjects single normal image dataset contains images different covariates lighting expression distance research normal images used high resolution gallery database face images distance covariate downsampled used probe images scface dataset consists subjects one high resolution frontal face image multiple low resolution images captured three distances using surveillance cameras real world scenarios dataset contains images seven subjects associated london bombing boston bombing mumbai attacks subject one high resolution gallery image multiple low resolution test images test images captured surveillance cameras collected multiple sources internet since number subjects seven order mimic real world scenario gallery size increased create extended gallery subjects images human identification meds datasets used protocol datasets real world matching protocol followed subject multiple low resolution images used probe images matched database high resolution gallery images single high resolution image per subject used gallery proposed comparative algorithms used synthesize high resolution image given low resolution input magnification factor varies probes probes match gallery database size databases except scface test images sizes varying scface database predefined protocol followed probe resolutions face detection performed using face provided using viola jones face detector synthetic downsampling performed obtain lower resolutions experiments performed five times random ensure consistency implementation details sdsr algorithm trained using gallery database dataset regularization constant sparsity kept different dictionaries different dimensions based input data instance dictionaries created scface dataset contain atoms first second dictionary respectively source code input bicubic proposed input bicubic proposed figure sample images scface dataset incorrectly synthesized sdsr algorithm input algorithm made publicly available order ensure reproducibility proposed approach results analysis proposed algorithm evaluated three sets experiments face recognition performance resolution variations image quality measure iii face identification analysis different dictionary levels resolution gallery set first experiment probe resolution varies fixed next two experiments face recognition across resolutions datasets resolutions results tabulated tables key observations pertaining set experiments presented probe resolutions except bicubic interpolation none existing super resolution synthesis algorithms used comparison support magnification factor therefore results two resolutions compared original resolution probe used input cots without resolution enhancement bicubic interpolation shown third fourth columns two tables cmu databases matching original bicubic interpolated images results accuracy whereas images synthesized using proposed algorithm provide accuracy respectively probe resolutions shown table cmu databases test resolution synthesized images obtained using proposed sdsr algorithm yield accuracy approaches yield accuracy less except bicubic interpolation size provides accuracy shown table similar performance trends observed using two databases scface accuracy sdsr significantly higher existing approaches however due challenging nature database commercial matchers provide low accuracies fig presents sample images scface dataset incorrectly synthesized via proposed sdsr algorithm varying acquisition devices 
training testing partitions along covariates pose illumination creates problem challenging probe resolution original image bicubic interp dong kim dong peleg yang proposed sdsr cmu scface table identification accuracies obtained using luxand cross resolution face recognition target resolution algorithms support required magnification factor presented caspeal cmu real world scface figure probe images corresponds original probe correspond different techniques bicubic interpolation kim dong dong proposed sdsr algorithm probe resolution using proposed algorithm achieves improved performance techniques except cmu dataset perform well databases proposed algorithm yields best results upon analyzing tables clear proposed algorithm robust different recognition systems performs well without bias specific kind recognition algorithm another observation images superresolved using bicubic interpolation yield best results first two databases however noted results observed magnification factor images synthetically real world surveillance datasets scface proposed approach performs best commercial systems real world scenarios dataset table summarizes results real world scenarios dataset since gallery contains images subjects marize results terms identification performance top retrieved matches interesting observe test resolutions proposed algorithm significantly outperforms existing approaches sdsr achieves identification accuracy probe resolution accuracy test resolution cross dataset experiments sdsr algorithm trained cmu dataset tested scface dataset probe resolution identification accuracy obtained using whereas identification accuracy obtained respectively results showcase proposed model still able achieve better recognition performance compared techniques however drop accuracy strengthens hypothesis using model performing synthesis beneficial achieving higher classification performance quality analysis fig shows examples images multiple databases generated using proposed existing algorithms figure images synthesized low resolution images observed output images obtained using existing algorithms columns artifacts terms blockiness blurriness however quality images obtained using proposed algorithm column significantly better algorithms compare visual quality outputs reference image quality measure brisque utilized image spatial quality evaluator brisque computes distortion image using statistics locally normalized luminance coefficients calculated spatial domain used estimate losses naturalness image lower value less table real world scenarios recognition accuracy obtained top ranks gallery subjects using verilook resolution probe resolution original image bicubic interpolation dong kim dong peleg yang proposed sdsr table average reference quality measure brisque probe resolution synthesized obtained five folds lower value brisque corresponds lesser distortions image database bicubic interp dong kim dong proposed sdsr cmu scface real world table accuracies varying levels sdsr algorithm probe gallery database cots cmu verilook luxand verilook luxand verilook luxand scface dictionary levels distorted image table seen images obtained using proposed sdsr algorithm better lower brisque score compared images generated existing algorithms difference least points observed brisque scores effect dictionary levels explained algorithm section synthesis performed different levels deep dictionary varying values experiment performed analyze effect different dictionary levels identification performance proposed 
algorithm used synthesize high resolution images magnification factor input images size varying dictionary levels first level dictionary equivalent shallow dictionary learning whereas two three levels correspond synthesis deep dictionary learning table reports identification accuracies obtained two commercial matchers four databases results show proposed approach generally yields best results cases proposed approach yields better results generally abstraction capability deeper layers overfitting two effects deep learning based approaches table observe two datasets moderately sized therefore observe good results second layer third layer overfitting offsets abstraction hence see none marginal changes computational complexity deep dictionary features higher improvements accuracy consistent across databases hand paired results obtained shallow dictionary deep dictionary demonstrate statistical significance even confidence level verilook specifically single image synthesis dictionary requires requires requires conclusion key contribution research recognitionoriented module based dictionary learning algorithm synthesizing high resolution face image low resolution input proposed sdsr algorithm learns representations low high resolution images hierarchical manner along transformation representations two results demonstrated four databases test image resolutions ranging matching requires generating synthesized high resolution images magnification factor results computed terms image quality measure face recognition performance illustrate proposed algorithm consistently yields good recognition results computationally proposed algorithm requires less millisecond generating synthesized high resolution image showcases efficacy usability algorithm low resolution face recognition applications references luxand https verilook http baker kanade hallucinating faces ieee international conference automatic face gesture recognition pages bhatt singh vatsa ratha improving face matching using cotransfer learning ieee transactions image processing december dahl norouzi shlens pixel recursive super resolution ieee international conference computer vision dong loy tang image using deep convolutional networks ieee transactions pattern analysis machine intelligence dong loy tang accelerating superresolution convolutional neural network european conference computer vision pages springer flynn bowyer phillips assessment time dependency face recognition initial study international conference biometric person authentication pages founds orlans whiddon watson nist special database encounter dataset medsii national institute standards technology tech rep gao cao chen zhou zhang zhao chinese face database baseline evaluations ieee transactions systems man cybernetics part systems humans january grgic delac grgic scface surveillance cameras face database multimedia tools application february gross matthews cohn kanade baker image vision computing may zuo xie meng feng zhang convolutional sparse coding image ieee international conference computer vision december jian lam simultaneous hallucination recognition faces based singular value decomposition ieee transactions circuits systems video technology november kim kwon using sparse regression natural image prior ieee transactions pattern analysis machine intelligence june ledig theis huszar caballero cunningham acosta aitken tejani totz wang shi single image using generative adversarial network ieee conference computer vision pattern recognition lee battle raina efficient 
sparse coding algorithms advances neural information processing systems pages mittal moorthy bovik image quality assessment spatial domain ieee transactions image processing mudunuri biswas low resolution face recognition across variations pose illumination ieee transactions pattern analysis machine intelligence ngiam chen bhaskar koh sparse filtering advances neural information processing systems pages peleg elad statistical prediction model based sparse representations single image ieee transactions image processing june polatkan zhou carin blei daubechies bayesian nonparametric approach image superresolution ieee transactions pattern analysis machine intelligence rubinstein zibulevsky elad double sparsity learning sparse dictionaries sparse signal approximation ieee transactions signal processing march tariyal majumdar singh vatsa greedy deep dictionary learning corr thiagarajan ramamurthy spanias multilevel dictionary learning sparse representation images digital signal processing signal processing education meeting pages tong liu gao image using dense skip connections ieee international conference computer vision viola jones robust face detection international journal computer vision wang tao gao comprehensive survey face hallucination international journal computer vision wang zhang liang pan dictionary learning applications image synthesis ieee conference computer vision pattern recognition pages wang chang yang liu huang studying low resolution recognition using deep networks ieee conference computer vision pattern recognition june wang miao jonathan wan tang face recognition review visual computer yang wright huang image superresolution via sparse representation ieee transactions image processing november yang wei yeh wang recognition long distance low resolution face recognition hallucination international conference biometrics pages may | 1 |
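As a rough structural illustration of the three training steps and the test-time synthesis described above, the sketch below builds two-level dictionaries for a low- and a high-resolution "gallery", learns the closed-form least-squares map between the deepest sparse codes, and decodes a mapped code back through the high-resolution dictionaries. The random gallery, image sizes, numbers of atoms, sparsity weights and the use of scikit-learn's dictionary learner are placeholders; the paper trains on real gallery faces with alternating minimization, so this is only a sketch of the pipeline, not the released implementation.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
n, d_lr, d_hr = 150, 8 * 8, 32 * 32            # gallery size, vectorized LR/HR image sizes (assumed)
X_lr = rng.random((n, d_lr))                   # stand-in for the low-resolution gallery
X_hr = rng.random((n, d_hr))                   # stand-in for the high-resolution gallery

def two_level_codes(X, k1, k2, alpha=1.0):
    """Greedy layer-by-layer 'deep dictionary': sparse codes of sparse codes."""
    lvl1 = DictionaryLearning(n_components=k1, alpha=alpha, max_iter=10).fit(X)
    A1 = lvl1.transform(X)
    lvl2 = DictionaryLearning(n_components=k2, alpha=alpha, max_iter=10).fit(A1)
    A2 = lvl2.transform(A1)
    return lvl1, lvl2, A2

lr1, lr2, A2_lr = two_level_codes(X_lr, k1=80, k2=40)     # steps 1-2, source (LR) domain
hr1, hr2, A2_hr = two_level_codes(X_hr, k1=120, k2=40)    # steps 1-2, target (HR) domain

# Step 3: closed-form least-squares map between the deepest LR and HR representations.
M, *_ = np.linalg.lstsq(A2_lr, A2_hr, rcond=None)

# Test time: encode a LR probe with the LR dictionaries, map its code, decode with HR dictionaries.
x_test = rng.random((1, d_lr))
a2 = lr2.transform(lr1.transform(x_test))
x_hr = ((a2 @ M) @ hr2.components_) @ hr1.components_     # synthesized HR face (vectorized)
print(x_hr.shape)
```

The synthesized output is a combination of atoms from the high-resolution dictionaries, which is why the gallery used to train them determines the quality of the hallucinated detail.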
ttp tool tumor progression johannes ivana krishnendu martin mar ist austria institute science technology austria klosterneuburg austria program evolutionary dynamics harvard university cambridge usa department mathematics harvard university cambridge usa department organismic evolutionary biology harvard university cambridge usa abstract work present flexible tool tumor progression simulates evolutionary dynamics cancer tumor progression implements branching process key parameters fitness landscape mutation rate average time cell division fitness cancer cell depends mutations accumulated input tool could fitness landscape mutation rate cell division time tool produces growth dynamics relevant statistics introduction cancer genetic disease driven somatic evolution cells driver mutations cancer increase reproductive rate cells different mechanisms evading growth suppressors sustaining proliferative signaling resisting cell death tumors initiated genetic event increases reproductive rate previously normal cells evolution cancer malignant tumor process cells need receive several mutations subsequently phase tumor progression characterized uncontrolled growth cells requirement accumulate multiple mutations time explains increased risk cancer age several mathematical models explain tumor progression age incidence cancer models also provided quantitative insights evolution resistance cancer therapy models tumor progression branching processes represent exponentially growing heterogeneous population cells key parameters process fitness landscape cells determine reproductive rate mutation rate determines accumulation driver mutations iii average cell division time generation time new cells fitness landscapes allow analysis effects interdependent driver mutations evolution cancer work present flexible tool namely ttp tool tumor progression study dynamics tumor progression input tool could fitness landscape mutation rate cell division time tool generates growth dynamics relevant statistics expected tumor detection time expected appearance time surviving mutants etc stochastic computer simulation efficient simulation multitype branching process possible fitness landscapes driver mutation rates cell division times tool provides quantitative framework study dynamics tumor progression different stages tumor growth currently data understand effects complex fitness landscapes obtained patients animals suffering disease tool playing parameters data reproduced computer simulations provide many simulation examples would aid understand complex effects moreover correct mathematical models specific types cancer identified simulations match data verification tools probabilistic systems used analyze understand tumor progression process approach followed verification biological models direction results specific fitness landscapes tool already used biological application paper present tool process provides good approximation process results tool special case uniform fitness landscape process also shown excellent agreement data time treatment failure colorectal cancer model tumor progression modeled branching process galtonwatson process time step cell either divide die phenotype cancerous cell determines division probability encoded bit string length four death probability follows cell divides one two daughter cells receive additional mutation bit flips wildtype mutated type probability one wildtype positions cells phenotype receive additional mutation positions two four cells phenotype receive additional mutations 
branching process initiated single cell phenotype resident cell resident cells wildtype four positions strictly positive growth rate fitness landscapes tool provides two predefined fitness landscapes driver mutations tumor progression multiplicative fitness landscape mfl path fitness landscape pfl additionally user also define general fitness landscape gfl fitness landscape defines birth probability possible phenotypes following convention standard modeling approaches let birth probability resident cells cells phenotype growth coefficient indicates selective advantage provided additional mutation position phenotype multiplicative fitness landscape mfl mutation position phenotype cell results multiplication birth probability specifically birth probability cell phenotype given sbj sbj position otherwise sbj hence additional mutation weighted differently provides predefined effect birth probability cell additional mutations also costly neutral modeled negative fitness landscape reduces model studied bozic call emfl equal multiplicative fitness landscape also predefined tool path fitness landscape pfl defines certain path additional mutations need occur increase birth probability cell predefined path growth coefficients determine multiplicative effect new mutation birth probability see appendix details mutations path deleterious growth rate cell birth probability set parameter specifies disadvantage cells phenotypes belong given path general fitness landscapes tool allows input fitness landscape follows tool take input value way fitness landscape parameter tool density limitation situations tumor needs overcome current geometric metabolic constraints tumor needs develop blood vessels provide enough oxygen nutrients growth growth limitations modeled density limit carrying capacity various phenotypes hence cells phenotype grow first exponentially eventually reach steady state around given carrying capacity cells another phenotype additional mutation overcome density limit logistic growth modeled variable growth coefficients sej current number cells phenotype tumor model initially sej however order sej becomes approximately zero details given appendix tool implementation experimental results tool provides efficient implementation general tumor progression model essentially tool implements defined branching processes simulate dynamics tumor growth obtain statistics expected tumor detection time appearance additional driver mutations different stages disease progression ttp downloaded http efficient processing branching process stochastic simulation samples multinomial distribution phenotype time step sample returns number cells divided without mutation number cells died current generation see appendix details samples phenotype program calculates phenotype distribution next generation hence program needs store number cells phenotype simulation efficient implementation branching process allows tool simulate many patients within second obtain good statistical results reasonable time frame number cells number cells tumor detection size cells time years time years probability density number cells emfl time years mfl path time years fig experimental results illustrating variability tumor progression panels show examples two particular simulation runs cells grow according emfl resident cells blue constrained carrying capacity panel cells grow according pfl panel show statistical results probability density tumor detection cells grow according different fitness landscapes parameter values growth 
coefficients mutation rate cell division time days tumor detection size cells modes tool run following two modes individual statistics individual mode tool produces growth dynamics one tumor patient see panels fig furthermore growth dynamics phenotype distribution tumor depicted graphically statistics mode tool produces probability distribution detection time tumor see panel fig graphically quantitatively additionally tool calculates phenotypes appearance times first surviving lineage existence probability average number cells detection time features ttp provides intuitive graphical user interface enter parameters model shows plots dynamics tumor progression phenotype distribution probability density tumor detection plots also saved files various image formats furthermore tool create data files values tumor growth history probability distribution tumor detection set input parameters details format given appendix input parameters modes tool takes following input parameters growth coefficients case pfl mutation rate iii cell generation time fitness landscape mfl pfl emfl gfl birth probability phenotype optional density limits phenotypes individual mode additionally user needs provide number generations simulated statistics mode additional parameters tumor detection size number patients tumors survive initial stochastic fluctuations simulated experimental results panels fig show examples growth dynamics tumor progression although used exactly parameters panels observe time tumor initiation detection different panel show probability density tumor detection various fitness landscapes experimental results given appendix case studies several results models shown excellent agreement different aspects data results expected tumor size detection time using emfl fit reported polyp sizes patients well similarly using branching process uniform fitness landscape results expected time relapse tumor start treatment agree thoroughly observed times patients future work ongoing work also investigate mathematical models tumor dynamics occurring cancer treatment modeled continuoustime branching process thus interesting extension tool would model treatment well another interesting direction model seeding metastasis tumor progression hence simulate full patient rather primary tumor alone faithful models evolution cancer identified verification tools prism theoretical results might contribute understanding processes acknowledgments work supported erc start grant graph games fwf nfn grant rise fwf grant microsoft faculty fellow award foundational questions evolutionary biology initiative john templeton foundation joint program mathematical biology nih grant references vogelstein kinzler cancer genes pathways control nature medicine hanahan weinberg hallmarks cancer next generation cell jones chen parmigiani diehl beerenwinkel antal traulsen nowak siegel velculescu kinzler vogelstein willis markowitz comparative lesion sequencing provides insights tumor evolution pnas nowak evolutionary dynamics exploring equations life belknap press harvard university press cambridge komarova sengupta nowak networks cancer initiation tumor suppressor genes chromosomal instability journal theoretical biology iwasa michor nowak stochastic tunnels evolutionary dynamics genetics nowak michor komarova iwasa evolutionary dynamics tumor suppressor gene inactivation pnas diaz williams kinde hecht berlin allen bozic reiter nowak kinzler oliner vogelstein molecular evolution acquired resistance targeted egfr blockade colorectal cancers 
nature bozic antal ohtsuki carter kim chen karchin kinzler vogelstein nowak accumulation driver passenger mutations tumor progression pnas sadot fisher barak admanit stern hubbard harel toward verified biological models computational biology bioinformatics transactions reiter bozic allen chatterjee nowak effect one additional driver mutation tumor progression evolutionary applications haccou jagers vatutin branching processes variation growth extinction populations cambridge university press kerbel tumor angiogenesis past present near future carcinogenesis march hinton kwiatkowska norman parker prism tool automatic verification probabilistic systems tacas etessami stewart yannakakis polynomial time algorithms multitype branching processesand stochastic grammars stoc appendix details tool ttp available download http tool implemented java runs operating systems run java virtual machine jvm version necessary libraries included tool features tool supports various features two running modes individual mode ttp simulates tumor growth dynamics given number generations plots growth dynamics time current phenotype distribution produced simultaneously plots saved full growth history cell types also stored format described section statistics mode ttp simulates given number patients parameters simultaneously shows probability density tumor detection given detection size cells correspond tumor volume approximately average tumor detection time average fraction resident cells detection also shown simulations patients simulated existence probability detection average number cells average appearance year first surviving cell phenotypes calculated shown new window addition tool shows number detected died tumors per year separate window data stored format described section installation implementation details ttp written java makes use several libraries tool requires java runtime environment jre version start ttp command line type java make sure permission execute mac invoking tool command line overcome security restrictions tool composed following components model implementation statistics thread graphical user interface plot generator model implementation core component tool efficient implementation branching process following bozic number cells phenotype next generation calculated sampling multinomial distribution prob dyi mik mki number cells give birth identical daughter cell denoted number cells die denoted number cells divide additional mutation given number cells mutated phenotype given mki general one define mutation matrix encode probabilities mki cell phenotype mutates cell phenotype case matrix defined sequential accumulation mutations cell phenotype receive additional mutation positions encoding wildtype bit flips allowed mutations allowed positions equally likely back mutations considered fitness landscapes tool supports four fitness landscapes additional driver passenger mutations mfl emfl iii pfl gfl principal driver mutations increase birth rate cell whereas passenger mutations effect cell birth rate tables present complete definition mfl pfl respectively definition emfl gfl given section table multiplicative fitness landscape additional mutations phenotype birth probability density limit tool allows separate carrying capacity phenotype gfl used beginning simulation growth coefficients calculated given phenotypes since density limiting effects based values technical detail sizes birth probability would fall equivalently would fall set statistics thread statistics thread handles simulation 
many identical branching processes obtain statistical results simulations run table path fitness landscape additional mutations phenotype birth probability separate thread gui keeps responsive user requests completing necessary simulations relevant results automatically generated stored execution directory tool graphical user interface graphical user interface gui component contains frames forms required functionality tool also handles user requests distributes components within gui plots tumor progression dynamics individual mode probability density tumor detection statistics mode displayed multiple screenshots gui shown section plot generator plot generation based free jfreechart library generation scalable vector graphics svg apache xml graphics library http used example plot generated tool shown figure data files ttp produces various data files used analysis processing data given values record one line text files listing show example data file generated statistics mode average results given comments start hash individual mode data file contains number cells phenotype generations fig example generated plot tumor growth dynamics listing generated data file statistics mode used growth mutation generation time days bin died cumul died generation detected cumul det tumors went average mutant appearance mutant appeared number mutant appeared number runs performed user manual ttp invoked command java tool started gui used operations see figure screenshot gui input parameters control panel tool takes main parameters tumor progression fitness landscape mutation rate cell division time one prespecified fitness landscapes mfl pfl emfl used relevant growth coefficients defined general fitness landscape used window appears selection gfl specific birth probabilities phenotypes defined add density limits specific phenotypes window appears density limit checked phenotype different density limit given indicates limitation phenotype obtain statistical results number patients number tumors surviving lineage tumor detection size number cells tumor detected need provided modes parameter values specified tool either run individual statistics mode simulate growth dynamics single tumor click new simulation tool runs individual mode number cell generations simulated tumor consists cells statistics mode started clicking obtain statistics tool simulates given number tumors reach detection size calculates relevant statistics output individual mode tool generates plots growth dynamics phenotype distribution simulation furthermore entire tumor growth dynamics phenotype stored data file plots saved png svg files plots stored folder charts execution directory tool statistics mode tool generates plot probability density tumor detection statistics appearance time mutants detection extinction year shown separate windows see figures screenshots generated statistics automatically saved data file see listing experimental results screenshots section present additional experimental results multiple screenshots tool table compare probability tumor detection fitness landscapes emfl mfl pfl average tool needs approximately simulate tumor cells dual core processor table cumulative probability tumor detection different fitness landscapes results averages runs parameter values growth coefficients mutation rate detection size generation emfl mfl pfl fig graphical user interface ttp individual mode fig graphical user interface ttp statistics mode fig statistical results average detection extinction year fig statistical results 
average appearance year existence probability number cells detection time | 5 |
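To make the simulation loop concrete, the sketch below advances a small version of the described branching process one generation at a time: phenotypes are 4-bit strings, each phenotype's cell count is updated with a single multinomial draw over dividing without mutation, dividing with an extra driver mutation, or dying, and birth probabilities follow a multiplicative fitness landscape. The values of the resident birth probability, the growth coefficients and the mutation rate are illustrative assumptions, not the tool's defaults; most single-cell lineages go extinct, which is why the tool conditions its statistics on tumors that survive the initial stochastic fluctuations.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

L = 4                                    # number of driver positions in the bit string
b0 = 0.52                                # resident birth probability > 1/2 (assumed)
s = np.array([0.05, 0.05, 0.05, 0.05])   # growth coefficients per position (assumed)
u = 1e-2                                 # driver mutation probability per division (assumed, large for illustration)

phenotypes = list(product((0, 1), repeat=L))
index = {p: i for i, p in enumerate(phenotypes)}

def birth_prob(p):
    """Multiplicative fitness landscape: each mutated position multiplies b0 by (1 + s_i)."""
    return min(b0 * np.prod(np.where(np.array(p) == 1, 1.0 + s, 1.0)), 1.0)

def step(counts):
    """One generation: a multinomial draw per phenotype, as in the tool's sampling scheme."""
    new = np.zeros_like(counts)
    for p, i in index.items():
        n = counts[i]
        if n == 0:
            continue
        b = birth_prob(p)
        free = [j for j in range(L) if p[j] == 0]       # positions still wild type
        p_mut = b * u if free else 0.0                  # division where one daughter gains a driver
        # categories: clean division, mutating division, death (remainder)
        clean, mutated, _died = rng.multinomial(n, [b - p_mut, p_mut, 1.0 - b])
        new[i] += 2 * clean + mutated                   # mutating divisions keep one daughter of type p
        for _ in range(mutated):                        # the other daughter flips one random wild-type bit
            q = list(p)
            q[rng.choice(free)] = 1
            new[index[tuple(q)]] += 1
    return new

counts = np.zeros(len(phenotypes), dtype=np.int64)
counts[index[(0, 0, 0, 0)]] = 1                         # initiate from a single resident cell
for t in range(1500):
    counts = step(counts)
    if counts.sum() == 0 or counts.sum() >= 100_000:    # extinction or an assumed "detection size"
        break
print("generation", t, "total cells", counts.sum())
```

Because only the per-phenotype counts are stored and each generation costs one multinomial draw per phenotype, this scheme stays cheap even for large tumors, which is the efficiency argument made for the tool.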
composition gray isometries sierra marie lauresta virgilio sison institute mathematical sciences physics university philippines los college laguna vpsison vpsison abstract coding theory gray isometries usually defined mappings finite frobenius rings include ring integers modulo finite fields paper derive isometric mapping composition gray isometries image composition block code length homogeneous distance necessarily linear quaternary block code length lee distance introduction block code length ring set called codewords said linear necessarily free submodule completely determined matrix codes rings gained attention hammons kumar calderbank sloane discovered certain good peculiar nonlinear codes binary field viewed images linear codes integer ring gray map onto defined map isometry lee weight element equal hamming weight image hamming weight binary vector number nonzero components vector carlet introduced generalization ring integers modulo let positive integer element expansion image generalized gray map boolean function given identify boolean function binary word length simply listing values thus generalized gray map seen nonsurjective mapping image code order generalized gray map naturally extended set boolean functions obtain usual gray map generalized gray map denote takes onto set boolean functions give binary words even hamming weight methodology extend usual gray isometry bijective mapping onto table shows binary image element clearly lee weight element equal hamming weight binary image table isometric gray map restrict mapping follows table map apply following homogeneous weight extend coordinatewisely table shows image element generalized gray map expansion mapping weight preserving homogeneous weight element equal hamming weight image table isometric gray map results discussion take composition table shows quaternary image element table isometric map mapping weight preserving homogeneous weight element equal lee weight image extended naturally let linear block code length minimum homogeneous distance image set proposition set following properties iii necessarily linear block code length lee distance equal every codeword even lee weight illustrate consider linear block code generated matrix code codewords minimum hamming distance minimum homogeneous distance codewords generated information words respectively quaternary images whose superimposition code example also shows additive homomorphism conclusion recommedation paper offers simple way define isometric mappings general take code block code sufficient necessary conditions linearity determined extension construction galois rings inevitable references hammons kumar calderbank sloane linearity kerdock preparata goethals related codes ieee trans inform theory vol january carlet codes ieee trans inform theory vol july greferath schmidt gray isometries finite chain rings nonlinear ternary code ieee trans vol november | 7 |
video enhancement flow tianfan google research baian chen mit csail jiajun mit csail donglai wei harvard university nov william freeman mit csail google research frame interpolation input videos epic flow interp epic flow flow interp flow video denoising input noisy videos epic flow denoise epic flow flow denoise flow figure many video processing tasks temporal top video denoising bottom rely flow estimation many cases however precise optical flow estimation intractable could suboptimal specific task example although epicflow predicts precise movement objects flow field aligns well object boundaries small errors estimated flow fields result obvious artifacts interpolated frames like obscure fingers flow proposed work interpolation artifacts disappear similarly video denoising flow deviates epicflow leads cleaner output frame flow visualization based color wheel shown corner abstract many video processing algorithms rely optical flow register different frames within sequence however precise estimation optical flow often neither tractable optimal particular task paper propose taskoriented flow toflow flow representation tailored specific video processing tasks design neural network motion estimation component video processing component two parts jointly trained manner facilitate learning proposed toflow demonstrate toflow outperforms traditional optical flow three different video processing tasks frame interpolation video video also introduce video dataset video processing better evaluate proposed algorithm work done tianfan xue student mit csail introduction motion estimation key component video processing tasks like temporal frame interpolation video denoising video video processing algorithms use approach first estimate motion input frames register based estimated flow fields process registered frames generate final output therefore accuracy flow estimation greatly affects performance approaches however precise flow estimation challenging slow brightness constancy assumption many motion estimation algorithms rely may fail due variations lighting pose presence motion blur occlusion also many motion estimation algorithms involve solving optimization problem making inefficient applications example widely used epicflow algorithm takes seconds frame million pixels moreover motion estimation algorithms aim solve motion field matches actual objects motion however may best motion representation video processing figure shows example frame interpolation even though epicflow calculates precise motion field whose boundary fingers image interpolated frame based contains obvious artifacts due occlusion contrast using flow introduced work model generates better interpolation result though estimated motion field differs optical flow magnitude align object boundaries similarly video denoising although epicflow shown matches boundary girl hair frame denoised epicflow much noisier one flow suggests specific video processing tasks exist flow representations match actual object movement lead better results paper propose learn flow toflow performing motion analysis video processing jointly trainable convolutional network network consists three modules first one estimates motion fields input frames second one registers input frames based estimated motion fields third one generates target output registered frames three modules jointly trained minimize loss output frames ground truth unlike flow estimation networks flow estimation module framework predicts motion field tailored specific task frame interpolation 
video denoising trained together corresponding video processing module proposed toflow several advantages first significantly outperforms optical flow algorithms three video processing tasks second highly efficient taking input image resolution third learning unlabeled video frames evaluate toflow build video dataset video processing existing large video datasets like designed vision tasks like event classification videos often low resolutions significant motion blurs making less useful video processing evaluate video processing algorithms systematically introduce new dataset consists video clips higher downloaded build three benchmarks videos interpolation respectively hope dataset contribute future research video processing videos diverse examples contributions paper first propose toflow flow representation tailored specific https video processing tasks significantly outperforming standard optical flow second propose trainable video processing framework handle various tasks including frame interpolation video denoising video third also build video dataset video processing related work optical flow estimation dated back horn schunck optical flow algorithms sought minimize energy terms image alignment flow smoothness current methods like epicflow flow exploit image boundary segment cues improve flow interpolation among sparse matches recently deep learning methods proposed faster inference trained supervision without work used flow network spynet instead training minimize flow estimation error spynet train jointly video processing network learn flow representation best specific task video processing focus three video processing tasks frame interpolation video denoising video existing algorithms areas explicitly estimate dense motion among input frames reconstruct reference frame according image formation models frame interpolation video denoising refer readers survey articles comprehensive literature reviews flourishing research topics deep learning video enhancement inspired success deep learning researchers directly modeled video enhancement tasks regression problems without representing motions designed deep networks frame interpolation recently differentiable image sampling layers deep learning motion information incorporated networks trained jointly video enhancement task approaches applied video interpolation video interpolation object novel view synthesis eye gaze manipulation superresolution although many algorithms also jointly train flow estimation rest parts network systematical study advantage joint training paper illustrate advantage trained flow toy examples also demonstrate superiority general flow algorithm various tasks also present general framework easily adapt different video processing tasks tasks paper explore three video enhancement tasks frame interpolation video video temporal frame interpolation given low frame rate video temporal frame interpolation algorithm generates high frame rate video synthesizing additional frames two temporally neighboring frames specifically let two consecutive frames input video task estimate missing middle frame temporal frame interpolation doubles video frame rate recursively applied even higher frame rates video given degraded video artifacts either sensor compression video aims remove noise compression artifacts recover original video typically done aggregating information neighboring frames specifically let frames input video task video denoising estimate middle frame iref given degraded frames input ease description rest paper 
simply call tasks video denoising video similar video denoising given consecutive frames input task video recover middle frame work first upsample input frames resolution output using bicubic interpolation algorithm needs recover component output image flow video processing video processing algorithms two steps motion estimation image processing example temporal frame interpolation algorithms first estimate pixels move input frames frame move pixels estimated location output frame frame similarly video denoising algorithms first register different frames based estimated motion fields remove noises aggregating information registered frames paper propose use flow toflow integrate two steps learn flow design trainable network three parts figure flow estimation module estimates movement pixels input frames image transformation module warps frames reference frame image processing module performs video interpolation denoising registered frames flow estimation module jointly trained rest network learns predict flow field fits particular task toy example discussing details network structure first start two synthetic sequences demonstrate toflow outperform traditional optical flows left video denoising frame interpolation input frames input frames case ground truth flows flow warped interpolated flow frame case flows toflow warped interpolated frame toflow case ground truth flows flow warped toflow denoised frame case flows toflow warped toflow denoised frame figure toy example demonstrates effectiveness task oriented flow traditional optical flow see section details figure shows example frame interpolation green triangle moving bottom front black background warp first third frames second even using ground truth flow case left column obvious doubling artifact warped frames due occlusion case middle column problem optical flow literature final interpolation result based two warp frames still contains artifact case right column contrast toflow stick object motion background static motion case left column toflow however barely artifact warped frames case middle column interpolated frame looks clean case right column hallucinated background motion actually helps reduce doubling artifacts shows toflow reduce errors synthesize frames better ground truth flow similarly right figure show example video denoising random small boxes input frames synthetic noises warp first third frames second using ground truth flow noisy patterns random squares remain denoised frame still contains noise case right column shadows boxes bottom warp two frames using toflow case left column noisy patterns also reduced eliminated case middle column final denoised frame base contains almost noise also shows toflow learns reduce noise input frames inpainting neighboring pixels flow network input diff scales frame spn flow net reference motion used interp frame motion motion image processing network interpolation mask improc net output frame warped frame motion mask masked frame motion mask masked frame spn flow net input frames motion fields flow estimation warped input transformation image processing interpolated frame warped frame figure left model using flow video processing given input video first calculate motion frames flo estimation network warp input frames reference using spatial transformer networks aggregate warped frames generate output image right top detailed structure flow estimation network orange network left right bottom detailed structure image processing network interpolation gray network left traditional flow 
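The figure caption above says the input frames are registered to the reference frame with spatial transformer networks, i.e. a differentiable bilinear warp, so that the loss on the final output can train the flow-estimation module end to end. Below is a minimal sketch of such a backward warp; it assumes PyTorch, the name backward_warp is illustrative, and it is not the authors' released code.

import torch
import torch.nn.functional as F

def backward_warp(frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp `frame` (N,C,H,W) toward the reference using `flow` (N,2,H,W) in pixels.

    flow[:, 0] is the horizontal (x) displacement, flow[:, 1] the vertical (y) displacement.
    """
    n, _, h, w = frame.shape
    # Base sampling grid of pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=frame.dtype, device=frame.device),
        torch.arange(w, dtype=frame.dtype, device=frame.device),
        indexing="ij",
    )
    x_new = xs.unsqueeze(0) + flow[:, 0]          # (N,H,W)
    y_new = ys.unsqueeze(0) + flow[:, 1]
    # grid_sample expects coordinates normalized to [-1, 1], ordered (x, y).
    grid = torch.stack(
        (2.0 * x_new / (w - 1) - 1.0, 2.0 * y_new / (h - 1) - 1.0), dim=-1
    )                                             # (N,H,W,2)
    return F.grid_sample(frame, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

# Because bilinear resampling is differentiable, a pixel loss on the processed output
# back-propagates into the flow network, shaping a task-oriented flow rather than a
# flow trained toward ground-truth motion.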
discuss details module follows later modules network transform first third frames second frame synthesis flow estimation module image transformation module flow estimation module calculates motion fields input frames sequence frames interpolation denoising select middle frame reference flow estimation module consists flow networks structure share set parameters flow network orange network figure takes one frame sequence reference frame input predicts motion use motion estimation framework proposed handle large displacement frames network structure shown top right subfigure figure input network gaussian pyramids reference frame another frame rather reference scale takes frames scale upsampled motion fields previous prediction input calculates accurate motion fields uses flow network three shown figure yellow networks small modification frame interpolation reference frame frame input network synthesize deal motion estimation module interpolation consists two flow networks taking first third frames input predict motion fields second frame first third respectively motion fields using predicted motion fields previous step image transformation module registers input frames reference frame use spatial transformer networks registration synthesizes new frame transformation using bilinear interpolation one important property module gradients image processing module flow estimation module learn flow representation adapts different video processing tasks image processing module use another convolutional network image processing module generate final output task use slightly different architecture please refer appendix details occluded regions warped frames mentioned section occlusion often results doubling artifacts warped frames solve interpolation algorithms estimate occlusion masks use pixels occluded interpolation inspired also design optional mask prediction network frame interpolation addition image processing module mask prediction network takes two estimated motion fields input one frame frame frame frame bottom right figure predicts two occlusion masks mask input input warp epicflow warp epicflow warp toflow toflow interp mask warp toflow toflow interp use mask figure comparison epicflow interpolation toflow interpolation without mask warped frame frame mask warped frame frame invalid regions warped frames masked multiplying corresponding masks middle frame calculated another convolutional neural network warped frames masked warped frames input please refer appendix details network structure even without mask prediction network flow estimation mostly robust occlusion shown third column figure warped frames using toflow little doubling artifacts therefore two warped frames without learned masks network synthesizes decent middle frame top image right column mask network helps remove tiny artifacts faint ghost bottom thumb circled white visible zoomed training accelerate training procedure first modules network together details described flow estimation network flow network consists two steps first tasks motion estimation network sintel dataset realistically rendered video dataset ground truth optical flow minimizing difference estimated optical flow ground truth second step video denoising noisy blurry input frames improve robustness input video interpolation frames video triplets input minimizing difference estimated optical flow ground truth flow enables flow network calculate motion unknown frame frame given frames input empirically find improve convergence speed mask network also occlusion 
mask estimation network video interpolation optional component video processing network joint training two occlusion masks estimated together network optical flow input network trained minimizing loss output masks occlusion masks joint training train modules jointly minimizing loss recovered frame ground truth without supervision estimated flow fields optimization use adam weight decay run epochs batch size tasks learning rate superresolution learning rate interpolation dataset acquire high quality videos video processing previous works take videos resulting video datasets small size limited terms content alternatively resort vimeo many videos taken professional cameras diverse topics addition search videos without compression frame compressed independently avoiding artificial signals introduced video codecs many videos composed multiple shots use simple shot detection algorithm break video consistent shots use gist feature remove shots similar scene background result collect new video dataset vimeo consisting videos independent shots different content standardize input resize frames fixed resolution shown figure frames sampled dataset contain diverse content indoor outdoor scenes keep consecutive frames average motion magnitude pixels right column figure shows histogram flow magnitude whole dataset flow fields calculated using spynet generate three benchmarks dataset three video enhancement tasks studied paper vimeo interpolation benchmark select frame triplets video clips following three criteria interpolation task first pixels motion larger pixels neighboring frames criterion removes static videos second difference reference warped frame using optical flow calculated using spynet pixels maximum intensity level image removes frames large intensity change hard frame interpolation third average difference motion fields neighboring frames less pixel removes motion frequency flow magnitude sample frames frequency flow frequency flow magnitude image mean flow frequency figure dataset sampled frames dataset demonstrating high quality wide coverage dataset histogram flow magnitude pixels histogram mean flow magnitude images flow magnitude image average flow magnitude pixels image interpolation algorithms including based linear motion assumption vimeo benchmark select frame septuplets video clips denoising task using first two criteria introduced interpolation benchmark video denoising consider two types noises gaussian noise standard deviation mixed noises including noise addition gaussian noise video deblocking compress original sequences using ffmpeg codec format quality value vimeo benchmark also use set septuplets denoising build vimeo benchmark factor resolution input output images respectively generate videos input use matlab imresize function first blurs input frames using cubic filters downsamples videos using bicubic interpolation methods vimeo interp dvf dataset psnr ssim psnr ssim spynet epicflow dvf adaconv sepconv fixed flow fixed flow mask toflow toflow mask table quantitative comparison different frame interpolation algorithms vimeo interpolation test set dvf test set frame interpolation datasets evaluate two datasets vimeo interpolation benchmark dataset used evaluation metrics use two quantitative measure evaluate performance interpolation algorithms peak ratio psnr structural similarity ssim index section evaluate two variations proposed network first one train module separately first motion estimation train video processing fixing flow module similar video processing 
algorithms refer fixed flow one jointly train modules described section refer toflow networks trained vimeo benchmarks collected evaluate two variations three different tasks also compare image processing algorithms baselines first compare framework interpolation algorithms motion estimation use epicflow spynet handle occluded regions mentioned section calculate occlusion mask frame using use regions interpolate middle frame compare models deep voxel flow dvf adaptive convolution adaconv separable convolution sepconv last also compare fixed flow another baseline interpolation algorithm epicflow adaconv sepconv fixed flow toflow ground truth figure comparison different frame interpolation algorithms views shown lower right dataset dataset dataset input noisy frame fixed flow toflow ground truth toflow ground truth figure comparison different algorithms video denoising differences clearer results table shows quantitative results vimeo interpolation benchmark toflow general outperforms others interpolation algorithms traditional interpolation algorithms epicflow spynet recent based algorithms dvf adaconv sepconv significant margin moreover even model trained dataset also dvf dvf dataset psnr ssim also significant boost fixed flow showing network learn better flow representation interpolation joint training figure also shows qualitative results algorithms epicflow fixed flow generate doubling artifacts like hand first row head second row adaconv sides doubling artifacts tends generate blurry output directly synthesizing interpolated frames without motion module sepconv increases sharpness output frame compared adaconv still artifacts see hat bottom row compared methods toflow correctly recovers sharper boundaries fine details even presence large motion baselines compare framework standard deviation gaussian noise additional input two grayscale datasets also compare fixed flow variant framework two rgb datasets results two rgb datasets vimeomixed toflow beats fixed flow two measurements shown table output toflow also contains less noise differences clearer shown left side figure shows toflow learns motion field denoising video two grayscale datasets toflow outperforms ssim even finetuned dataset note even though toflow achieves comparable performance psnr output toflow much sharper shown figure words billboard kept denoised frame toflow top right figure leaves tree also clearer bottom right figure therefore toflow beats ssim better reflects human perception psnr setup first train evaluate framework vimeo denoising benchmark either gaussian noise mixture noise compare network monocular video denoising algorithm transfer videos vimeo denoising benchmark grayscale create vimeobw gaussian noise retrain network also evaluate framework dataset video deblocking table shows toflow outperforms figure also shows qualitative comparison toflow fixed flow note compression artifacts around girl hair top man nose bottom completely removed toflow vertical line around man eye bottom due blocky compression also removed algorithm input compressed frames ground truth toflow fixed flow figure comparison algorithm video deblocking difference clearer psnr ssim psnr ssim psnr ssim psnr ssim fixed flow toflow methods table quantitative comparisons video denoising input methods vimeo bayessr psnr ssim psnr ssim full clip deepsr bayessr frame bicubic deepsr bayessr frames fixed flow toflow table results video clip vimeo contains frames clip bayessr contains frames video datasets evaluate algorithm two dataset vimeo 
benchmark dataset provided bayessr later one consists sequences frames baselines compare framework bicubic upsampling two video algorithms bayessr use version provided deepsr well baseline fixed flow estimation module bayessr deepsr take various number frames input therefore bayessr datset report two numbers one using whole sequence use seven frames middle toflow fixed flow take frames input methods psnr ssim fixed flow toflow table results video deblocking results table shows quantitative results algorithm performs better baseline algorithms using frames input also achieves comparison performance bayessr bayessr uses frames input framework uses frames show qualitative results figure compared either deepsr fixed flow jointly trained toflow generates sharper output notice words cloth top tip knife bottom clearer frame synthesized toflow shows effectiveness joint training experiments train evaluate network nvidia titan gpu input clip resolution network takes interpolation denoising input resolution network flow module takes estimated motion field last figure also visualizes motion fields learned different tasks even using network structure taking input frames estimated flows different tasks different flow field interpolation smooth even occlusion boundary flow field artificial movements along texture edges indicates network may learn encode different information useful different tasks learned motion fields conclusion work propose novel video processing model exploits motion cues traditional video bicubic deepsr fixed flow toflow ground truth figure comparison different algorithms shown top left result differences clearer input flow interpolation flow denoising flow flow deblocking figure visualization motion fields different tasks processing algorithms normally consist two steps motion estimation video processing based estimated motion fields however genetic motion tasks might suboptimal accurate motion estimation would neither necessary sufficient tasks framework bypasses difficulty modeling motion signals loop evaluate algorithm also create new dataset video processing extensive experiments temporal frame interpolation video video demonstrate algorithm achieves performance acknowledgements work supported nsf nsf facebook shell research toyota research institute references kothari lee natsev toderici varadarajan vijayanarasimhan video classification benchmark baker scharstein lewis roth black szeliski database evaluation methodology optical flow ijcv brox bruhn papenberg weickert high accuracy optical flow estimation based theory warping eccv butler wulff stanley black naturalistic open source movie optical flow evaluation eccv caballero ledig aitken acosta totz wang shi video temporal networks motion compensation cvpr fischer dosovitskiy ilg golkov van der smagt cremers brox flownet learning optical flow convolutional networks iccv ganin kononenko sungatullina lempitsky deepwarp photorealistic image resynthesis gaze manipulation eccv ghoniem chahir elmoataz nonlocal video denoising simplification inpainting using discrete regularization graphs signal horn schunck determining optical flow artif huang wang wang bidirectional recurrent convolutional networks nips jaderberg simonyan zisserman spatial transformer networks nips kappeler yoo dai katsaggelos video convolutional neural networks ieee tci kingma adam method stochastic optimization iclr liao tao jia video via deep learning cvpr liu freeman video denoising algorithm based reliable motion estimation eccv liu sun bayesian approach 
adaptive video super resolution cvpr liu sun bayesian adaptive video super resolution ieee tpami liu yeh tang liu agarwala video frame synthesis using deep voxel flow iccv liao tao jia handling motion blur cvpr maggioni boracchi foi egiazarian video denoising deblocking enhancement separable nonlocal spatiotemporal transforms ieee tip mathieu couprie lecun deep video prediction beyond mean square error iclr dense estimation segmentation optical flow robust techniques ieee tip nasrollahi moeslund comprehensive survey mva niklaus mai liu video frame interpolation via adaptive convolution cvpr niklaus mai liu video frame interpolation via adaptive separable convolution iccv oliva torralba modeling shape scene holistic representation spatial envelope ijcv ranjan black optical flow estimation using spatial pyramid network cvpr revaud weinzaepfel harchaoui schmid epicflow interpolation correspondences optical flow cvpr tao gao liao wang jia deep video iccv varghese wang video denoising based spatiotemporal gaussian scale mixture model ieee tcsvt wang zhu kalantari efros ramamoorthi light field video capture using hybrid imaging system siggraph wedel cremers pock bischof regularization high accuracy optic flow cvpr werlberger pock unger bischof optical flow guided video interpolation restoration emmcvpr ranftl koltun accurate optical flow via direct cost volume processing cvpr harley derpanis back basics unsupervised learning optical flow via brightness constancy motion smoothness eccv workshop wang chen video frame interpolation exploiting interaction among different levels ieee tcsvt zhou tulsiani sun malik efros view synthesis appearance flow eccv zitnick kang uyttendaele winder szeliski video view interpolation using layered representation acm tog appendices additional qualitative results addition qualitative results shown main text figures show additional results following benchmarks vimeo interpolation benchmark figure vimeo denoising benchmark figure grayscale videos vimeo deblocking benchmark figure vimeo benchmark figure avoid randomly select testing images test datasets show figures main text differences different algorithms clearer zoomed flow estimation module used spynet flow estimation module consists network structure independent set parameters consists sets convolutional zero padding batch normalization relu layers number channels convolutional layer respectively input motion first network zero motion field image processing module use slight different structures image processing module different tasks temporal frame interpolation without masks build residual network consists averaging network residual network averaging network simply averages two transformed frames frame frame respectively residual network also takes two transformed frames input calculates difference actual second frame average two transformed frames convolutional network consists three convolutional layers followed relu layer kernel sizes three layers respectively zero padding numbers output channels respectively final output summation outputs two networks averaging network residual network video image processing module uses convolutional structure convolutional layers relu layers interpolation without residual structure also tried residual structure significant improvement video image processing module consists pairs convolutional layers relu layers kernel sizes four layers respectively zero padding numbers output channels respectively mask network similar flow estimation module mask estimation network also 
convolutional neural network pyramid figure level consists structure sets convolutional zero padding batch normalization relu layers independent set parameters output channels respectively first level input mask network estimated flow diff scales masks masks masks figure structure mask network network concatenation two estimated optical flow fields channels concatenation output concatenation two estimated masks channel per mask second level inputs network switch concatenation two estimated optical flow fields resolution masks previous level resolution twice previous level way first level mask network estimates rough mask rest refines high frequency details mask epicflow adaconv sepconv fixed flow toflow ground truth figure qualitative results video interpolation samples randomly selected vimeo interpolation benchmark differences different algorithms clear zoomed input fixed flow toflow ground truth input toflow ground truth figure qualitative results video denoising top five rows results color videos bottom rows grayscale videos samples randomly selected vimeo denoising benchmark differences different algorithms clear zoomed input toflow ground truth figure qualitative results video deblocking samples randomly selected vimeo deblocking benchmark differences different algorithms clear zoomed bicubic deepsr fixed flow toflow ground truth figure qualitative results video samples randomly selected vimeo benchmark differences different algorithms clear zoomed deepsr originally trained images evaluated frames experiment artifacts | 1 |
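The appendix of the record above outlines the image-processing module for frame interpolation as the sum of an averaging branch over the two warped frames and a small residual convolutional branch. The kernel sizes and channel counts were lost in extraction, so the sketch below shows only that structure; the hyperparameters (three 3x3 convolutions, 64 hidden channels) are placeholders, not the paper's values.

import torch
import torch.nn as nn

class InterpolationHead(nn.Module):
    """Averaging branch plus learned residual over the two warped input frames."""

    def __init__(self, channels: int = 3, hidden: int = 64):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Conv2d(2 * channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=3, padding=1),
        )

    def forward(self, warped_prev: torch.Tensor, warped_next: torch.Tensor) -> torch.Tensor:
        avg = 0.5 * (warped_prev + warped_next)                           # averaging branch
        res = self.residual(torch.cat([warped_prev, warped_next], dim=1))  # learned correction
        return avg + res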
world computer science information technology journal wcsit issn vol intelligent emergency message broadcasting vanet using pso ghassan samara tareq alhmiedat department computer science zarqa university zarqa jordan department information technology tabuk university tabuk saudi arabia new type mobile hoc network called vehicular hoc networks vanet created fertile environment research research protocol particle swarm optimization contention based broadcast pcbb proposed fast effective dissemination emergency messages within geographical area distribute emergency message achieve safety system research help vanet system achieve safety goals intelligent efficient way pso vanet message broadcasting emergency system safety system new techniques system aim make intelligent vehicle think communicate vehicles act prevent hazards introduction recent year rapid development wireless communication networks made car car car infrastructure communications possible mobile hoc networks manets given birth new type high mobile manet called vehicular hoc networks vanet creating fertile area research aiming road safety efficient driving experience infotainment information entertainment vanet safety applications depend exchanging safety information among vehicles communication vehicle infrastructure communication using control channel see figure creating safety system road important critical concern human today year nearly million people die result road traffic accidents deaths day half people travelling car injuries fifty times number number cars approximately estimated million cars around world annually constant increase million car around world constant raise estimated number cars nowadays exceeding one billion raise possibility increase number crashes deaths roads road traffic accidents predicted become fifth leading cause death world resulting estimated million death year stated world health organization besides traffic congestion makes huge waste time fuel makes developing efficient safety system urgent need road figure vanet structure vanet safety communication made two means periodic safety message called beacon paper event driven message called emergency message paper sharing one control channel beacon messages status messages containing status information sender vehicle like position speed heading beacons provide fresh research funded deanship research graduate studies zarqa jordan wcsit information sender vehicle surrounding vehicles network helping know status current network predict movement vehicles beacons sent aggressively neighboring vehicles messages second depending one forwarder enough high mobile network like vanet furthermore authors depend beacons gain information proposed use hello message creates chance increase channel load emergency messages messages sent vehicle detect potential dangerous situation road information disseminated alarm vehicles probable danger could affect incoming vehicles vanet high mobile network nodes moving speeds may exceed means vehicle move even vehicles far danger reach soon milliseconds important avoid danger contention period schemes waiting time receiver waits rebroadcasting original message received sender proposed many researchers authors proposed distributed broadcast ldmb receivers emergency message potential forwarders forwarder computes waits contention time using equation contention time ends forwarder start rebroadcast emergency message emergency messages vanet sent broadcast fashion vehicle inside coverage area sender receive message coverage area 
enough hardly reaches dsrc communication range due attenuation fading effects away vehicles danger receive critical information avoid danger furthermore probability message reception reach short distances low half communication range moreno therefore technique increase emergency message reception high reliability availability authors proposed message forwarding strategy sending emergency message broadcast fashion selecting best forwarder available vehicles receiving message potential forwarders order decide node forwards message receivers assigned contention window waiting time contention window size smallest farthest node biggest size nearest node words protocol give priority farthest node next forwarder problem last two protocols message receivers compute waiting time wait make rebroadcast even closest vehicles sender make entire network vehicles busy message received duo high mobility vehicles distribution nodes within network changes rapidly unexpectedly wireless links initialize break frequently unpredictably therefore broadcasting messages vanets plays crucial rule almost every application requires novel solutions different form networks broadcasting messages vanets still open research challenge needs efforts reach optimum solution another protocol proposed called emergency message dissemination vehicular emdv protocol enabling farthest vehicle within transmission range make rebroadcasting emergency message choosing one forwarder vehicle appropriate high mobile network like vanet position always changing receiver vehicle may become range sending message simply receiver receive message channel problems like jam denial service see figure broadcasting requirements high reliability high dissemination speed short latency well communications problems associated regular broadcasting algorithms high probability collision broadcasted messages lack feedback hidden node problem paper concerned proposing new intelligent broadcasting technique emergency message vanet aiming increase reception emergency information research background emergency message rebroadcast authors proposed broadcast scheme utilizes neighbor information exchanging hello messages among vehicles probable danger detected warning message broadcasted neighbors farthest vehicle selected forwarder depending information gained hello message preselected forwarder receives message rebroadcast figure sender utilizing emdv authors proposed receivers message select random waiting times make acknowledgment wcsit avoid nodes closer original sender emergency message rebroadcast network segments another way rebroadcast message divide network segments proposed acknowledgment scheme causes delay rebroadcast authors proposed protocol called urban hop broadcast umb aiming maximize message progress avoid broadcast storm hidden node reliability problems protocol assigns duty forwarding acknowledging broadcast packets one vehicle dividing road portion inside transmission range segments choosing vehicle furthest segment without prior topology information source node transmits broadcast control packet called request broadcast rtb contains position source segment size receiving rtb packet nodes compute distance sender receiver nodes transmit channel jamming signal called contains several equal distance source number segments farther distance longer burst node transmits senses channel channel concludes farthest node source node returns ctb control packet containing identifier source authors proposed forwarding cbf protocol vehicle sends packet 
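The forwarding strategy recalled above sizes each receiver's contention window by its distance from the sender, smallest for the farthest node, so the farthest potential forwarder tends to rebroadcast first and the others cancel when they overhear it. The exact window sizes and slot duration are not recoverable from the record, so the constants in the sketch below (32 slots, 13-microsecond slots) are assumptions used only to show the rule.

import random

def contention_wait(distance_m, comm_range_m, n_slots=32, slot_s=13e-6, rng=random):
    """Random wait drawn from a window that shrinks as distance from the sender grows."""
    frac = 1.0 - min(max(distance_m, 0.0), comm_range_m) / comm_range_m   # 0 = farthest, 1 = at sender
    window = max(1, round(frac * n_slots))                                # this node's window, in slots
    return rng.randint(0, window - 1) * slot_s

# Example: with a 300 m range, a receiver 290 m away draws from a one-slot window and
# waits essentially nothing, while one 60 m away draws from a ~26-slot window and will
# normally hear the first rebroadcast before its own timer fires, so it suppresses its copy.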
broadcast message neighbors receiving packet neighboring vehicle contend forwarding packet node maximum progress destination shortest contention time first rebroadcast packet nodes receive rebroadcast message stop contention delete previously received message protocol mainly proposed forwarding periodic safety message beacons problem protocol management technique manage contention neighboring vehicles chance nearest vehicle sender may hear rebroadcast another vehicle vehicle rebroadcast message called hidden node problem tobagi kleinrock also may lead broadcast storm problem makes protocol useless authors suggested emergency message rebroadcasted receivers located farther distances sender selection shorter waiting times see equation smart broadcasting protocol addressed objective umb using different methodology upon reception rtb message vehicle determine segment set random time segment contention window size segment contention window size vehicles furthest segment randomly choose time vehicles next nearer segment choose value vehicles near sender wait longer time authors proposed contention based broadcasting cbb protocol increasing emergency message reception performance emergency message broadcasted fashion forwarders selected original message sent cbb proven achieve superiority emdv protocol choses one forwarder rebroadcast emergency information gives message chance overcome preselected forwarder failure vehicles decrement backoff timers one listening physical channel waiting vehicle receives valid ctb message exit contention time phase listen incoming broadcast contrary node finishes backoff timer send ctb containing identity rebroadcast incoming broadcast criteria choosing forwarders depends progress segment localization see figure vehicles located final segment potential forwarder authors proposed geographic random forwarding geraf protocol divides network equally adjacent sectors transmitter source elects sectors starting farthest one sending rtb message nodes elected sectors reply ctb message one node reply ctb message node become next forwarder one node sent ctb message source issue collision message make procedure elect next forwarder depending probabilistic rule many approaches discussed details previous paper emergency message broadcasting iii proposed protocol figure emergency message sending transmission range section presents detailed design description pcbb protocol aims increase percentage wcsit reception emergency information utilizing contention window position based forwarding scheme pso intelligent technique sending message single hop enables reach number vehicles within limited distance best cases however number increased order warn vehicles possible dangers reach danger area beacons emergency messages received neighboring vehicles high probability reliability critical nature information provide vehicle detects danger issues emergency message warn vehicles within network vehicles opposite direction sender movement located transmission range must receive message covering whole area guarantee vehicles receive message channel collisions fading effects percentage emergency message reception network vehicles must high possible sen paper proposes categorize emergency message sending make easier receiver vehicle recognize importance message received table lists codes category example vehicle receives two messages containing categories processes message contains category first contains critical information assigning message code sender add data message coordinates danger 
zone receiver however aspect would discussed detail current study proposed structure emergency message shown figure three inputs namely cid minb maxb added help receiver vehicle determine action take receiving emergency message safety life cooperative collision warning safety intersection warning safety transit vehicle signal priority toll collection service announcement movie download hours mpeg minb maxb mentioned earlier network divided several segments help vehicle determine next forwarder emergency message proposed transmission range sender divided segments make easier sender determine last vehicle last segment eventually selected next forwarder paper distance sender forwarder authors established fixed distance current study however distance sender farthest vehicle application emergency break cid mentioned earlier assigning forwarding job emergency message receiver vehicles message may cause broadcast storm problem assigning forwarding job one receiver vehicle may appropriate sometimes specific forwarder may receive emergency message hence vehicles last segment furthest one make forwarding emergency message forwarder fails receive forward message table emergency message classification safety life data choosing next candidate forwarder process begins gathering information obtained beacons received neighbors information inserted ordered sender vehicle chooses farthest vehicle assigns candidate forwarder process forwarding emergency message increase probability reception forwarded signal communicate vehicles road reach longer distances option used paper choosing one forwarder inappropriate high mobile networks vanet forwarder might receive emergency message solve problem dividing network several segments proposed vehicles inside last segment farthest segment sender wait period time determine whether candidate forwarder rebroadcasted emergency message none made rebroadcast vehicles located farthest segment forwards message mentioned earlier assigning forwarding job emergency message receiver vehicles message may cause broadcast storm problem assigning forwarding job one receiver vehicle may inappropriate specific forwarder may receive emergency message hence vehicles last segment must forward emergency message forwarder fails receive forward message every beacon received vehicle provides important information sender status information utilized form rich real time image current network topology facilitates better network vehicle communication also helps informed potential dangers occur vehicle problem detects problem determines problem life critical life critical safety life messages given highest priority processed sent kind messages msg sen sender code message code time stamp msg message data data sent cid forwarder candidate minb minimum boundary maxb maximum boundary preparing send priority figure emergency message illustration order cover wider area message reception neighboring vehicles serve potential forwarders forwarder wait certain period time contention time forwarding message code code wcsit anything beyond considered distance last vehicle sender computed using equation segment expanded include vehicles could determined using equation dis distance last vehicle sender senpos position sender obtained gps forpos forwarder position last vehicle last segment calculation doubles size last segment increases number potential forwarders calculated number remains sucper nmax minb could recalculated multiplying dif technique increases number potential forwarders solves preselected 
forwarder rebroadcast failure determining boundaries last segment must set dynamically depending channel status network topology available would pointless segment contain enough number vehicles forwarding time determining number sufficient vehicles located last segment must also depend channel status network topology sender vehicle information required analyze channel draw network topology compute boundaries cbb pcbb protocols proposed cbb protocol depends selection boundaries last segment based number vehicles located segment number segments network suggested number segments segments computing segments boundaries could done using equations equation assigns distance sender farthest vehicle forwarder boundary last segment equation computes length segment equation finds location minimum boundary minb minimum boundaries borders last segment starts nmax maximum number segments dif length segment pcbb enhancement cbb works sender vehicle analyzes dense locations vehicles along transmission range figure vehicle analyzes location density network form groups dense locations resulting network divided several groups figure figure vehicle analysis location density means vehicles located area minb maxb sender considered potential forwarders emergency message rebroadcast vehicle makes rebroadcast sometimes last segment may insufficient number vehicles number potential forwarders must threshold determine sufficient figure analyzed network depending network density progress dense concentrated area number vehicles within small area thus sending message vehicles concentrated areas increases chance receiving rebroadcasting message probability receiving rebroadcasted messages segment also high thus eliminating hidden node problem considered one difficult problems encountered rebroadcasting emergency message vanet equation used calculate vehicles compute dense locations progress represents upper bound last segment length segment also distance farthest vehicle segment first vehicle located segment example segment progressed vehicles meters location sender number could generated tested using equation sucper success percentage last segment must fulfill agreeing values maxb minb nein total number neighbor vehicles sucper nmax means last segment holds enough number potential forwarder vehicles result subtracted one vehicle last segment also holds preselected potential forwarder sucper nmax means area last wcsit segments higher progress high number vehicles smaller segments give higher fitness function maxb highest boundary border last segment minb minimum boundary borders segment starts performing equation vehicle inserts progress list helps vehicle making quicker analysis decisions table means vehicles located area minb maxb sender considered potential forwarders emergency message would wait rebroadcast case vehicle forwards message sender decides broadcast emergency message examine number neighbors within back end coverage area number one protocol could carried number neighbors zero sender broadcasts message without specifying forwarder number neighbors equal one sender broadcasts message specifies forwarder without adding detail boundaries table progress list progress length segment vehicles fitness function compute contention time equation performed vehicles largest distance sender shortest contention time wait testing channel rebroadcast vehicle tests progress sender dividing current position maximum distance computed sender result equation gives waiting time contending vehicles inside last segment giving 
opportunity vehicles inside last segment recover failure chosen forwarder thus protocol increases probability resending emergency message consequently increasing percentage sending emergency message reaching longer distances time performing equation vehicles segments sender vehicle takes upper boundary segment scores higher fitness function pso optimization applied fitv lbestv pbestv lbestv gbestv lbestv lbestv pbestv fitv random number rand random number pbest last lbest computed vehicle inertia weight particles random random two uniformly distributed random numbers range specific parameters control relative effect individual global best particles enabling last segment contend eliminates hidden node problem potential forwarders high probabilities sense rebroadcasted message forwarder resends message potential forwarders located small limited area probability reception reach short distances could low half distance communication range lbest vehicle obtained fitness function computed using represents best area segment dimension results indicate sufficient number vehicles depending sender analysis pbest previous fitness function computed vehicle gbest best fitness function computed vehicle obtained analysis information crnt obtained previous paper crnt gives extended information received vehicles located neighborhood sender reduces error possibility vehicle might make channel dense location analysis pso depends taking neighbor information history crnt vehicle conclude another global fitness function neighboring vehicles analysis influences current analysis done current vehicle sending steps sender dispatches emergency message warning vehicles potential danger sender analyzes danger selects code sender creates neighbors selects next forwarder depending distance farthest vehicle sender analyzes dense location computing fitness function using equation sender analyzes information gained neighbors dense locations crnt concludes gbest pso algorithm applied obtain minb represents lower bound segment sender creates message inserts values derived steps message broadcasts message network compute boundaries last segment sender carries following equations following illustrates calculations using equation computes fitness function segment formula ensures vehicles high progress sender large number vehicles small area produce better fitness function vehicles concentrated small area located wcsit far sender vehicle better opportunities rebroadcast emergency message little chance failure contention time segment vehicle wait checking system see emergency message rebroadcasted vehicle tslot system time slot table employing equation receiving emergency message steps section represents steps receiving emergency message must done efficiently receiver accepts message checks code receiver also checks message received forwarder rebroadcasts message immediately receiver calculates distance current position sender tests current location falls within minb maxb receiver prepares forward message best value fitness function value means lbest represents lower boundary segment gbest taken crnt protocol provides sender vehicle information vehicles information also enables sender analyze channel depending information vehicle giving accurate data network rebroadcasting steps rebroadcasting job assigned limited number vehicles appropriate assign job receiver vehicles lead broadcast storm problem hidden node problem first forwarder first candidate selected sender forwarding steps follows pbest channel analysis history network 
made sender vehicle lbest boundary fitness function gbest best analysis neighbors lbest apply pso equation lbest forwarder waits random back time depending contention window back time used avoid channel collision forwarder senses channel tests whether message transmitted others vehicle rebroadcasts message forwarder reserves channel forwarder broadcasts message contention window first candidate forwarder lbest minb example maxb obtained equation results imply vehicles sender considered potential forwarders would contend means time rebroadcast able overcome preselected forwarder failure vehicle inside last segment required contend trying send emergency message could done using following steps forwarder computes contention window using equation vehicle contends random back time depending contention window vehicle senses channel determines whether message transmitted others vehicle rebroadcasts message vehicle reserves channel broadcasts message receiving message vehicles vanet receive messages time messages could beacon emergency service messages message analyzed determine level importance involved message holding safety critical information message code message given higher priority processing receiver vehicle also ensure message received eliminate duplication receiver checks forwarder current receiver forwarder receiver rebroadcasts message immediately receiver forwarder must compute distance position sender determine receiver located within last segment receiver located last nonempty segment starts contending prepares rebroadcast vehicle inside last segment must contend time compute time waiting time using equation depends progress vehicles largest progress sender shortest contention time testing channel make rebroadcast pcbb cbb goal increasing percentage reception emergency information main difference however potential forwarder selection cbb depends choosing last segment boundaries depending number vehicles segment predefined threshold whereas pcbb depends selecting boundaries vehicle saturation areas utilizes pso intelligent technique simulation simulation setup order test correctness protocol made simulation using commercial program distribution used nakagami distribution wcsit parameters used simulation summarized table simulations paper adopt parameters pcbb achieve better performance past first distance representing dsrc communication range sender signal gets weak last meters sender communication range forwarder rebroadcasts message signal becomes stronger reaches greater distances made simulation including vehicles road consisting lanes simulation parameters another difference emdv several tries cbb pcbb never fail rebroadcast emergency message emdv sometimes fails proving effectiveness cbb protocol pcbb protocol also shown select forwarders carefully cbb depends threshold pcbb depends traffic saturation progress analysis made neighboring vehicles furthermore pso adopts pso intelligent algorithm takes vehicles analysis consideration allowing accurate selection preselected forwarders parameters used simulation experiment summarized table simulations paper adopt parameters table simulation configuration parameters parameter radio propagation model value ieee data rate plcp header length symbol duration noise floor snr min max slot time sifs time difs time message size beacon message rate number vehicles road length car speed simulation time road type number lanes neighbor entry size bytes message highway lanes bytes description model fixed value recommended fixed value fixed value 
fixed value fixed value adjustable add noise signal fixed value fixed value fixed value fixed value fixed value fixed value fixed value fixed value fixed value fixed value fixed value fixed value fixed value fixed value figure probability message reception emergency message respect distance sender figure shows message delay cbb emdv compared pcbb emergency message delay simulation computes delay broadcasting rebroadcasting original message showing emdv time slightly higher delay cbb exceeding delay shows slight increase away sender rebroadcast starts take effect cbb shorter delay starting point means rebroadcast efficiency decisions made faster emdv pcbb slightly shorter delay shorter cbb shorter emdv ninth second pso intelligent technique quick performance response safety systems highly mobile network like vanet microseconds critical saving life avoiding danger results order enhance emergency message dissemination vanet two protocols proposed implemented namely cbb pcbb enhancement proposed cbb section emdv protocol outcome project compared cbb pcbb results experiment shown figures test performed concentrated probability emergency message reception channel collision delay protocols may cause emdv dfpav protocols widely used vanet today results project collaboration karlsruhe university figure shows simulation results proposed cbb pcbb protocols simulated tested terms probability emergency message reception afterwards performances compared emdv protocol results show protocols increase performance probability emergency message reception noteworthy fact cbb wcsit references figure delay measured sending emergency message respect distance figure shows collision produced three protocols generated collision broadcasting emergency information worth noting collisions produced cbb pcbb emdv beginning experiment increase however period time sending large number emergency messages resulted increase number collision three protocols difference reaching ninth second figure collision measured sending emergency message conclusion research proposed pcbb aiming improve road safety achieving fast efficient emergency message transmission delivery utilizing efficient newest intelligent technique pso helped make accurate analysis performance increased percentage emergency message reception without affecting channel collision ghassan samara wafaa sures security issues challenges vehicular hoc networks vanet international conference new trends information science service science niss ghassan samara wafaa sures security analysis vehicular hoc nerworks vanet second international conference network applications protocols services netapps world health organization http visited april raya papadimitratos aad hubaux certificate revocation vehicular networks laboratory computer communications applications lca school computer communication sciences epfl switzerland worldometers real time world statistics visited april ghassan samara waha alsalihy ramadass increase emergency message reception vanet journal applied sciences volume pages ghassan samara wafaa alsalihy sureswaran ramadass increasing network visibility using coded repitition beacon piggybacking world applied sciences journal wasj volume number street broadcast smart relay emergency messages vanet international conference advanced information networking applications workshops waina ieee qiong lianfeng broadcast scheme propagation emergency messages vanet ieee international conference communication technology icct ieee biswas tatchikou dion wireless 
communication protocols enhancing highway traffic safety ieee communications magazine ieee communications assessing information dissemination safety constraints annual conference wireless demand network systems services wons ieee mittag santi hartenstein communication fair transmit power control information transactions vehicular technology ieee communications achieving safety distributed wireless systems protocols paper universitatsverlag karlsruhe isbn widmer kasemann mauve hartenstein forwarding mobile hoc networks hoc networks briesemeister schafers hommel disseminating messages among highly mobile hosts based communication ieee intelligent vehicles symposium ieee korkmaz ekici urban broadcast protocol communication systems acm international workshop vehicular hoc networks acm fasolo zanella zorzi effective broadcast scheme alert message propagation vehicular hoc networks ieee int conf communications ieee zorzi rao geographic random forwarding geraf hoc sensor networks energy latency performance ieee transactions mobile computing ieee ieee white paper dsrc technology dsrc industry consortium dic prototype team wcsit neo project http malaga university visited march tseng chen sheu broadcast storm problem mobile hoc network annual international conference mobile computing networking acm network wheels project http accessed may mendes population topologies influence particle swarm performance phd thesis universidade minho wait contention time channel idle rebroadcast rebroadcast end else backoff backoff end end end end end end procedure appendix procedure detectdanger gather neighbor information select main forwarder maxb forwarderlocation senderlocation minb pso insert emergency information message send message end procedure procedure rebroadcast rebroadcast emergency message end procedure procedure computevehicleconcentration mindist computemindist arrange descending size take average two successive vehicles average last average number vehicles compared vehicle first vehicle add current segment add current vehicle segment segment vehicles segment vehicles increase number vehicles vehicle location location takes location current vehicle segment segment count dist vehicle location first element location compute width procedure pso currentsegment computevehicleconcentration compute current concentration vehicles pbest cfitness cfitness currentsegment size calculate best result fitness function fitness cfitness cfitness fitness end end lbest cfitness segment segment segment count segment vehicles segment vehicles store number vehicles neighborsegment computevehicleconcentration compute current concentration vehicles gfitness currentsegment size calculate best result current segment else new average double previous value segment count segment count vehicle location location segment vehicles segment segment count progress vehicle location return segment fitness function fitness cfitness gfitness fitness end end gbest gfitness end procedure procedure receiveemermessage code code preselctedforwarder rebroadcast else candidate forwarder compute choose random backoff back figure particle swarm optimization contention based broadcast protocol pcbb procedure detectdanger works vehicle detects danger first step sender order neighbors information select first forwarder afterwards calls pso procedure implements pso algorithm select vehicles overcome preselected forwarder failure wcsit procedure receiveemermessage works vehicle receives emergency message checks receiver forwarder procedure 
computeVehicleConcentration analyzes the neighbors to discover the location and concentration of the vehicles. | 9
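The detectDanger/PSO pseudocode above selects forwarders from the densest road segments. As a hedged illustration only (the example positions, the doubling threshold for opening a new segment, and the density-based fitness are assumptions of this sketch, not the exact PCBB parameters), the following Python fragment reproduces the segmentation idea of computeVehicleConcentration together with a simple fitness that prefers the densest segment:

def segment_vehicles(positions):
    # Group one-dimensional vehicle positions into road segments: a new
    # segment starts when the gap to the next vehicle exceeds twice the
    # running average gap (cf. the computeVehicleConcentration procedure).
    pts = sorted(positions)
    segments = [[pts[0]]]
    avg_gap = None
    for prev, cur in zip(pts, pts[1:]):
        gap = cur - prev
        if avg_gap is None:
            avg_gap = gap
        if gap > 2 * avg_gap:
            segments.append([cur])          # large gap: open a new segment
        else:
            segments[-1].append(cur)
        avg_gap = (avg_gap + gap) / 2.0     # running average of successive gaps
    return segments

def fitness(segment):
    # Assumed fitness: vehicles per unit of segment width (denser is better).
    width = (max(segment) - min(segment)) or 1.0
    return len(segment) / width

if __name__ == "__main__":
    neighbours = [12, 15, 18, 60, 63, 65, 66, 68]   # illustrative positions (m)
    segs = segment_vehicles(neighbours)
    best = max(segs, key=fitness)
    print("segments:", segs)
    print("densest segment (candidate forwarders):", best)

In PCBB the vehicles of the winning segment would then serve as the backup forwarders that the PSO step selects to overcome a failure of the preselected forwarder.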
generating ideals defining unions schubert varieties may anna bertiger bstract note computes basis ideal defining union schubert varieties precisely computes basis unions schemes given northwest rank conditions space matrices fixed size schemes given northwest rank conditions include classical determinantal varieties matrix schubert schubert varieties lifted flag manifold space matrices ntroduction compute basis hence ideal generating set ideal defining union schemes given northwest rank conditions respect antidiagonal term scheme defined northwest rank conditions scheme whose defining equations form minors northwest matrix variables take varying values schemes represent generalization classical determinantal varieties defining equations minors matrix variables one geometrically important collection schemes defined northwest rank conditions set matrix schubert varieties matrix schubert varieties closures lift schubert varieties complete flag manifold matrix space general matrix schubert variety partial permutation subvariety matrix space given rank conditions northwest must rank number northwest partial permutation matrix notice set matrix schubert varieties contains set classical determinantal varieties zero locus minors fixed size space matrices fixed size matrix schubert varieties associated honest permutations closures lifts corresponding schubert varieties flag manifold matrix schubert variety honest permutation projection full rank matrices gln sends gln onto schubert variety schubert varieties orbits stratify give basis application led introduction matrix schubert varieties knutson miller showed matrix schubert varieties rich structure corresponding beautiful combinatorics fulton generators basis respect antidiagonal term order initial ideal ideal pipe dream knutson miller show pipe dream complex shellable hence original ideal pipe dreams elements pipe dream complex originally called graphs developed bergeron billey describe monomials polynomial representatives classes corresponding schubert varieties importance schubert varieties hence matrix schubert varieties areas geometry become increasing evident example zelevinsky showed certain quiver varieties sequences vector space maps fixed rank conditions isomorphic date may schubert varieties knutson miller shimozono produce combinatorial formulae quiver varieties using many combinatorial tools reminiscent schubert varieties notation background much background surveyed found let respectively denote group invertible lower triangular respectively upper triangular matrices let matrix variables follows possibly partial permutation written notation entries undefined written shall write permutation even mean partial permutation cases confusion matrix schubert variety closure affine space matrices permutation matrix act downward row rightward column operations respectively notice honest permutation closure lift space matrices rothe diagram permutation found looking permutation matrix crossing cells weakly cells weakly right cell containing remaining empty boxes form rothe diagram essential boxes permutation boxes rothe diagram boxes diagram immediately south east rothe diagrams given figure cases essential boxes marked letter igure rothe diagrams essential sets left right rank matrix permutation denoted gives cell rank permutation matrix example rank matrix theorem matrix schubert varieties radical ideal given determinants representing conditions given rank matrix determinants northwest matrix variables fact sufficient impose rank conditions 
essential box hereafter call determinants corresponding essential rank conditions analogous determinants ideal generated northwest rank conditions fulton generators one special form ideal generating set basis define basis set total ordering monomials polynomial ring implies monomials let init denote largest monomial appears polynomial basis ideal set init hinit hinit init notice basis necessarily generating set antidiagonal matrix diagonal series cells matrix running northeast southwest cell antidiagonal term antidiagonal determinant product entries antidiagonal example antidiagonal cells occupied correspondingly determinant antidiagonal term term orders select antidiagonal terms determinant called antidiagonal term orders proven especially useful understanding ideals matrix schubert varieties several possible implementations antidiagonal term order matrix variables would suit purposes paper one example weighting top right entry highest decreasing along top row starting deceasing right next row monomials ordered total weight theorem fulton generators form basis antidiagonal term order typically denote cells matrix form antidiagonals follows antidiagonal use notation det denote determinant shall fairly liberal exchanging antidiagonal cells corresponding antidiagonal terms thus antidiagonal term order init det statement result let ideals defined northwest rank conditions produce basis hence ideal generating set list antidiagonals antidiagonal fulton generator produce basis element generators products determinants though simply product determinants corresponding fixed list antidiagonals build generator begin draw diagram dot color box connect consecutive dots color line segment color break diagram connected components two dots connected either connected lines connected lines dots occupy box connected component remove longest series boxes exactly one box row column boxes connected component tie use northwest longest series boxes note need multiply det remove antidiagonal diagram connected component break remaining diagram components repeat theorem antidiagonal fulton generator form basis hence generating set acknowledgements work constitutes portion phd thesis completed cornell university direction allen knutson wish thank allen help advice encouragement completing project thanks also jenna rajchgot helpful discussions early stages work also like thank authors computer algebra system powered computational experiments nessecary work especially grateful mike stillman patiently answered many questions course work kevin purbhoo gave helpful comments drafts manuscript thank enough xamples delay proof theorem section first give examples generators produced given sets antidiagonals examples given pictures antidiagonals left corresponding determinantal equations right note give particular generators rather entire generating sets might quite large give entire ideal generating sets two smaller intersections fulton generator antidiagonal algorithm produces generator det therefore intersect one ideal algorithm returns original set fulton generators generator antidiagonal shown exactly determinant one antidiagonal pictured generator two disjoint antidiagonals product determinants corresponding two disjoint antidiagonals general disjoint antidiagonals algorithm looks separately part separate components result det det overlap form one antidiagonal last step algorithm occur produce det example example two longest possible antidiagonals three cells occupied green dots three cells occupied red dots ones 
occupied green dots northwest hence generator three antidiagonals shown picture longest possible anti diagonal uses cells green anti diagonal cells red antidiagonal however one possible longest antidiagonal thus generator give two examples complete ideals comparatively small firstly calculate antidiagonals corresponding generators shown antidiagonals generators shown red antidiagonals generators shown blue note antidiagonals one cell case theorem results slightly larger example consider generators given order antidiagonals displayed reading left right top bottom antidiagonals shown red antidigaonals shown blue note full grid displayed northwest portion antidiagonals two ideals may lie theorem produces roof heorem prove main result paper theorem states generate begin fairly general statements theorem knu ideals generated northwest rank conditions init init lemma homogeneous ideals polynomial ring init init lemma let ideals define schemes northwest rank conditions let det det determinants antidiagonals respectively det proof let det varieties corresponding ideals hdet enough show show given matrix antidiagonal antidiagonal northwest cells occupied rank length full matrix rank length corresponding statement antidiagonal proven replacing everywhere basic idea proof know rank conditions rows columns northwest occupied rank conditions given imply rank conditions adding either row column increase rank one column column rank northwest row column rank northwest column row rank row northwest column row row igure proof lemma antidiagonal cells marked black antidiagonal cells marked white let number rows also number columns antidiagonal let length rank condition rows columns northwest occupied assume rightmost column leftmost column notice implies bottom row occupied antidiagonal element column row thus northwest matrices rank notice equality occupies continuous set columns matrices rank northwest adding columns gives rank principle moving rows northwest whole matrix antidiagonal rank hence rank visual explanation proof lemma see figure lemma hence ranges antidiagonals fulton generators proof fix let first antidiagonal containing box occupied box contained added shall show det hence multiple det det either case apply lemma otherwise weakly northwest therefore subset weakly northwest hence antidiagonal determinant lemma det lemma init antidiagonal term order proof init product determinants collective antidiagonals combine lemma theorem see init lemmas combine complete proof theorem note theorem may produce oversupply generators example inputting set fulton generators twice results basis polynomials eferences nantel bergeron sara billey schubert polynomials experiment math william fulton flags schubert polynomials degeneracy loci determinantal formulas duke math daniel grayson michael stillman software system research algebraic geometry available http allen knutson ezra miller geometry schubert polynomials ann math allen knutson ezra miller mark shimozono four positive formulae type quiver polynomials invent math knu allen knutson frobenius splitting degeneration preprint ezra miller bernd sturmfels combinatorial commutative algebra graduate texts mathematics vol new york two remarks graded nilpotent classes uspekhi mat nauk | 0 |
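The generators above are indexed by antidiagonals drawn from Fulton's essential rank conditions. As a small self-contained illustration (the permutation chosen and the 0-based indexing are assumptions of this sketch, not content taken from the paper), the following Python fragment computes the rank matrix, Rothe diagram and essential set of a permutation, i.e. the data that determines which northwest minors appear among the Fulton generators:

def rank_matrix(w):
    # r[i][j] = number of k <= i with w(k) <= j (0-based indices, 1-based values).
    n = len(w)
    return [[sum(1 for k in range(i + 1) if w[k] <= j + 1)
             for j in range(n)] for i in range(n)]

def rothe_diagram(w):
    # Boxes (i, j) with w(i) > j+1 and w^{-1}(j+1) > i+1, in 0-based coordinates.
    n = len(w)
    winv = {w[k]: k + 1 for k in range(n)}          # 1-based inverse permutation
    return {(i, j) for i in range(n) for j in range(n)
            if w[i] > j + 1 and winv[j + 1] > i + 1}

def essential_set(w):
    # Diagram boxes with no diagram box immediately south or east of them.
    D = rothe_diagram(w)
    return sorted((i, j) for (i, j) in D
                  if (i + 1, j) not in D and (i, j + 1) not in D)

if __name__ == "__main__":
    w = [2, 4, 1, 3]                                 # w(1)=2, w(2)=4, w(3)=1, w(4)=3
    r = rank_matrix(w)
    for (i, j) in essential_set(w):
        # Fulton: all (r+1)-minors of the northwest (i+1) x (j+1) corner of the
        # generic matrix vanish on the matrix Schubert variety for w.
        print(f"essential box ({i+1},{j+1}), rank condition r = {r[i][j]}")

For w = 2413 this reports the essential boxes (2,1) and (2,3) with rank conditions 0 and 1, so the Fulton generators consist of the entries of the northwest 2x1 corner together with the 2x2 antidiagonal-ordered minors of the northwest 2x3 corner.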
log randomized algorithm problem oct wenbin abstract paper show log randomized algorithm problem metric space points improved previous best competitive ratio log log nikhil bansal focs pages keywords problem online algorithm method randomized algorithm introduction problem schedule mobile servers serve sequence requests metric space minimum possible movement distance manasse introduced ksever problem generalization several important online problems paging caching problems conference version proposed algorithm problem algorithm sever problem metric space still showed deterministic online algorithm problem competitive ratio least proposed conjecture problem metric space different points exists deterministic online algorithm competitive ratio shown conjecture holds two special cases conjecture also holds problem uniform metric special case problem uniform metric called paging also known caching problem slator tarjan proposed algorithm paging problem special metrics line tree existed online algorithms yair bartal email department computer science guangzhou university china state key laboratory novel software technology nanjing university china elias koutsoupias show work function algorithm problem kcompetitive ratio following special metric spaces line star metric space points marek chrobak lawrence larmore proposed algorithm problem trees problem general metric space conjecture remain open fiat first show exists online algorithm competitive ratio depends metric space competitive ratio bound improved later grove showed harmonic algorithm competitive ratio result improved log grove significant progress achieved koutsoupias papadimitriou proved work function algorithm competitive ratio generally people believe randomized online algorithms produce better competitive ratio deterministic counterparts example several log algorithms paging problem log lower bound competitive ratio although much work log lower bound still best lower bound randomized case recently bansal propose first randomized algorithm problem general metrics spcace randomized algorithm competitive ratio log log metric space points improves deterministic competitive ratio koutsoupias papadimitriou whenever problem general metric space widely conjectured log randomized algorithm called randomized conjecture paging problem corresponds problem uniform metric log algorithms weighted paging problem corresponds problem weighted star metric space also log algorithms via online method extensive literature problem found paper show exists randomized algorithm log competitive ratio metric space points improved previous best competitive ratio log log nikhil bansal order get results use online method developed buchbinder naor recent years buchbinder naor used method design online algorithms many online problems covering packing problems problem first propose formulation fraction problem weighted hierarchical tree hst design log online algorithm fraction problem weighted hst depth since hst leaves transformed weighted hst depth log leaf leaf distance distorted constant thus get log log online algorithm fraction problem hst based known relationship fraction problem randomized problem get log log randomized algorithm problem hst points metric embedding theory get log randomized algorithm problem metric space points preliminaries section give basic definitions definition competitive ratio adapted deterministic online algorithm dalg call exists constant request sequence costdalg costop costdalg costop costs online algorithm dalg best offline 
algorithm respectively randomized online algorithm similar definition competitive ratio definition adapted randomized online algorithm ralg call rcompetitive exists constant request sequence costralg costop costralg expected cost randomized online algorithm ralg order analyze randomized algorithms problem introduce fractional problem fractional problem severs viewed fractional entities opposed units online algorithm move fractions servers requested point definition fractional problem adapted suppose metric space total fractional severs located points metric space given sequence requests request must served providing one unit server requested point moving fractional servers requested point cost algorithm servicing sequence requests cumulative sum distance incurred sever moving fraction server distance costs bartal introduce definition hierarchical tree hst general metric embedded probability distribution internal node distance parent node times distance child node number called stretch hst hst stretch called following give formal definition definition hierarchically trees hsts tree rooted tree whose edges length function satisfies following properties node two children node parent child two leaves fakcharoenphol showed following result lemma randomized algorithm problem requests leaves exists log competitive randomized online algorithm problem metric space points still need definition weighted hierarchically tree introduced definition weighted hierarchically trees weighted hsts weighted rooted tree satisfying property definition property node leaf root parent child banasl show arbitrary depth leaves embedded log depth weighted constant distortion described follows lemma let leaves possibly arbitrary depth transformed weighted depth log leaves factor leaf leaf distance distorted randomized algorithm problem hst paper view problem weighed caching problem cost evicting page cache using another page satisfies triangle inequality point viewed page set points served severs viewed cache holds pages distance two points viewed cost evicting corresponding page cache using corresponding page let denotes set pages denotes cost evicting page cache using page satisfies triangle inequality pages let requested pages sequence time requested page time time step requested page already cache cost produced otherwise page must fetched cache evicting pages cache cost produced section order clearly describe algorithm design idea consider case first give notations let denote hierarchically trees stretch factor let number nodes leaves let denote depth node let denote root node thus leaf let denote depth let denote parent node node denote set children node let denote distance root child denote distance node parent easy know let denote subtree rooted denote set leaves let denote number leaves leaf let denote ancestor node depth thus root time let variable xpi denote fraction pthe cache upi denote fraction pof cache obviously xpi upi node let total fraction pages subtree cache easy see suppose time set initial pages cache pik time request arrives page fetched mass cache evicting page cache evicting cost metric suppose path first common ancestor node definition thus evicting cost cost incurred time since page evicting max thus give formulation fractional problem follows minimize upt subject subtree node leaf node leaf node first primal constraintp states pat time take set vertices total number pages cache lease variables denote total fraction mass pages moved subtree obviously needed define variable root node fourth 
fifth constraints enforce initial pages cache pik first term object function sum moved cost cache second term enforces requirement page must cache time upt dual formulation follows maximize subject subtree dual formulation variable corresponds constraint type variable corresponds constraint type variable corresponds constraint type based formulation extend design idea bansal primaldual algorithm metric task system problem problem design idea online algorithm described follows execution algorithm always maintains following relation primal variable dual request arrives variable exp time page gradually fetched cache pages gradually moved cache rates completely fetched cache upt decreased rate increased rate upt becomes viewed move mass upt leaf ancestor nodes distribute leaves order compute exact distributed amount page online algorithm maintain following invariants satisfying dual constraints tight dual constraints type leaves node identity property holds node give clearer description online algorithm process time request arrives initially set upt upt upt nothing thus primal cost dual profit zero invariants continue hold upt start increase variable rate step would like keep dual constraints tight maintain node identity property however increasing variable violates dual constraints leaves hence increase dual variables order keep dual constraints tight increasing variables may also violate node identity property makes update dual variables process results moving initial upt mass leaf leaves stop updating process upt become following compute exact rate move mass upt ancestor nodes time leaves space limit put proofs following claims appendix first show one property function lemma duv dbv proof since claim exp take derivative get order maintain node identity property node time increased decreased also required increase decrease children rate connection rates given lemma node increase variable rate following equality dbw dbv need one special case lemma variable increased decreased rate required increasing decreasing rate children lemma get lemma node assume increase decrease variable rate increasing decreasing rate order keep node identity property set increasing decreasing rate child follows dbw repeatedly applying lemma get following corollary corollary node path leaf increased decreased rate increasing decreasing rate children still require following special case lemma let first child node assume increased decreased rate rate increasing decreasing every unchanged following claim hold lemma let children node assume increase decrease rate also increase rate let wdh wdh would like maintain amount unchanged theorem request arrives time order keep dual constraints tight node identity property increased rate decrease every rate dba das ptk sibling increase following rate dbw das thus design online algorithm fractional problem follows see algorithm time set set time request arrives initially set initialized upt nothing otherwise following let since upt increasing rate decrease every rate dba das ptk sibling increase following rate dbw das node path leaf child algorithm online algorithm fractional problem theorem online algorithm fractional problem competitive ratio duru study relationship fractional version randomized version problem given follows lemma fractional problem equivalent randomized problem line circle arbitrary metric spaces thus get following conclusion theorem randomized algorithm competitive ratio problem lemma get following conclusion theorem log competitive randomized algorithm 
problem metric space log fractional algorithm problem weighted hst depth section first give log fractional algorithm problem weighted depth give another notations weighted hst let weighted node whose depth let child definition weighted node leaf call full node definition full node otherwise call node let set children node node node let denote set leaf nodes let let denotes path root node exists first common ancestor call common ancestor node let denote set common ancestor nodes suppose thatp node thus full node easy know let formulation fractional problem weighted hst hst section based formulation design idea online algorithm similar design idea section execution algorithm keeps following relation primal variable dual variable exp relation determines much mass upt gradually moved leaf distributed among leaves completely fetched cache upt thus time algorithm maintains distribution upn leaves order compute exact rate move mass upt ancestor nodes time leaves weighted using similar argument section get following several claims space limit put proofs appendix lemma duv dbv proof since claim exp take derivative get lemma node increase variable rate following equality dbw lemma node assume increase decrease variable rate increasing decreasing rate order keep node identity property set increasing decreasing rate child follows dbw repeatedly applying lemma get following corollary corollary node path leaf increased decreased rate increasing decreasing rate children lemma let children node node assume increase decrease rate also increase rate would like maintain amount unchanged let wdh wdh theorem request arrives time order keep dual constraints tight node identity property increased rate decrease every rate dba das sibling increase following rate dbw das thus design online algorithm fractional problem weighted follows see algorithm theorem online algorithm fractional problem weighted depth competitive ratio lemma get theorem exists log log fractional algorithm problem nikhil bansal show following conclusion time set time request arrives initially set initialized upt nothing otherwise following let suppose upt increasing rate decrease every rate dba das sibling increase following rate dbw das node path leaf wdh vdh reaches value update set ancestor node algorithm online algorithm fractional problem weighted lemma let online fractional algorithm converted randomized algorithm factor loss competitive ratio thus get following conclusion theorem theorem let randomized algorithm problem competitive ratio log log lemma get following conclusion theorem metric space randomized algorithm problem competitive ratio log conclusion paper metric space points show exist randomized algorithm log ratio problem improved previous best competitive ratio log log acknowledgments would like thank anonymous referees careful readings manuscripts many useful suggestions wenbin chen research partly supported national natural science foundation china nsfc grant research projects guangzhou education bureau grant project state key laboratory novel software technology nanjing university references dimitris achlioptas marek chrobak john noga competitive analysis randomized paging algorithms theoretical computer science avrim blum carl burch adam kalai paging proceedings annual symposium foundations computer science page nikhil bansal niv buchbinder aleksander madry joseph naor polylogarithmiccompetitive algorithm problem focs pages nikhil bansal niv buchbinder joseph seffi naor randomized algorithm weighted paging proceedings 
annual ieee symposium foundations computer science pages buchbinder jain naor online algorithms maximizing revenue proc european symp algorithms esa buchbinder naor online algorithms covering packing problems proc european symp algorithms esa volume lecture notes comput pages springer buchbinder naor improved bounds online routing packing via approach proc symp foundations computer science pages niv buchbinder joseph naor design competitive online algorithms via approach foundations trends theoretical computer science nikhil bansal niv buchbinder joseph seffi naor towards randomized conjecture approach proceedings annual siam symposium discrete algorithms nikhil bansal niv buchbinder joseph seffi naor metrical task systems ksever problem hsts icalp proceedings international colloquium automata languages programming yair bartal probabilistic approximations metric spaces algorithmic applications proceedings annual ieee symposium foundations computer science pages yair bartal approximating arbitrary metrices tree metrics proceedings annual acm symposium theory computing pages yair bartal bela bollobas manor mendel theorem metric spaces applications metrical task systems related problems proceedings annual ieee symposium foundations computer science pages yair bartal eddie grove harmonic algorithm competitive journal acm yair bartal nathan linial manor mendel assaf naor metric phenomena proceedings annual acm symposium theory computing pages yair bartal elias koutsoupias competitive ratio work function algorithm problem theoretical computer science avrim blum howard karloff yuval rabani michael saks decomposition theorem bounds randomized server problems proceedings annual ieee symposium foundations computer science pages allan borodin ran online computation competitive analysis cambridge university press chrobak larmore optimal algorithm trees siam journal computing meyerson poplawski randomized hierarchical binary trees proceedings annual acm symposium theory computing pages csaba lodha randomized algorithm problem line random structures algorithms jittat fakcharoenphol satish rao kunal talwar tight bound approximating arbitrary metrics tree metrics proceedings annual acm symposium theory computing pages fiat rabani ravid competitive algorithms journal computer system sciences amos fiat richard karp michael luby lyle mcgeoch daniel dominic sleator neal young competitive paging algorithms journal algorithms edward grove harmonic online algorithm competitive proceedings annual acm symposium theory computing pages elias koutsoupias problem computer science review vol pages elias koutsoupias christos papadimitriou conjecture journal acm manasse mcgeoch sleator competitive algorithms online problems proceedings annual acm symposium theory computing pages manasse mcgeoch sleator competitive algorithms server problems journal algorithms lyle mcgeoch daniel sleator strongly competitive randomized paging algorithm algorithmica daniel sleator robert tarjan amortized efficiency list update paging rules communications acm duru problem fractional analysis master thesis university chicago http appendix proofs claims section proof lemma follows proof since required maintain take derivative sides get duv dbv dbv lemma get duw dbw dbw since get dbw dbv dbw proof lemma follows proof lemma increasing decreasing rate get dbv get dbw dbv proof lemma follows proof lemma order keep amount unchanged get dbw thus wdh wdh wdh hence get claim proof theorem follows proof request arrives time move mass upt ancestor 
nodes leaves upt decreased increased since mass moves subtree decreased exp need keep relation algorithm also decreases hand increased thus node whose contain mass also increased node whose contain must sibling node assume siblings node increase rate following compute increasing decreasing rate dual variables decreasing rate regarding let pda regarding let das increasing rate siblings regarding using top method get set equations quantities first consider siblings nodes children root let one siblings raised corollary sum path leaf must since increasing rate forces order maintain dual constraint tight leaves considers dual constraints leaves increasing mass must canceled decreasing mass since mass changed thus order maintain node identity property root lemma must set siblings node use similar argument let sibling consider path leaf dual constraint already grows rate must canceled increasing raised corollary sum path leaf must thus must set increasing mass must canceled decreasing mass order keep node identity property lemma must set continuing method obtain system linear equations maintaining dual constraints tight get following equations keeping node identity property get following equations continue solve system linear equations since get solving recursion get proof theorem follows proof let denote value objective function primal solution denote value objective function dual solution initially let following prove three claims primal solution produced algorithm feasible dual solution produced algorithm feasible three claims weak duality linear programs theorem follows immediately first prove claim follows time since algorithm keeps primal constraints satisfied second prove claim follows theorem dual constraints satisfied obviously dual constraints satisfied node thus dual constraints satisfied third prove claim follows algorithm increases variables time let compute primal cost depth compute movement cost algorithm change follows dbw asj let denote hence total cost levels movement first inequality holds since thus get let cost best offline algorithm pmin optimal primal solution dmax optimal dual solution pmin since feasible solution primal program based weak duality dmax pmin hence pmin pmin dmax pmin min pmin competitive ratio algorithm proofs claims section proof lemma follows proof since required maintain take derivative sides get dbv duv dbv duw dbw lemma get since get dbw dbw dbw proof lemma follows proof lemma increasing decreasing rate get get dbw dbw dbw dbv proof lemma follows proof lemma order keep amount unchanged get dbw thus wdh wdh wdh hence get claim proof theorem follows proof request arrives time move mass upt ancestor nodes leaves nodes upt decreased increased since mass moves subtree decreased exp need keep relation algorithm also decreases hand increased thus node whose contain mass also increased node whose contain must sibling node assume siblings node increase rate following compute increasing decreasing rate dual variables decreasing rate weighted regarding let pda regarding let das increasing rate siblings regarding using top method get set equations quantities first consider siblings nodes children let one siblings raised corollary sum path leaf must since increasing rate forces order maintain dual constraint tight leaf nodes considers dual constraints leaf nodes increasing mass must canceled decreasing mass since mass changed thus order maintain node identity property lemma must set siblings node use similar argument let sibling consider path leaf node dual 
constraint already grows rate must canceled increasing raised corollary sum path leaf must thus must set increasing mass must canceled decreasing mass order keep node identity property lemma must set continuing method obtain system linear equations maintaining dual constraints tight get following equations keeping node identity property get following equations continue solve system linear equations since get solving recursion get proof theorem follows proof let denote value objective function primal solution denote value objective function dual solution initially let following prove three claims primal solution produced algorithm feasible dual solution produced algorithm feasible three claims weak duality linear programs theorem follows immediately proof claim similar claim section third prove claim follows algorithm increases variables time let compute primal cost depth compute movement cost algorithm change follows dbw asj since first inequality holds since reason constraint time satisfied otherwise algorithm stop increasing variable since upt algorithm stop increasing variables addition thus total cost depth hence get competitive ratio algorithm | 8 |
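The fractional HST algorithm above is stated in prose, and its exponential primal-dual updates are easiest to see in the uniform-metric special case (ordinary paging), which is also the weighted-caching view used in the formulation above. The sketch below illustrates only that simpler, standard multiplicative update; the step size, the fetch-cost accounting and the numerical tolerance are assumptions of this sketch, and it is not the paper's weighted-HST algorithm:

def fractional_paging(requests, k, step=1e-3):
    # y[q] is the fraction of page q currently evicted from the cache.
    # On a request, the missing fraction of the page is fetched (and paid for);
    # the other pages are then evicted at a rate proportional to y[q] + 1/k
    # until at most k units of mass remain cached -- the multiplicative rule
    # behind the O(log k) fractional guarantee on a uniform metric.
    y = {}
    cost = 0.0
    for p in requests:
        y.setdefault(p, 1.0)                 # an unseen page starts fully outside
        cost += y[p]                         # fetch the missing fraction of p
        y[p] = 0.0
        while sum(1.0 - v for v in y.values()) > k + 1e-9:
            for q in y:
                if q != p and y[q] < 1.0:
                    y[q] = min(1.0, y[q] + step * (y[q] + 1.0 / k))
    return cost

if __name__ == "__main__":
    reqs = [1, 2, 3, 4, 1, 2, 3, 4, 1, 2]    # cyclic requests over 4 pages
    print("fractional fetch cost, k = 3:", round(fractional_paging(reqs, 3), 3))

On the tree, the same idea is applied level by level: the mass moved out of a subtree is distributed among its leaves while the dual constraints along each root-to-leaf path are kept tight, which is what the coupled rates computed in the lemmas and the theorem above express.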
exact solution smart grid analysis problem ntroduction modern society relies critically proper operation electric power distribution transmission system supervised controlled supervisory control data acquisition scada systems remote terminal units rtus scada systems measure data transmission line power flows bus power injections part bus voltages send state estimator estimate power network states bus voltage phase angles bus voltage magnitudes estimated states used vital power network operations optimal power flow opf dispatch contingency analysis see fig block diagram functionalities malfunctioning operations delay proper reactions control center lead significant social economical consequences northeast blackout technology use scada systems evolved lot since introduced scada systems interconnected office lans connected internet hence today access points scada systems also functionalities tamper example rtus subjected attacks fig communicated data subjected false data attacks furthermore scada master attacked paper focuses cyber security issue related false data attacks communicated metered measurements subjected additive data attacks false data attack potentially lead erroneous state estimates authors access linnaeus center automatic control lab school electrical engineering kth royal institute technology sweden sou hsan kallej work supported european commission viking project swedish research council grant grant knut alice wallenberg foundation power network rtus rtus agc optimal power flow ems sscada masster index network state estimation security operation research optimization methods state estimatorr paper considers smart grid problem analyzing vulnerabilities electric power networks false data attacks analysis problem related constrained cardinality minimization problem main result shows relaxation technique provides exact optimal solution cardinality minimization problem proposed result based polyhedral combinatorics argument different results based mutual coherence restricted isometry property results illustrated benchmarks including ieee systems sscada masster sep kin cheong sou henrik sandberg karl henrik johansson human operator control center fig block diagram power network control center scada rtus connected substations transmit receive data control center using scada system control center state estimate computed used energy management systems ems send commands power network human figures indicate human needed control loop paper considers false data attack scenario state estimator result gross errors opf dispatch turn lead disasters significant social economical consequences false data attack communicated metered measurements considered literature first point coordinated intentional data attack staged without detected state estimation bad data detection bdd algorithm standard part today system investigate construction problem unobservable data attack especially sparse ones involving relatively meters compromise various assumptions network power flow model particular poses attack construction problem cardinality minimization problem find sparsest attack including given set target measurements set similar optimization problems sparsest attack including given measurement seek sparsest nonzero attack finds sparsest attack including exactly two injection measurements solution information optimization problems help network operators identify vulnerabilities network strategically assign protection resources encryption meter measurements best effect hand unobservable data attack 
problem connection another vital ems functionality namely observability analysis particular solving attack construction problem also solve observability analysis problem explained section connection first reported utilized compute sparsest critical integer generalization critical measurements critical sets perform analysis timely manner important solve data attack construction problem efficiently effort discussed instance efficient solution attack construction problem focus paper matching pursuit method employed basis pursuit method relaxation weighted variant employed common efficient approaches suboptimally solve attack construction problem however methods guarantee exact optimal solutions cases might sufficient see instance naive application basis pursuit consequences provide solution procedures respective attack construction problems problems therein different one paper furthermore considered problem paper solved special case particular attack vector contains least one nonzero entry however nonzero entry given priori needs restrict number nonzero injection measurements attacked requirement problem considered paper simple heuristics provided find suboptimal solutions attack construction problem heuristics however might sufficiently accurate closely related current work distinctions elaborated section main conclusion paper basis pursuit relaxation indeed solve data attack construction problem exactly assumption network metering system injection measurements metered limitations assumption discussed section fact main result identifies class cardinality minimization problems basis pursuit provide exact optimal solutions class problems include special case considered data attack construction problem assumption outline section describes state estimation model introduces optimization problems considered paper section iii describes main results paper solution considered optimization problems section compares proposed result related works section provides proof proposed main results section numerically demonstrates advantages proposed results tate stimation yber ecurity nalysis ptimization roblems power network model state estimation power network buses transmission lines described graph nodes edges graph topology specified directed incidence matrix direction edges assigned arbitrarily physical property network described nonsingular diagonal matrix rma whose nonzero entries reciprocals reactance transmission lines states network include bus voltage phase angles bus voltage magnitudes latter typically assumed constant equal one per unit system addition since one arbitrary bus assigned reference zero voltage phase angle network states considered captured vector state estimator estimates states based measurements obtained network power flow model measurement vector denoted related qbdb either vector random error intentional additive data attack truncated incidence matrix row corresponding reference node removed consists subset rows identity matrices appropriate dimension indicating line power flow measurements actually taken together vector power flows transmission lines measured analogously matrix selects bus power injection measurements taken qbdb vector power injections buses measured therefore measurement matrix relating measured power quantities network states number rows denoted measurements network information jointly used find estimate network states denoted assuming network observable wellestablished state estimate obtained using weighted least squares approach chapter chapter positive 
definite diagonal weighting matrix typically weighting accurate measurements state estimate subsequently fed vital scada functionalities opf dispatch therefore accuracy reliability paramount concern detect possible faults measurements bdd test commonly performed see one typical strategy norm residual residual big bdd alarm triggered unobservable data attack security index bdd test general sufficient detect presence contains single random error however face coordinated malicious data attack multiple measurements bdd test fail particular considers unobservable attack form arbitrary since defined would result zero residual unobservable bdd perspective also experimentally verified realistic scada system testbed quantify vulnerability network unobservable attacks introduced notion security index arbitrarily specified measurement security index optimal objective value following cardinality minimization problem minimize subject given indicating security index computed measurement symbol denotes cardinality vector denotes row security index minimum number measurements attacker needs compromise order attack measurement undetected particular small security index particular measurement means order compromise undetected necessary compromise small number additional measurements imply measurement relatively easy compromise unobservable attack result knowledge security indices allows network operator pinpoint security vulnerabilities network better protect network limited resource model case certain measurements protected hence attacked problem becomes minimize subject protection index set given denotes submatrix rows indexed convention constraint ignored hence special case measurement set robustness analysis subject rank rank full column rank following three statements true problem feasible condition satisfied problem feasible conditions satisfied conditions satisfied equivalent see definition section note condition satisfied corresponding measurement removed consideration also since measurement redundancy common practice power networks assumed full column rank therefore conditions proposition justified practice finally note proposition remains true arbitrary matrix necessarily defined iii roblem tatement esult problem statement problem also motivated another important state estimation analysis problem namely observability analysis measurement set described observable uniquely determined important question observability analysis follows minimize given index denotes complement index set rest paper denotes complement meaning follows denotes subset measurements measurement system described condition rank means measurement system becomes unobservable measurements associated lost becomes impossible uniquely determine problem seeks minimum cardinality must include particular given measurement therefore exist measurement leads instance small objective value measurement system robust meter failure special cases extensively studied power system community instance solution label sets cardinalities one two respectively referred critical measurements critical sets containing measurement calculations documented example general cases minimum cardinality solution label set critical contains specified measurement solving solves well justification given following statement inspired proved appendix proposition let given problems denote two conditions discussed previously paper proposes efficient solution security index attack construction problem however proposed result focuses generalization special case special case 
contain injection measurements limitation assumption discussed section main result presented appendix shown special case assumption equivalent minimize subject instead considering directly proposed result pertains general optimization problem associated totally unimodular matrix determinant every square submatrix either particular following problem main focus paper minimize subject given totally unimodular matrix given since incidence matrix totally unimodular matrix therefore generalization however neither includes special cases statement main result theorem let optimal basic feasible solution defined optimal solution remark theorem provides complete procedure solving via standard form problem feasible contains least one basic feasible solution see definition section together fact objective value bounded zero theorem implies problem contains least one optimal basic feasible solution used construct optimal solution according theorem conversely feasible set empty feasible set must also empty feasible solution used construct feasible solution remark ensure optimal basic feasible solution found one exists simplex method chapter used solve proof theorem given section related work reviewed assumption discussed elated ork relaxation problem cardinality minimization problem general efficient algorithms found solving cardinality minimization problems heuristic relaxation based algorithms often considered relaxation basis pursuit relaxation technique received much attention relaxation instead following optimization problem set solved minimize subject objective function vector replaces cardinality problem rewritten linear programming problem standard form minimize denotes cardinality index set feasible solution feasible hence optimal solution exists corresponds suboptimal solution original problem important question conditions suboptimal solution actually optimal answer provided main result based special structure fact matrix totally unimodular subject rationale injection assumption consider case corresponds line power flow measurements definition verified equivalent following minimize subject indicates considered problem relaxation general case utilizes observation obtains satisfactory suboptimal solution alternatively considers indirectly accounting term objective function demonstrates solving following problem provides satisfactory suboptimal solution minimize subject appropriately defined notice form conclusion injection assumption leads introduces limitation need restrictive might appear proposed result theorem still leads based approach obtain suboptimal solutions hence relationship minimum cut based results nevertheless main strength current result lies fact solves problem matrix totally unimodular includes special case corresponding constraint matrix transposed graph incidence matrix distinguishes current work ones specialize solving using minimum cut algorithms one example totally unimodular associated graph matrix consecutive ones property either row column appear consecutively possible application consider networked control system one controller sensor nodes node contains scalar state value constant period time slots nodes need transmit state values shared channel controller node keep transmitting arbitrary period consecutive time slots time slot measurement transmitted controller sum state values transmitting nodes denote vector measurements transmitted time slots vector node state values measurements states related matrix consecutive ones column solving observability problem identify 
vulnerable measurement slots higher priority communication relationship compressed sensing type results problem written form common literature consider case null space empty otherwise rank trivial change decision variable posed minimize subject full rank denotes containing entries corresponding index set written cardinality minimization problem considered instance minimize subject appropriately defined matrix vector subsection restrict discussion standard case feasible full rank matrix columns rows certain conditions regarding optimal solution obtained relaxation known example report sufficient condition based mutual coherence denoted defined max sufficient condition states exists feasible solution sparse enough unique optimal solution relaxation problem replacing another sufficient condition based restricted isometry property rip integer rip constant matrix smallest number satisfying vector sufficient condition states rip constant satisfying necessarily unique optimal solution relaxation shown certain type randomly generated matrices satisfy conditions overwhelming probabilities provides result however conditions might apply focus paper instance consider submatrix transpose incidence matrix power network let corresponding implies therefore sparsity bound becomes restrictive practical similarly rip constants least one hence sufficient condition would applicable either nevertheless failure apply sufficient conditions mean impossible show relaxation exactly solve mutual coherence conditions characterize unique optimal solution exists relaxation paper uniqueness required indeed defined optimal verified inspection using cplex solver matlab solve relaxation leads first optimal solution main contribution paper show case general defined even though optimal solution might unique reason proposed result applicable based polyhedral combinatorics argument different mutual coherence rip based results roof esult definitions proof requires following definitions definition two optimization problems equivalent correspondence instances corresponding instances either infeasible unbounded optimal solutions last case possible construct optimal solution one problem optimal solution problem vice versa addition two problems optimal objective value definition polyhedron subset described linear equality inequality constraints standard form polyhedron associated standard form problem instance specified given matrix vector definition basic solution polyhedron vector satisfying equality constraints addition active constraints linearly independent standard form polyhedron constraint matrix full row rank basic solutions alternatively defined following statement theorem consider polyhedron assume full row rank vector basic solution exists index set det definition basic feasible solution polyhedron basic solution also feasible convention terminology basic feasible solution problem instance understood basic feasible solution polyhedron defines feasible set instance proof two lemmas key proof presented first first lemma states problem set relaxation optimal basic feasible solutions lemma let optimal basic feasible solution holds addition denotes element proof assume feasible set nonempty otherwise basic feasible solution definition following two claims made linear combination rows exists either rows linearly independent addition cases define constraints claims together imply problem written standard form problem constraint matrix full row rank matrix minimize subject identity matrix dimension vector ones see claims first 
note implied feasibility set otherwise exists properties rank linearly independent rows matrix hand matrix hence define constraints shows next step proof show every basic solution entries either denote matrix first columns let square submatrix two columns rows negative det otherwise possibly row column permuted square submatrix assumed totally unimodular hence det totally unimodular next consider matrix defined denote number rows number columns respectively let set column indices square contains columns det since totally unimodular otherwise repeatedly applying laplace expansion columns columns shown det equal determinant square submatrix hence cramer rule following holds solution following system linear equations det theorem together imply nonzero entries basic solutions either therefore basic feasible solutions also basic solutions polyhedron also satisfy integrality property finally let optimal basic feasible solution feasibility nonnegativity implies minimize minimization excludes possibility optimality hence possible define second lemma concerned restricted version infinity norm bound follows minimize lemma optimization problems equivalent proof suppose feasible optimal solution denoted let row index set claimed exists common optimal solution optimal objective value argument follows property implies feasibility denoted variant replaced corollary problem standard form problem least one basic feasible solution furthermore since optimal objective value bounded zero theorem implies optimal basic feasible tion specified lemma denote feasible since also inequality true optimal solution hence optimal objective value conversely suppose infeasible also infeasible concludes equivalent proof theorem let optimal basic feasible solution exist defined lemma particular verified optimal solution following optimization problem minimize subject subject verified equivalent lemma states also equivalent consequently optimal solution implies feasible optimal objective value subject inequalities hold property also optimal solution feasible solution since holds hence optimal solution umerical emonstration demonstration instances restricted security index problem solved identity matrix empty incidence matrix describes topology one following benchmark systems ieee ieee ieee ieee polish polish benchmark solved possible values choices case choices case two solution approaches tested first approach one proposed denoted approach includes following steps set problem solve using solver cplex let optimal solution define optimal solution according theorem second solution approach standard applied also second approach referred approach formulated following problem minimize subject constant required least maximum column sum absolute values entries binary decision variables mixed integer linear programming milp problem solved standard solver cplex correctness approach direct consequence reformulation result approaches guaranteed correctly solve theory fig shows sorted security indices optimal objective values four larger benchmark systems bus bus bus bus bus approach approach bus solve time sec security indices computed using approach comparison security indices also computed using approach shown fig two figures reaffirm theory proposed approach computes security indices exactly fig fig indicates measurement systems relatively insecure exist many measurements low security indices equal bus bus bus bus security index case number fig computing security indices different benchmark systems vii onclusion ranked measurement 
index fig security indices using approach bus bus bus bus security index cardinality minimization problem important general difficult solve example shown paper smart grid security index problem relaxation demonstrated promise establish cases provides exact solutions results based mutual coherence rip provide sufficient conditions unique optimal solution solves cardinality minimization problem relaxation however paper identifies class application motivated problems shown solvable relaxation even though results based mutual coherence rip make assertion fact optimal solution might unique key property leads conclusion paper total unimodularity constraint matrix total unimodularity matrix leads two important consequences equivalent restricted version furthermore solved exactly solving problem thus establishing conclusion relaxation exactly solves ppendix proof equivalence ranked measurement index fig security indices using approach terms computation time performances approach much approach since milp problem much difficult solve problem size fig shows computing security indices benchmark system using approaches verifies proposed approach effective illustration computations performed dualcore windows machine cpu ram note constraint implies since consists rows identity matrix diagonal nonsingular exists diagonal nonsingular matrix particular let dkk positive scalar dkk dkk implies dkk ddd ddd addition dkk ddd finally dkk ddd applying definition change decision variable dkk shows equivalent proof proposition part trivial necessary part condition necessary rank rank meaning infeasible condition also necessary rank exist rank sufficiency part assume conditions satisfied part problem feasible hence optimal solution denoted define definition rank also rank feasible thus showing feasible show first consider case rank rank condition full column rank next consider case rank exists particular also condition implies since otherwise let note also definition construct whenever implies feasible strictly less objective value contradicting optimality therefore claim rank true implies feasible establishing sufficiency part part conditions feasible addition constructed proof sufficiency part satisfies optimal solution means optimal objective function value less equal converse suppose optimal feasibility implies exists also implies implies rank contradicting feasibility therefore exists scalar consequently feasible objective function value less equal optimal objective function value eferences abur power system state estimation marcel dekker monticelli state estimation electric power systems generalized approach kluwer academic publishers liu reiter ning false data injection attacks state estimation electric power grids acm conference computer communication security new york usa sandberg teixeira johansson security indices state estimators power networks first workshop secure control systems cpsweek sandberg stealth attacks protection schemes state estimators power systems ieee smartgridcomm bobba rogers wang khurana nahrstedt overbye detecting false data injection attacks state estimation first workshop secure control systems cpsweek kosut jia thomas tong malicious data attacks smart grid ieee transactions smart grid vol sou sandberg johansson electric power network security analysis via minimum cut relaxation ieee conference decision control december giani bitar mcqueen khargonekar poolla smart grid data integrity attacks characterizations countermeasures ieee smartgridcomm kim poor strategic protection data 
injection attacks power grids ieee transactions smart grid vol june sou sandberg johansson computing critical ktuples power networks ieee transactions power systems vol mallat zhang matching pursuit dictionaries ieee transactions signal processing vol chen donoho saunders atomic decomposition basis pursuit siam journal scientific computing vol teixeira dan sandberg johansson cyber security study scada energy management system stealthy deception attacks state estimator ifac world congress milan italy korres contaxis identification updating minimally dependent sets measurements state estimation ieee transactions power systems vol aug almeida asada garcia identifying critical sets state estimation using gram matrix powertech ieee bucharest ayres haley bad data groups power system state estimation ieee transactions power systems vol clements krumpholz davis power system state estimation residual analysis algorithm using network topology power apparatus systems ieee transactions vol april london alberto bretas network observability identification measurements redundancy level power system technology proceedings powercon international conference vol schrijver course combinatorial optimization cwi amsterdam netherlands online document available http tao decoding linear programming information theory ieee transactions vol tsitsiklis bertsimas introduction linear optimization athena scientific hendrickx johansson jungers sandberg sou exact solution power networks security index problem generalized min cut formulation preparation online available http stoer wagner simple algorithm acm vol july schrijver theory linear integer programming wiley hespanha naghshtabrizi survey recent results networked control systems proceedings ieee vol bemporad heemels johansson networked control systems springer donoho elad optimally sparse representation general nonorthogonal dictionaries via minimization proceedings national academy sciences vol wakin boyd enhancing sparsity reweighted minimization journal fourier analysis applications vol bruckstein donoho elad sparse solutions systems equations sparse modeling signals images siam review vol gribonval nielsen sparse representations unions bases information theory ieee transactions vol restricted isometry property implications compressed sensing comptes rendus mathematique vol wood wollenberg power generation operation control wiley sons cplex http zimmerman thomas matpower operations planning analysis tools power systems research education ieee transacations power systems vol | 5 |
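To make the two solution approaches compared in the row above concrete, here is a minimal sketch of the l1/LP-relaxation route (the first approach in the text), written against scipy rather than the CPLEX models used in the reported experiments. The measurement matrix H, the attacked measurement index k, and the tolerance are illustrative assumptions; recovering the exact security index from the LP optimum relies on the total-unimodularity argument developed in that row and is not guaranteed for arbitrary H.

```python
import numpy as np
from scipy.optimize import linprog

def security_index_l1(H, k, tol=1e-8):
    """l1 relaxation of the security index: minimize ||H c||_1 s.t. (H c)_k = 1.
    Decision vector z = [c; t], with t an elementwise bound |H c| <= t."""
    m, n = H.shape
    cost = np.concatenate([np.zeros(n), np.ones(m)])      # minimize 1^T t
    A_ub = np.block([[ H, -np.eye(m)],                     #  H c - t <= 0
                     [-H, -np.eye(m)]])                    # -H c - t <= 0
    b_ub = np.zeros(2 * m)
    A_eq = np.concatenate([H[k], np.zeros(m)])[None, :]    # normalization (H c)_k = 1
    b_eq = np.array([1.0])
    bounds = [(None, None)] * n + [(0.0, None)] * m        # c free, t nonnegative
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    if not res.success:
        raise RuntimeError("relaxation infeasible for measurement k")
    support = int(np.sum(res.x[n:] > tol))                 # nonzeros of H c at the optimum
    return support, res.x[:n]
```

The MILP variant (the second approach in the text) would instead introduce binary indicator variables and a big-M constant at least the maximum column sum of |H|, which is consistent with the much longer solve times reported for it.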
accelerating learning constructive predictive frameworks successor representation mar craig marlos patrick propose using successor representation accelerate learning constructive knowledge system based general value functions gvfs settings like robotics unstructured dynamic environments infeasible model meaningful aspects system environment hand due complexity size instead robots must capable learning adapting changes environment task incrementally constructing models experience gvfs taken field reinforcement learning way modeling world predictive questions one approach models proposes massive network interconnected interdependent gvfs incrementally added time reasonable expect new incrementally added predictions learned swiftly learning process leverages knowledge gained past experience provides means separating dynamics world prediction targets thus capturing regularities reused across multiple gvfs primary contribution work show using predictions improve sample efficiency learning speed continual learning setting new predictions incrementally added learned time analyze approach demonstrate potential data physical robot arm introduction long standing goal pursuit artificial general intelligence knowledge modeling explaining world agent interaction directly agent experience particularly important fields continual learning developmental robotics expect agents capable learning dynamically incrementally interact succeed complex environments one proposed approach representing world models collection general value functions gvfs models world set predictive questions defined policy interest target signal interest policy timescale discounting schedule accumulating signal interest example gvf mobile robot could pose question much current wheels consume next second drive straight forward gvf questions typically answered using temporaldifference methods field reinforcement learning learned gvf approximates expected future value signal interest directly representing relationship environment policy timescale target signal output single predictive unit university alberta canada sherstan machado pilarski nevertheless despite success algorithms achieved recently methods answering multiple predictive questions single stream experience critical robotic setting known exhibit sample inefficiency setting interest multitudes gvfs learned incremental sample sample way problem multiplied ultimately faster agent learn approximate new gvf better paper show one accelerate learning constructive knowledge system based gvfs sharing environment dynamics across different predictors done successor representation allows learn world dynamics policy independently signal predicted empirically demonstrate effectiveness approach tabular representation robot arm uses function approximation evaluate algorithm continual learning setting possible specify gvfs priori rather gvfs added incrementally course learning key result show using learned enables agent learn newly added gvfs faster learning gvfs standard fashion without use background consider agent interacting environment sequentially use standard notation reinforcement learning literature modeling problem markov decision process starting state timestep agent chooses action according policy distribution transitions state according probability transition transition agent receives reward reward function paper focus prediction problem agent goal predict value signal current state cumulative sum future rewards note throughout paper use upper case letters indicate random 
variables general value functions gvfs common prediction made expected return return defined sum future discounted rewardspunder policy starting state formally discount factor final timestep denotes continuing task function encoding prediction return known value function gvfs extend notion predictions different signals environment done replacing reward signal target signal refer cumulant allowing discounting function instead using fixed discounting factor general value state policy defined average cumulant state average state probability transitioning state policy also written matrix form denotes multiplication equation solved gives identity matrix probability matrix successor representation successor representation initially proposed representation capable capturing state similarity terms time formally defined fixed words encodes expected number times agent visit particular state sum discounted matrix form importantly easily computed incrementally standard algorithms learning since primary modification replace reward signal state visitation counter nevertheless despite simplicity holds important properties leverage paper limit constant see corresponds first factor solution thus seen encoding dynamics markov chain induced policy environment transition probability agent access accurately predict discounted accumulated value signal state simply learning expected immediate value signal state hand agent use agent must also deal problem credit assignment look returns control delayed consequences note dayan describes predicting future state visitation time onward typically describe return predicting signal onward importantly dynamics encoded signals learned policy function factorization solution property use work described discount main next section iii methods aforementioned interested problem knowledge acquisition continual learning setting knowledge encoded predictive questions gvfs setting possible specify gvfs ahead time instead gvfs must added incrementally yet unknown mechanism standard approach would learn newly added prediction scratch section discuss use accelerate learning taking advantage factorization shown method leverages fact independent target signal predicted learning separately learning predict new signals previous section clarity discussed main concepts tabular case real world applications state space large assuming states uniquely identified often feasible instead generally represent states set features function feature vector easily represented using function approximation learned using algorithms order present general version algorithm introduce using function approximation notation first step algorithm compute average cumulant error use linear function approximation estimate error given note vector length generalization function approximation case known successor features use function approximation estimate using usual often used derive stochastic gradient descent update gradient respect outer product based derivation well obtain algorithm note last two lines algorithm update state required episodic case methods effect prediction target ignored computing gradient algorithm table signal primitives gvf prediction input feature representation policy discount function output matrix vectors predictors initialize arbitrarily terminal observe state take action selected according observe next state cumulants cumulant end end algorithm allows predict cumulant state using current estimate matrix weights obtain final prediction simply computing algorithm accelerates learning 
generally learning estimate faster learning gvf directly exactly algorithm predicting new signal starts current estimate end multiplication simply weighted average predictions across states weighted likelihood visited provide empirical evidence supporting claim next sections evaluation dayan grid world first evaluated algorithm tabular grid world simplicity allowed analyze method thoroughly since bounded speed complexity physical robots grid world used inspired dayan see figure four actions available environment left right taking action wall blue results change position transitions deterministic move direction moves agent next cell given direction except moving wall episode agent spawns location episode terminates agent reaches goal generated fifty different signals agent predict generated randomly collection primitives enumerated table composed two different primitives one axis like sigx setx biasx sigy sety biasy bias offset drawn respectively offset bias applied either unit shortest path primitives shortest path primitive combined second signal primitive fixed value square wave sin wave random binary random float unit shortest path parameters value period invert rue alse period fixed random binary string generated length axis fixed random floats generated length axis fixed value transition cost goal reward used gaussian noise standard deviation applied top signal shortest path signal inspired common reward function used transition cost negative reward meant push agent completing task timely manner reaching goal produces positive reward signal agent selects actions using action selection timestep probability uses action specified policy see figure otherwise chooses randomly four actions experiments tabular representation used grid cell uniquely represented encoding set experiments compute groundtruth predictors signal predictors done taking average return observed state reference averaged episodes signal predictor references averaged episodes episode started start state followed policy already described first evaluated predictive performance learning algorithm respect variety values report average trials episodes initialize weights squared euclidean distance calculated predicted reference timestep values summed run average taken across runs averages shown figure using results figure evaluated performance two signal prediction approaches sweeping across various experimental run learning new signal enabled incrementally every episodes produced runs total length episodes first pair gvfs added direct predictors trained episodes last added gvfs trained episodes run order signals added randomized thirty runs performed weights predictors initialized notice learned time direct predictions run cumulative mse signal calculated according equation computes total squared error predictor estimate reference predictor estimate episode error current previous episodes averaged signal maximum error given fig dayan grid world arrows indicate policy start state policy black squares indicate prediction given starting state darker square higher expected visitation notice graying around central path caused action selection mse function different values lowest error indicated markers comparison nmse direct dashed lines solid lines predictions function fixed discount factor summed across signals lowest cumulative error indicated arrows direct predictions arrows predictions note although difficult see confidence intervals included either direct predictions found used normalize errors signal across particular value 
see way attempt treat error signal equally done errors large magnitude signals dominate results normalized values summed across signals averages across runs plotted figure sei sei sei direct sei sei advantage method clear increases expected since methods making predictions experiment figure performs better vast majority signals shown table listed method better signals analysis cases direct method better reveal target signals small magnitudes suggesting approach may susceptible ratio analysis remains done finally analyzed prediction error systems evolve time demonstrated figure selected best plotted performance time across different runs case order signals remained fixed sensible averages could plotted signal signal performance normalized summed across table signal performance figure direct better better fig predictors learn scratch new predictors added every episodes error red right axis goes low predictors green able learn faster direct blue counterparts shading indicates confidence interval active signals expected clearly see srbased predictions green start much higher error direct blue error red drops low newly added predictors able learn quicker less peak overall error direct predictor continual learning setting never opportunity tune optimal evaluation practically fixed used many robotics settings order ensure stable learning small chosen saw figure advantage using predictions enhanced smaller ideally however would imagine fully developed system would use method adapting adadelta evaluation robot arm tabular settings like dayan grid world useful enabling analysis providing insight behavior method however goal accelerate learning fig user controls robot arm using joystick trace inside wire maze direction circuit path shown blue real robot states fully observed represented exactly instead must use function approximation demonstrate approach using robot arm learning sensorimotor predictions respect policy task user controls robot arm via joystick trace circuit inside wire maze see figure rod held robot gripper user performed task approximately minutes completing around circuits experiment used six different prediction targets current position speed shoulder rotation elbow flexion joints new predictor activated every timesteps note robot reports sensor updates demonstration discount factor used four signals used input function approximator current position decaying trace position shoulder elbow joints decaying trace joint calculated trj trj posj inputs normalized joint ranges observed experiment passed tilecoding tilings width total memory size additionally bias unit added resulting binary feature vector length maximum active features timestep hashing collisions reduce number use decaying predictors starts decays linearly zero entire dataset timestep divided number active features finally predictor offset starts first activated decays rate predictors compare prediction error compute running mse signal according timestep sum taken previous timesteps unlike previous tabular domain ideal estimator compare instead compare predictions actual return order treat signal equally normalize errors according note nmse allows compare predictions single signal two methods tell accurate predictions fig minute run tracing maze circuit new predictor added every timesteps nmse errors summed across predictors allow comparison signals sei max sei direct sei sei sei figures show single run approximately minutes length single ordering predictors used figure shows error across predictors figure separates 
predictor see clear advantage using predictions signals unlike previous tabular results little difference performance first predictor shoulder current even learned investigate ran experiments signal learned beginning run observed performance rarely worse sometimes even better using srbased method suggests approach robust expected experimentation needed advantages scaling paper analyzed single policies discount functions setting gvf framework proposed used rather imagined massive numbers gvfs many policies timescales used represent complex models world setting note using predictions offer additional benefits allowing robot less consider single policy collection srs learned discount functions predictors represent predictions using predictors first advantage far fewer gvfs need updated timestep saving computational costs second benefit potential reduce number weights used system example consider learning tabular setting states using linear estimators predictions number weights needed shown fixed total number weights used direct prediction approach greater new predictor demonstrated behaviour tabular grid world robot arm results suggest effective method improving learning rate sample efficiency robots learning real world several clear opportunities research topic first provide greater understanding given fixed signals better predicted directly rather work using function approximation preliminary insight yet gained setting another opportunity research explore using srbased predictions discount functions finally suggest predictions deep feature learning incrementally constructed architecture would powerful tool support continual developmental learning robotic domains widespread applications eferences fig results figure nmse individual predictors normalized vii related work idea originally introduced function approximation method however recently applied settings used instance transfer learning problems allowing agents generalize better across similar different tasks define intrinsic rewards option discovery algorithms gvfs originally proposed method building agent overall knowledge modular way date primarily used fixed policies unreal agent powerful demonstration usefulness multiple predictions auxiliary tasks viewed gvfs shown accelerate improve robustness learning finally idea closest work concept universal value functions uvfas uvfas gvfs generalization value functions however instead generalizing multiple predictors discount factors generalize value functions goals parametrized way believe result idea uvfas complementary could fact eventually combined future work viii conclusions paper showed successor representation although originally introduced another purpose used accelerate learning continual learning setting robot incrementally constructs models world collection predictions known general value functions gvfs enables given prediction modularized two components one representing dynamics environment representing target signal signal prediction allows robot reuse existing knowledge adding new prediction target speeding learning ring continual learning reinforcement environments dissertation university texas austin oudeyer kaplan hafner intrinsic motivation systems autonomous mental development ieee transactions evolutionary computation vol sutton modayil delp degris pilarski white precup horde scalable architecture learning knowledge unsupervised sensorimotor interaction proceedings international joint conference autonomous agents multiagent systems aamas sutton learning predict methods 
temporal differences machine learning vol sutton barto reinforcement learning introduction mit press mnih kavukcuoglu silver rusu veness bellemare graves riedmiller fidjeland ostrovski petersen beattie sadik antonoglou king kumaran wierstra legg hassabis control deep reinforcement learning nature vol silver schrittwieser simonyan antonoglou huang guez hubert baker lai bolton chen lillicrap hui sifre van den driessche graepel hassabis mastering game without human knowledge nature vol dayan improving generalization temporal difference learning successor representation neural computation vol barreto dabney munos hunt schaul silver van hasselt successor features transfer reinforcement learning advances neural information processing systems nips zeiler adadelta adaptive learning rate method corr vol modayil white sutton nexting reinforcement learning robot adaptive behavior vol kulkarni saeedi gautam gershman deep successor reinforcement learning corr vol machado rosenbaum guo liu tesauro campbell eigenoption discovery deep successor representation proceedings international conference learning representations iclr jaderberg mnih czarnecki schaul leibo silver kavukcuoglu reinforcement learning unsupervised auxiliary tasks proceedings international conference learning representations iclr schaul horgan gregor silver universal value function approximators proceedings international conference machine learning icml | 2 |
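As a concrete illustration of the factorisation exploited in the row above (environment dynamics in the successor representation, prediction targets in per-cumulant weights), here is a minimal tabular sketch. The class name, step sizes and the single shared gamma are illustrative assumptions; the function-approximation variant used on the robot arm would replace the one-hot state with the tile-coded feature vector and the matrix M with successor features.

```python
import numpy as np

class SRPredictor:
    """Tabular sketch: one successor representation shared by many GVF cumulants,
    following the factorisation prediction = M @ w discussed in the text."""

    def __init__(self, n_states, gamma=0.9, alpha_sr=0.1, alpha_w=0.1):
        self.M = np.zeros((n_states, n_states))   # discounted expected state visitation
        self.w = {}                                # one weight vector per cumulant / GVF
        self.gamma, self.alpha_sr, self.alpha_w = gamma, alpha_sr, alpha_w

    def add_gvf(self, name):
        # an incrementally added prediction reuses the dynamics already stored in M
        self.w[name] = np.zeros(self.M.shape[0])

    def update(self, s, s_next, cumulants):
        # TD(0) update of the SR row for s: target is e_s + gamma * M[s_next]
        onehot = np.zeros(self.M.shape[0])
        onehot[s] = 1.0
        self.M[s] += self.alpha_sr * (onehot + self.gamma * self.M[s_next] - self.M[s])
        # each cumulant only needs its expected one-step value per state
        for name, c in cumulants.items():
            self.w[name][s] += self.alpha_w * (c - self.w[name][s])

    def predict(self, s, name):
        # GVF prediction: inner product of the SR row and the cumulant weights
        return float(self.M[s] @ self.w[name])
```

A newly added GVF only has to fit its one-step cumulant, which is why it benefits immediately from whatever dynamics have already been learned in M, matching the faster learning of late-added predictors reported in the experiments.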
foundations declarative data analysis using limit datalog programs nov mark kaminski bernardo cuenca grau egor kostylev boris motik ian horrocks department computer science university oxford abstract motivated applications declarative data analysis study datalog extension positive datalog arithmetic functions integers language known undecidable propose two fragments limit datalog predicates axiomatised keep numeric values allowing show fact entailment combined data complexity moreover additional stability requirement causes complexity drop ime ime respectively finally show stable datalog express many useful data analysis tasks results provide sound foundation development advanced information systems introduction analysing complex datasets currently hot topic information systems term data analysis covers broad range techniques often involve tasks data aggregation property verification query answering tasks currently often solved imperatively using java scala specifying manipulate data undesirable objective analysis often obscured evaluation concerns recently argued data analysis declarative alvaro markl seo shkapsky users describe desired output rather compute example instead computing shortest paths graph concrete algorithm one describe path length select paths minimum length specification independent evaluation details allowing analysts focus task hand evaluation strategy chosen later general parallel incremental evaluation algorithms reused free essential ingredient declarative data analysis efficient language capture relevant tasks datalog prime candidate since supports recursion apart recursion however data analysis usually also requires integer arithmetic capture quantitative aspects data length shortest path research combining two dates back mumick kemp stuckey beeri van gelder consens mendelzon ganguly ross sagiv currently experiencing revival faber mazuran extensive body work however focuses primarily integrating recursion arithmetic aggregate functions coherent semantic framework technical difficulties arise due nonmonotonicity aggregates surprisingly little known computational properties integrating recursion arithmetic apart straightforward combination undecidable dantsin undecidability also carries formalisms practical datalogbased systems boom alvaro deals shkapsky myria wang socialite seo overlog loo dyna eisner filardo yedalog chin develop sound foundation declarative data analysis study datalog datalog integer arithmetic comparisons main contribution new limit datalog fragment like existing data analysis languages powerful flexible enough naturally capture many important analysis tasks however unlike datalog existing languages reasoning limit programs decidable becomes tractable data complexity additional stability restriction limit datalog intensional predicates numeric argument limit predicates instead keeping numeric values given tuple objects predicates keep minimal min maximal max bounds numeric values entailed tuple example encode weighted directed graph using ternary predicate edge rules min limit predicate compute cost shortest path given source node every node edge rules dataset entail fact cost shortest path hence holds since cost shortest path also rule intuitively says reachable cost edge cost reachable cost different datalog implicit semantic nection semantic connections allow prove decidability limit datalog provide direct semantics limit predicates based herbrand interpretations also show semantics axiomatised standard datalog formalism thus seen fragment 
datalog inherits properties monotonicity existence least fixpoint model dantsin contributions follows first introduce limit datalog programs argue naturally capture many relevant data analysis tasks prove fact entailment limit datalog undecidable restricting use multiplication becomes combined data complexity respectively achieve tractability data complexity important robust behaviour large datasets additionally introduce stability restriction show prevent expressing relevant analysis tasks proofs results given appendix paper preliminaries section recapitulate definitions datalog integers call datalog syntax vocabulary consists predicates objects object variables numeric variables predicate integer arity position either object numeric sort object term object object variable numeric term integer numeric variable form numeric terms standard arithmetic functions constant object integer magnitude integer absolute value standard atom form predicate arity term whose type matches sort position comparison atom form standard comparison predicates vand vand numeric terms rule form standard atoms comparison atoms variable occurs atom head standard body comparison body body ground instance obtained substituting variables constants datalog program finite set rules predicate intensional idb occurs head rule whose body empty otherwise extensional edb term atom rule program ground contains variables fact ground standard atom program dataset fact often say contains fact write actually means write tuple terms often treat conjunctions tuples sets write say semantics herbrand interpretation necessarily finite set facts satisfies ground atom written standard atom evaluating arithmetic functions produces fact comparison atom evaluating arithmetic functions comparisons produces true notion satisfaction extended conjunctions ground atoms rules programs logic rule universally quantified model program entails fact written holds whenever complexity paper study computational properties checking combined complexity assumes part input contrast data complexity assumes given program dataset part input fixed unless otherwise stated numbers input coded binary size kpk size representation checking undecidable even arithmetic function dantsin presburger arithmetic logic constants functions equality comparison predicates interpreted integers complexity checking sentence validity whether sentence true models presburger arithmetic known number quantifier alternations number variables quantifier block fixed berman haase limit programs towards introducing decidable fragment datalog data analysis first note undecidability proof plain datalog outlined dantsin uses atoms least two numeric terms thus motivate introducing fragment first prove undecidability holds even atoms contain one numeric term proof uses reduction halting problem deterministic turing machines ensure standard atom one numeric term combinations time point tape position encoded using single integer theorem datalog program fact checking undecidable even contains standard atom one numeric term next introduce limit datalog limit predicates keep bounds numeric values language seen either semantic syntactic restriction datalog definition limit datalog predicate either object predicate numeric positions numeric predicate last position numeric numeric predicate either ordinary numeric predicate limit predicate latter either min max predicate atoms object predicates object atoms analogously types datalog rule limit datalog rule atom object ordinary numeric limit atom object 
limit atom limit datalog program program containing limit rules homogeneous contain min max predicates rest paper make three simplifying assumptions first numeric atoms occurring rule body comparison atoms head contain arithmetic functions second numeric variable rule occurs one standard body atom third distinct rules program use different variables third assumption clearly variables universally quantified names immaterial moreover first two assumptions well since rule exists logically equivalent rule satisfies assumptions particular replace atom conjunction fresh variable fresh predicate axiomatised hold integers follows also replace atoms conjunction fresh variable intuitively limit fact says value tuple objects least max min example fact shortest path example section says node reachable via path cost capture intended meaning require interpretations closed limit whenever contains limit fact also contains facts implied according predicate type example captures observation existence path cost implies existence path cost definition interpretation limit fact min resp max predicate holds integer resp interpretation model limit program notion entailment modified take account models semantics limit predicates limit datalog program axiomatised explicitly extending following rules fresh predicate thus limit datalog seen syntactic fragment datalog min predicate max predicate limit program reduced homogeneous program however sake generality technical results require programs homogeneous proposition limit program fact homogeneous program fact computed linear time intuitively program proposition obtained replacing min max predicates fresh max resp min predicates negating numeric arguments section shown limit datalog compute cost shortest paths graph next present examples data analysis tasks formalism handle examples assume objects input arranged arbitrary linear order using facts first next next use order simulate aggregation means recursion example consider social network agents connected follows relation agent introduces tweets message agent retweets message least kai agents follows tweet message kai positive threshold uniquely associated goal determine agents tweet message eventually achieve using limit datalog encode network structure dataset dtw containing facts follows follows ordinary numeric facts kai threshold kai program ptw containing rules encodes message propagation max predicate follows first follows first next next follows specifically ptw dtw iff tweets message intuitively true agents according order least agents follows tweet message rules initialise first agent order max predicate first agent tweets message rule overrides rule rules recurse order compute stated example limit datalog also solve problem counting paths pairs nodes directed acyclic graph encode graph obvious way dataset dcp uses object predicates node edge program pcp consisting rules max predicates counts paths node node node first edge first next next edge specifically pcp dcp iff least paths exist node node intuitively true least sum number paths according order exists edge rule says node one path rule initialises aggregation saying first node zero paths rule overrides exists edge finally rule propagates sum next order rule overrides edge adding number paths example assume graph example node associated bandwidth bai limiting number paths going bai count paths compliant bandwidth requirements extend dcp dataset dbcp additionally contains ordinary numeric fact bai node define pbcp replacing rule pcp following rule pbcp 
dbcp iff exist least paths node node bandwidth requirement satisfied nodes path fixpoint characterisation entailment programs often grounded eliminate variables thus simplify presentation limit datalog however numeric variables range integers grounding infinite thus first specialise notion grounding definition rule variable numeric variable occurs limit body atom limit program rules contains rule obtained replacing variable occurring numeric argument limit atom constant obviously semigrounding next characterise entailment limit programs compactly represent interpretations interpretation contains min predicate either limit value exists holds dually max predicate thus characterise value tuple objects need limit value information value exists definition set facts integers extended special symbol holds limit facts interpretations correspond naturally recast notions satisfaction model using unlike interpretations number facts limit program bounded definition interpretation corresponds contains exactly object ordinary numeric facts limit predicate tuple objects integer resp min resp max predicate let corresponding interpretations satisfies ground atom written program written finally holds example let interpretation consisting ordinary numeric predicate max predicate objects corresponding next introduce immediate consequence operator limit program assume simplicity apply rule correctly handling limit atoms operator converts linear integer constraint captures ground instances applicable interpretation corresponding solution applicable otherwise added limit atom min max atom minimal maximal solution computed updated limit value least application keeps best limit value definition limit program conjunction comparison atoms containing object ordinary numeric atom exists limit atom exists iii resp min resp max atom rule applicable integer solution assume applicable object ordinary numeric atom let min resp max atom optimum value opt smallest resp largest value solutions bound value solutions exists moreover opt operator maps smallest pseudointerpretation satisfying applicable finally tnp example let max predicates solution therefore rule applicable empty moreover conjunction two therefore rule applicable finally max predicate opt max consequently lemma limit program operator monotonic moreover monotonicity ensures existence closure least tnp following theorem characterises entailment provides bound number facts closure theorem limit program fact also implies proofs first third claim theorem use monotonicity analogously plain datalog second claim holds since pair distinct facts tnp must derived distinct rules decidability entailment start investigation computational properties limit datalog theorem bounds cardinality closure program bound magnitude integers occurring limit facts fact integers arbitrarily large moreover due multiplication checking rule applicability requires solving nonlinear inequalities integers undecidable theorem limit program fact checking checking applicability rule undecidable proof theorem uses straightforward reduction hilbert tenth problem checking rule applicability undecidable due products variables inequalities however linear inequalities prohibit multiplying variables problem solved polynomial time bound number variables thus ensure decidability next restrict limit programs contain linear numeric terms examples satisfy restriction definition limit rule numeric term form distinct numeric variable occurring limit body atom term contains variable occurring limit body atom iii 
term constructed using multiplication integers variables occurring limit body atoms program contains rest section show entailment limitlinear programs decidable provide tight complexity bounds upper bounds obtained via reduction validity presburger formulas certain shape lemma program wnand fact exists presburger sentence valid conjunction possibly negated atoms moreover bounded polynomially kpk number bounded polynomially exponentially krk finally magnitude integer bounded maximal magnitude integer reduction lemma based three main ideas first limit atom program use boolean variable def indicate atom form exists boolean variable fin indicate whether value finite integer variable val capture finite second rule encoded universally quantified presburger formula replacing standard atom encoding finally entailment encoded sentence stating every either rule satisfied holds requires universal quantifiers quantify models existential quantifiers negate universally quantified program lemma bounds magnitude integers models presburger formulas lemma bounds follow recent deep results sets connection presburger arithmetic chistikov haase note limit program normalised polynomial time program thank christoph haase providing proof lemma lemma let presburger sentence conjunction possibly negated atoms size mentioning variables maximal magnitude integer valid valid models integer variable assumes value whose magnitude bounded log lemmas provide bounds size entailment theorem program dataset fact pseudomodel exists magnitude integer bounded polynomially largest magnitude integer exponentially krk theorem following nondeterministic algorithm decides compute guess satisfies bounds given theorem return true step requires exponential polynomial data time increase maximal size rule hence step nondeterministic exponential polynomial data step requires exponential polynomial data time solve system linear inequalities theorem proves bounds correct tight theorem program fact deciding combined data complexity upper bounds theorem follow theorem data complexity shown reduction square tiling problem combined complexity shown similar reduction succinct version square tiling tractability entailment stability tractability data complexity important large datasets next present additional stability condition brings complexity entailment ime combined ime data complexity plain datalog cyclic dependencies limit programs fixpoint plain datalog program computed ime data complexity however program computation may terminate since repeated application produce larger larger numbers thus need way identify numeric argument limit fact grows decreases without bound moreover obtain procedure tractable data complexity divergence detected polynomially many steps example illustrates achieved analysing cyclic dependencies example let contain facts rules max predicates applying first rule copies value applying second rule increases value thus diverge existence cyclic dependency however necessarily lead divergence let program obtained adding max fact replacing first rule cyclic dependency still exists increase values bounded value independent thus neither diverge rest section extend defining integer formalise cyclic dependencies follows definition limit predicate tuple objects let vbb node unique value propagation graph limitlinear program directed weighted graph gjp defined follows limit fact vbb rule applicable head form vaa body atom vbb variable occurs term hvbb vaa said produce edge hvbb vaa edge hvbb vaa produced opt otherwise opt max max 
min min min max opt weight edge given max produces cycle gjp cycle sum weights contributing edges greater intuitively gjp describes limit predicate objects operator propagates facts presence node vbb indicates holds uniquely identified given vbb edge hvbb vaa indicates least one rule applicable occurs moreover applying produces fact satisfies max predicates analogously types words edge indicates application propagate value vbb vaa increasing least thus presence cycle gjp indicates repeated rule applications might increment values nodes cycle stable programs example shows presence cycle gjp imply divergence atoms corresponding nodes cycle weight cycle may decrease certain rule applications longer positive motivates stability condition edge weights gjp may grow never decrease rule application hence weight cycle becomes positive remain positive thus guarantee divergence atoms corresponding nodes intuitively stable whenever rule applicable rule also applicable larger limit values applying increases value head definition defines stability condition gjp please note gjp gjp corresponding value propagation graphs definition program stable gjp gjp hvbb vaa imply program stable stable example program example stable hva hva program integer tnp stability ensures edge weights grow rule application thus recursive application rules producing edges involved cycle leads divergence shown following lemma lemma stable program node vaa cycle gjp algorithm uses observation deterministically compute fixpoint algorithm iteratively applies however step computes corresponding value propagation graph line node vaa occurs cycle line replaces line lemma sound moreover since algorithm repeatedly applies necessarily derives fact eventually finally lemma shows algorithm terminates time polynomial number rules program intuitively proof lemma shows without introducing new edge new positive weight cycle value propagation graph repeated application necessarily converges steps moreover number edges gjp quadratic new edge new positive weight cycle introduced many times lemma applied stable program algorithm terminates iterations loop lines lemmas imply following theorem theorem stable program dataset fact algorithm decides time polynomial exponential krk since running time exponential maximal size rule increase rule sizes algorithm entailment stable programs input stable program fact output true repeat gjp vaa cycle gjp replace return true false otherwise algorithm combined preprocessing step provides exponential time decision procedure stable programs upper bound tight since entailment plain datalog already combined data complexity first condition definition ensures variable occurring numeric term contributes value term example disallows terms since rule term head may violate second condition moreover second condition definition ensures value numeric variable occurring head increases type body atom introducing increases occurs max body atom decreases otherwise value numeric term head essential first condition stability definition finally third condition definition ensures comparisons invalidated increasing values variables involved required conditions stability type consistency purely syntactic condition checked looking one rule one atom time hence checking type consistency feasible pace proposition program stable theorem stable program fact checking combined imecomplete data complexity proposition checking whether program accomplished pace programs unfortunately class stable programs recognisable shown reduction hilbert 
tenth problem proposition checking stability program undecidable next provide sufficient condition stability captures programs examples intuitively definition syntactically prevents certain harmful interactions second rule program example numeric variable occurs max atom lefthand side comparison atom thus rule applicable value necessarily applicable breaks stability definition rule typeconsistent numeric term form integer nonzero integer called coefficient variable limit atom variable occurring positive resp negative coefficient also occurs unique limit body atom resp different type min max comparison variable occurring positive resp negative coefficient also occurs unique min resp max body atom variable occurring positive resp negative coefficient also occurs unique max resp min body atom program rules moreover program program obtained first simplifying numeric terms much possible conclusion future work introduced several fragments datalog integer arithmetic thus obtaining sound theoretical foundation declarative data analysis see many challenges future work first formalism extended aggregate functions certain forms aggregation simulated iterating object domain examples section solution may cumbersome practical use relies existence linear order object domain strong theoretical assumption explicit support aggregation would allow formulate tasks ones section intuitively without relying ordering assumption second unclear whether integer constraint solving strictly needed step algorithm may possible exploit stability compute efficiently third shall implement algorithm apply practical data analysis problems fourth would interesting establish connections results existing work artefact systems damaggio koutsos vianu faces similar undecidability issues different formal setting acknowledgments thank christoph haase explaining results presburger arithmetic sets well providing proof lemma work also benefited discussions michael benedikt research supported royal society epsrc projects dbonto references alvaro peter alvaro tyson condie neil conway khaled elmeleegy joseph hellerstein sell sears boom analytics exploring declarative programming cloud eurosys acm beeri catriel beeri shamim naqvi oded shmueli shalom tsur set constructors logic database language log berman leonard berman complexitiy logical theories theor comput byrd richard byrd alan goldman miriam heller recognizing unbounded integer programs oper chin brian chin daniel von dincklage vuk ercegovac peter hawkins mark miller franz josef och christopher olston fernando pereira yedalog exploring knowledge scale snapl chistikov haase dmitry chistikov christoph haase taming set icalp consens mendelzon mariano consens alberto mendelzon low complexity aggregation graphlog datalog theor comput damaggio elio damaggio alin deutsch victor vianu artifact systems data dependencies arithmetic acm trans database dantsin evgeny dantsin thomas eiter georg gottlob andrei voronkov complexity expressive power logic programming acm comput eisner filardo jason eisner nathaniel wesley filardo dyna extending datalog modern datalog faber wolfgang faber gerald pfeifer nicola leone semantics complexity recursive aggregates answer set programming artif ganguly sumit ganguly sergio greco carlo zaniolo extrema predicates deductive databases comput syst erich subclasses presburger arithmetic hierarchy theor comput haase christoph haase subclasses presburger arithmetic weak exp hierarchy hougardy stefan hougardy algorithm graphs negative cycles inf process kannan ravi 
kannan minkowski convex body theorem integer programming math oper kemp stuckey david kemp peter stuckey semantics logic programs aggregates islp koutsos vianu adrien koutsos victor vianu views business artifacts comput system loo boon thau loo tyson condie minos garofalakis david gay joseph hellerstein petros maniatis raghu ramakrishnan timothy roscoe ion stoica declarative networking commun acm markl volker markl breaking chains declarative data analysis data independence big data era pvldb mazuran mirjana mazuran edoardo serra carlo zaniolo extending power datalog recursion vldb mumick inderpal singh mumick hamid pirahesh raghu ramakrishnan magic duplicates aggregates vldb pages papadimitriou christos papadimitriou complexity integer programming acm ross sagiv kenneth ross yehoshua sagiv monotonic aggregation deductive databases comput system uwe complexity presburger arithmetic fixed quantifier dimension theory comput seo jiwon seo stephen guo monica lam socialite efficient graph query language based datalog ieee trans knowl data shkapsky alexander shkapsky mohan yang matteo interlandi hsuan chiu tyson condie carlo zaniolo big data analytics datalog queries spark sigmod acm van gelder allen van gelder semantics aggregation pods von zur gathen sieveking joachim von zur gathen malte sieveking bound solutions linear integer equalities inequalities proc ams wang jingjing wang magdalena balazinska daniel halperin asynchronous recursive datalog evaluation engines pvldb proofs section theorem datalog program fact checking undecidable even contains standard atom one numeric term proof prove claim presenting reduction halting problem deterministic turing machines empty tape let arbitrary deterministic turing machine finite alphabet containing blank symbol finite set states containing initial state halting state transition function assume works tape infinite right starts empty tape head positioned leftmost cell never moves head left edge tape encode time point using integer index tape positions using integers thus time position necessarily empty encode combination time point tape position using single integer use idea encode state execution using following facts num true positive number time true encodes time point tape says symbol occupies position tape time defined pos says head points position tape time state says machine state time halts propositional variable saying machine halted next give datalog program simulates behaviour empty tape represent alphabet symbol using object constant represent state using object constant furthermore abbreviate finally abbreviate conjunction disjunction strictly speaking disjunctions allowed rule bodies however rule disjunction body form corresponds rules use former form sake clarity considerations mind program contains rules num num num time time time tape pos state state halts time tape pos tape time num tape moreover alphabet symbol states direction contains rules time state tape pos tape time state tape pos state time state tape pos num pos time state tape pos num pos rules initialise num holds positive integers rules initialise time holds integer rules initialise state time rule derives halts point turing machine enters halting state remaining rules encode evolution state based following idea variable encodes time point using value variable encodes position time point holds moreover position time point encoded obtained encodings positions obtained respectively since goal prove undecidability using simulate subtraction looking value observations mind one 
see rule copies unaffected part tape time point time point moreover rule pads tape filling location blank symbol since division supported language express condition finally rule updates tape position head rule updates state rules move head left right respectively consequently halts halts empty tape proposition limit program fact homogeneous program fact computed linear time proof let arbitrary limit program without loss generality construct program containing max predicates min predicate let fresh max predicate uniquely associated construct modifying rule follows min predicate replace head body atom min predicate variable replace atom fresh variable replace occurrences rule body atom min predicate integer replace atom finally min fact let otherwise let consider arbitrary interpretation let interpretation obtained replacing min fact straightforward see thus proofs section use standard notion partial mappings variables constants formula substitution formula obtained replacing free variable defined proposition rule mapping variables integers integer solution proof assume integer solution consider atom show holds comparison atom claim straightforward due object atom ordinary numeric atom ground otherwise would hold could solution max atom since either integer former case since solution since max predicate holds latter case holds due min atom proof analogous previous case proof direction analogous omit sake brevity definition given interpretation program let ground instance rule fact let let inp let lemma program operator monotonic moreover interpretation implies proof operator standard immediate consequence operator datalog applied program obtained extending rules section encoding semantics limit predicates thus claims lemma hold usual way dantsin lemma interpretation corresponding limit program interpretation corresponds proof suffices show fact following claims hold object fact ordinary numeric fact limit fact form integer limit fact form claim consider arbitrary object fact form proof ordinary numeric facts analogous assume rule grounding exist head must since holds well proposition ensures solution moreover thus assume exist rule integer solution proposition ensures holds well thus claim consider arbitrary max fact form proof min fact analogous assume rule grounding exist holds well proposition ensures solution moreover opt therefore opt assume exist rule integer solution opt proposition ensures holds well thus holds fact claim consider arbitrary max fact form proof min fact analogous following let assume program contains finitely many rules infinitely many facts produced rule infinite sequence groundings proposition ensures satisfies therefore opt holds assume rule exists opt infinite sequence solutions exists proposition ensures well thus exists therefore consequently holds lemma limit program operator monotonic moreover proof immediate lemmas theorem limit program fact also implies proof inductively applying lemma interpretation inp clearly corresponds tnp thus also correspond object ordinary numeric facts consider arbitrary max predicate tuple objects consider following cases implies tnp tnp finally least fixpoint holds well exists max exists inp least fixpoint finally tnp holds operator exists inp tnp holds tnp holds well analogous reasoning holds min predicates corresponds first third claim theorem follow straightforwardly lemma moreover contain one fact per combination limit predicate tuple objects corresponding arity program rule produces one fact implies second claim theorem 
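Before the proofs for the next section, a small executable sketch may help connect the fixpoint characterisation proved above to the shortest-path example of the main text. The function below iterates a naive analogue of the immediate consequence operator on a pseudointerpretation that keeps one best (minimal) bound per node; the edge list, node names and nonnegative weights are illustrative assumptions, and with a negative-weight cycle the loop would not terminate, which is exactly the kind of divergence the stability condition is meant to rule out.

```python
from math import inf

def shortest_path_limits(edges, source):
    """Naive fixpoint sketch for the min-limit program
        sp(source, 0)   and   sp(y, m + w) <- sp(x, m), edge(x, y, w).
    The dict J plays the role of a pseudointerpretation: one best bound per node."""
    J = {source: 0}
    changed = True
    while changed:                      # iterate the consequence operator to its least fixpoint
        changed = False
        for (x, y, w) in edges:
            if x in J and J.get(y, inf) > J[x] + w:
                J[y] = J[x] + w         # a min atom is only updated when the bound improves
                changed = True
    return J

# e.g. shortest_path_limits([("a", "b", 3), ("b", "c", 2), ("a", "c", 10)], "a")
# returns {"a": 0, "b": 3, "c": 5}
```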
proofs section theorem limit program fact checking checking applicability rule undecidable proof present reduction hilbert tenth problem determine whether given polynomial variables equation integer solutions well known problem remains undecidable even solutions must nonnegative integers use variant proof polynomial let program containing rules unary min predicate nullary object predicate obvious nonnegative integer solution moreover rule applicable nonnegative integer solution although presburger arithmetic propositional variables clearly axiomatised using numeric variables hence rest section use propositional variables presburger formulas sake clarity definition object predicate ordinary numeric predicate limit predicate objects integer let def def bak def fin distinct propositional variables let val distinct integer variable moreover let resp max resp min predicate program pres pres presburger formula pres numeric variables obtained replacing atom encoding pres defined follows pres comparison atom pres def object atom form pres def bak ordinary numeric atom form pres def val limit atom form let let assignment boolean integer variables corresponds following conditions hold specified integer def true def bak true def true exists fin true val note definition ranges integers excludes val equal integer thus contain thus implies fin false also note assignment corresponds precisely one however corresponds infinitely many assignments since definition restrict value variables def def bak def fin val moreover two assignments corresponding may differ value val fin set false assignments differ values fin val def set false assignments lemma let let variable assignment corresponds pres ground atom pres rule proof claim consider possible forms comparison atom truth independent claim immediate object fact pres def def true def claim holds ordinary numeric fact proof analogous case object facts limit fact either integer exists either way def true holds moreover fin false holds former val holds latter case thus pres clearly holds converse direction analogous omit sake brevity claim let arbitrary rule let interpretation corresponding definition latter equivalent ground instance semantics universal quantification logic latter claim equivalent ground instance note construction pres pres atom grounding thus pres pres finally groundings equivalently seen variable assignments universally quantified numeric variables pres claim follows immediately claim lemma program fact exists presburger sentence valid conjunction possibly negated atoms moreover bounded polynomially kpk number bounded polynomially exponentially krk finally magnitude integer bounded maximal magnitude integer proof lemma immediately implies sentence pres valid contains variables def def bak def fin val occurring pres pres clearly polynomially bounded kpk magnitude integer bounded maximum magnitude integer let sentence obtained converting conjunct pres form cnf formulae equivalent form pres rule integer exponentially bounded kri linearly bounded kri moving quantifiers front formula pushing negations inwards finally obtain formula form formula required form bounded polynomially kpk number bounded polynomially exponentially krk bounded linearly kpk lemma let presburger sentence conjunction possibly negated atoms size mentioning variables maximal magnitude integer valid valid models integer variable assumes value whose magnitude bounded log proof let seen system linear inequalities maximal magnitude numbers bounded proposition chistikov haase 
adapted work von zur gathen sieveking set solutions represented set magnitude integers bounded log consequently disjunction corresponds set log magnitude integer still bounded formula corresponds projection variables set form projection projection theorem chistikov haase implies satisfying assignments formula represented set magnitude integer bounded log since satisfying assignment satisfying assignment involving numbers follows satisfiable satisfiable models absolute value every integer variable bounded implies claim lemma since valid unsatisfiable theorem program dataset fact exists magnitude integer bounded polynomially largest magnitude integer exponentially krk proof direction trivial direction assume holds let obtained removing fact unify atom clearly let presburger sentence lemma sentence valid satisfies following conditions number polynomial turn bounded krk moreover contains facts unify atoms bounded namely linearly product kpk number linear product size hence number variables linear let maximal magnitude integer thus well lemma assignment exists magnitude integer variable bounded log clearly polynomial exponential required moreover clearly pres pres let corresponding lemma construction magnitude integer bounded furthermore let restriction facts unify head least one rule clearly still finally holds construction implies claim lemma program dataset exists polynomial computed nondeterministic polynomial time kpk kdk kjk deterministic krk polynomial time kpk kdk kjk proof let rule applicable program therefore semiground well thus rule contribute one fact definition smallest compute set containing object ordinary numeric fact fact limit predicate fact min resp max predicate implies resp complete proof lemma next argue set computed within required time bounds consider arbitrary rule let subset containing facts unify body atom note krk rule applicable conjunction integer solution construction linear krk number variables linear krk magnitude integer exponentially bounded krk kjk checking whether integer solution krk kjk ime kjkp krk polynomial argue next first consider former claim let maximal magnitude integer conjunction contains numbers whose magnitude respectively thus moreover results papadimitriou show exists polynomial magnitude integer solution bounded krk exists polynomial binary representation thus requires krk kjk bits guess polynomial time next consider latter claim theorem kannan checking satisfiability fixedparameter tractable number variables exists polynomial solution computed time krk kjk since krk clearly holds exists polynomial satisfiability checked time thus krk assume applicable object atom assume limit atom argue opt computed within required time bounds using following two steps depending whether min max predicate check whether value solutions check whether integer linear program subject bounded byrd showed amounts checking boundedness corresponding linear relaxation turn reduced checking linear feasibility solved deterministic polynomial time krk kjk problem bounded compute optimal solution reduced polynomially many krk kjk feasibility checks shown papadimitriou corollary binary search feasibility check krk kjk ime kjkp krk thus computed nondeterministic polynomial time krk kjk deterministic polynomial time kjkp krk implies claim lemma deciding data complexity program fact proof instance square tiling problem given integer coded unary set tiles two compatibility relations problem determine whether exists tiling square holds holds known thus prove claim lemma reduce 
complement problem presenting fixed program ptiling dataset depends showing solution ptiling nosolution encoding uses object edb predicates succ incompatibleh incompatiblev ordinary numeric edb predicates shift tileno numtiles maxtiling nullary object idb predicate nosolution unary min idb predicate unary max idb predicate tiling program ptiling contains rules abbreviates tiling tiling numtiles shift tileno succ shift tileno incompatibleh tiling tiling numtiles shift tileno succ shift tileno incompatiblev tiling tiling maxtiling nosolution dataset contains facts fresh objects distinct objects sponding tiles since coded unary although numbers exponential computed polynomial time represented using polynomially many bits numtiles tileno incompatibleh incompatiblev maxtiling succ shift reduction uses following idea facts associate tile integer hence rest discussion distinguish tile number allows represent tiling using number thus given number encodes tiling number corresponds tile assigned position integers thus numeric variable assigned encoding tiling numeric variable assigned factor corresponding position conjunction tileno true assigned tile object corresponding position tiling encoded complete construction represent position pair objects associated corresponding factor using facts facts provide ordering allows identify adjacent positions finally fact records maximal number encodes tiling outlined earlier program ptiling simply checks tilings rule ensures tiling encoded checked moreover tiling holds rules derive tiling either horizontal vertical compatibility requirement violated tiling encoded finally rule detects solution exists tiling derived lemma deciding program fact proof present reduction succinct square tiling problem instance problem given integer coded unary set containing tiles horizontal vertical compatibility relations respectively proof lemma however objective tile square positions known imecomplete thus prove claim lemma reduce complement problem presenting program showing solution nosolution main idea behind reduction similar lemma program contains rules associate tile number using ordinary numeric predicate tileno encode horizontal vertical incompatibility relations using object predicates incompatibleh incompatiblev tileno incompatibleh incompatiblev main difference lemma order obtain polynomial encoding represent position grid explicitly using pair objects instead encode position using pair objects read representing numbers respectively seen binary number slight abuse notation often identify tuple number encodes use tuples arithmetic expressions positions encoded using bits also need ensure distance positions requires bits rest proof stand tuples respectively whose length often implicit context tuples occur similarly tuples distinct variables whose length also clear context axiomatise ordering numbers bits program contains rules unary object predicate succ object predicate succ object predicate rules ensure succ encode numbers bits particular rule encodes binary incrementation holds position zeros ones rules ensure analogous property succ numbers bits succ succ analogously proof lemma encoded tilings using numbers compute maximum number encoding tiling program contains rules maxtiling unary min predicate auxt min predicate auxiliary rules multiply many times grid positions auxt position consequently rule ensures maxtiling auxt auxt succ auxt auxt succ auxt auxt maxtiling unlike proof lemma include shift factors explicitly since would make encoding exponential moreover 
could precompute shift factors using rules similar would need use values limit predicates multiplication would produce program therefore check tilings using different approach proof lemma construction ensures tiling tiling satisfy compatibility relations given tiling encoded position let program contains rules shiftedtiling max predicate arity unary min predicate rules ensure tiling tiling shiftedtiling understand achieved order grid positions follows consider arbitrary position successor ordering encoding tiling using integer ensures holds number tile assigns position thus rule ensures position satisfies mentioned property rule handles adjacent positions form rule handles adjacent positions form tiling shiftedtiling shiftedtiling succ shiftedtiling shiftedtiling succ shiftedtiling note position thus since shiftedtiling max predicate limit value shiftedtiling always correspond limit value tiling checking horizontal compatibility easy checking vertical compatibility requires dividing would make reduction exponential hence checks compatibility using rules conflict max predicate arity rules ensure tiling tiling position precedes distance ordering labelled tile conflict end assume labelled tile predecessor exists rule says position preceding position left labelled moreover predecessor exists rule says position preceding position labelled moreover rule propagates constraints position reducing distance one rule positions shiftedtiling succ incompatibleh tileno conflict shiftedtiling succ incompatiblev tileno conflict shiftedtiling succ conflict succ conflict shiftedtiling succ conflict succ conflict program also contains rules invalid max predicate rules ensure tiling tiling exists position comes position order satisfy compatibility relations horizontal vertical successor invalid rule determines invalidity position conflicts zero distance rules propagate information preceding positions analogously rules conflict tileno invalid shiftedtiling succ invalid invalid shiftedtiling succ invalid invalid finally program contains rules tiling unary max predicate nosolution nullary predicate rule ensures tiling encoded checked based discussion previous paragraph invalid tiling tiling invalid moreover invalid holds rule ensures tiling encoded considered atom needed rule since numeric variable allowed occur one standard body atom exhaust available tilings rule determines solution exists proof lemma tiling invalid tiling tiling tiling maxtiling nosolution based discussion consequences conclude instance succinct tiling problem solution nosolution proposition fact proof consider arbitrary corresponding interpretation exists fact implies moreover exists fact since implies theorem program fact deciding combined npcomplete data complexity proof lemmas prove hardness moreover following nondeterministic algorithm decides time polynomial kdk exponential kpk kdk compute guess signature number facts absolute values integers bounded theorem check return false return false true otherwise correctness algorithm follows theorem next argue complexity mentioned data complexity holds following observations step time required compute polynomial kdk constant since polynomial kdk constant krk constant kdk magnitude integers exponentially bounded kdk theorem thus number bits needed represent integer polynomial kdk furthermore polynomial kdk constant thus guessed step nondeterministic polynomial time kdk lemma checking amounts checking lemma computed deterministic polynomial time kjk kdk hence kdk kjk polynomial kdk hence step 
requires deterministic polynomial time kdk proposition step amounts checking done time polynomial kjk hence polynomial kdk well finally mentioned combined complexity holds following observations step time required compute exponential kpk kdk constant since exponential kpk kdk constant krk linear kpk constant kdk magnitude integers doubly exponentially bounded kpk kdk theorem thus number bits needed represent integer exponential kpk kdk furthermore exponential kpk kdk constant thus guessed step nondeterministic exponential time kpk kdk lemma checking amounts checking lemma polynomial exists computed deterministic polynomial time kdk kjkp krk turn bounded kdk krk hence step requires deterministic exponential time kpk kdk proposition step amounts checking done time polynomial kjk hence time exponential kpk kdk proofs section arbitrary value propagation graph gjp path gjp nonempty sequence nodes hvi holds starts ends define moreover slight abuse notation sometimes write identify set nodes path simple nodes distinct path cycle definition given limit linear program value propagation graph gjp path gjp weight defined hvi lemma let stable program let let gjp let vaa vbb nodes vab reachable vba path max predicates min predicate max predicate min predicates max predicate min predicate proof consider case max predicates remaining cases analogous proceed induction length base case empty immediate inductive step assume vaa path starting vbb ending node vcc exists edge hvcc vaa produced rule variable occurring grounding hvcc vaa next consider case max predicate case min predicate analogous let following possibilities opt integers opt opt definition definition fact stable moreover opt definition arbitrary consider following two cases inductive hypothesis holds thus consequently opt opt opt holds moreover implies lemma thus proposition definition imply clearly moreover implies lemma thus opt proposition definition imply lemma stable program node vaa cycle gjp proof let gjp let let assume sake contradiction exist cycle node vaa rule applicability monotonic still cycle gjp since stable consider case max predicate remaining case analogous vaa implies moreover implies lemma implies moreover implies either integer larger either way contradicts assumption lemma applied stable program algorithm terminates iterations loop lines proof limit predicate objects let val max predicate val min predicate moreover let set containing rule applicable form monotonicity datalog moreover edge hvbb vaa generated rule definition ensures following property holds val val prove lemma first show following auxiliary claim claim determining tnp value propagation graph gjp determining value propagation graph gjp set nodes node vaa val holds node vbb occurs gjp cycle vaa val val holds simple path gjp ends vaa satisfies node vbb exists path gjp starts vaa ends vbb one following holds val val node vcc path gjp starting vcc ending vaa node vcc proof arbitrary prove claim induction base case consider arbitrary set vertex vaa satisfy properties distinguish two cases exists edge hvbb vaa val val either vbb vbb vaa holds case path vba vaa would simple path gjp would contradict property next show vbb vaa impossible sake contradiction assume vbb vaa holds thus val val property implies val val hence consequently path cycle gjp property val turn contradicts property consequently vbb since assumption val val part claim holds vcc vbb edge hvbb vaa val val rule generates edge hvbb vaa property ensures val val since val max val rule exists satisfies 
val val generate edge ending vaa clearly form applicable holds moreover hence contain variable variable would occur limit body atom would generate edge consequently ground finally applicable val val val contradicts property consequently part claim holds vcc vaa inductive step assume holds set node vaa consider arbitrary set vertex vaa satisfy properties property exists rule val val val generate edge exactly way base case conclude part claim holds vcc vaa consequently rest proof assume generates least one edge let let gjp property val val val exists edge hvbb vaa val val furthermore since val val vba equal vaa path vaa vaa would cycle containing vaa contradicts property hence vbb vaa path vbb vaa simple vbb holds since generates val val property val val part claim holds vcc vbb therefore rest proof assume vbb distinguish two cases vbb reachable vaa next show set vaa node vbb satisfy properties inductive hypothesis property note since vbb direct predecessor vaa gjp simple path gjp ends vbb involve vaa extended simple path vaa ends vaa thus max simple path gjp ending vbb vaa max simple path gjp ending vaa property vaa ensures max simple path gjp ending vaa turn implies max simple path gjp ending vbb vaa property holds vaa moreover exists path vbb vaa via edge property also holds set vaa node vbb property vbb vaa property val val already established vaa vbb moreover properties depend vaa thus apply inductive hypothesis conclude one following holds val val holds node vcc vaa path gjp starts vcc ends vbb holds node vcc true case claim holds since thus next assume case holds show part claim holds vcc vaa first show vcc vaa contradiction assume vcc vaa val val moreover since generates property property val val val val consequently val val moreover val val holds since monotonic holds since stable observations val val vaa cycle gjp val contradicts property thus vcc val val val val val conclude val val case vcc vaa since vaa part claim holds vcc vaa vbb reachable vaa gjp property vbb reachable gjp node otherwise vbb would also reachable gjp vaa via node thus simple path gjp ending vbb involves vaa node path extended simple path ending vaa property ensures max path gjp ending vaa implies max path gjp ending vbb thus property inductive hypothesis holds set node vbb moreover property holds vacuously properties already established vbb properties hold assumption thus apply inductive hypothesis vbb one following holds val val node vcc path gjp starts vcc ends vbb node vcc gjp clearly trivially false holds case claim holds since note simple path bounded number nodes turn bounded therefore claim ensures one following holds val node vcc occurs positive weight cycle value fact correp sponding vcc set next iteration main loop algorithm contains least one edge occur tnp node vcc size set tnp node vcc number nodes bounded number edges bounded thus number iterations main loop bounded first factor given claim second factor comes first case third factor comes second case fourth factor comes third case hence algorithm reaches fixpoint iterations main loop theorem stable program dataset fact algorithm decides time polynomial exponential krk proof partial correctness follows lemma termination follows lemma moreover number iterations main loop algorithm polynomially bounded hence kjk iteration bounded consequently lines algorithm require time exponential krk polynomial lemma moreover lines check line require time polynomial kjk hence finally argue check cycles line feasible time polynomial let graph obtained gjp 
negating weights path gjp corresponds path thus detecting whether node occurs gjp least one cycle reduces detecting whether node occurs negative cycle cycle negative sum weights solved polynomial time using example variant algorithm hougardy theorem stable program fact checking combined data complexity proof ime lower bound combined complexity ime lower bound data complexity inherited plain datalog dantsin ime upper bound data immediate theorem ime upper bound combined complexity note constants exponentially bounded kpk whereas krk krk hence theorem running algorithm gives decision procedure proposition checking stability program undecidable proof present reduction hilbert tenth problem determine whether given polynomial variables equation integer solutions polynomial assume without loss generality form let program containing following rule unary max predicate distinct unary ordinary numeric predicates note rule since variables occurs limit atom rule show stable integer solutions assume integer solutions grounding least one first two comparison atoms rule satisfied trivially stable since value propagation graph gjp contain edges assume substitution exists holds let following pseudointerpretations corresponding value propagation graphs clearly hvb however consequently program stable lemma rule applicable contains limit body atom variable occurs opt proof consider arbitrary rule applicable contains limit body atom occurring consider case max predicate variable occurs negative coefficient cases min predicate occurs positive coefficient analogous term form negative integer term containing moreover min predicate since applicable conjunction solution next show opt holds suffices argue conjunction solution let let grounding variable since moreover satisfies comparison atoms body min predicate also satisfies comparison atoms body hence solution following calculation implies claim lemma lemma rule limit body atom occurs dom applicable proof consider arbitrary stated lemma consider case max min predicate remaining cases analogous let body let moreover due solution solution next assume claim trivial holds definition opt opt well therefore lemma exist since min predicate opt rule opt moreover definition variable occurs negatively thus form ground product evaluating negative integer mention moreover opt exists grounding opt let substitution clearly satisfies object numeric atoms following opt furthermore already established implies following clearly imply required proposition program stable proof program condition definition follows lemma condition definition follows lemma proposition checking whether program accomplished pace proof let program check whether considering rule independently note first type consistency condition satisfied every rule numeric terms simplified much possible thus constants numeric terms simplified much possible violate first condition definition thus suffices check whether constants violate second third condition cases suffices consider one atom time limit head atom second condition comparison atom third condition consider one numeric term time third condition form terms constructed integers variables occurring limit atoms multiplication moreover consider variable occurring assumption occurs second condition definition need check limit body atom introducing different type head atom term grounded positive negative integers zero third condition need check limit body atom introducing min max term grounded positive negative integers dually case hence either case suffices check 
whether term evaluates positive integer negative integer zero next discuss checked logarithmic space let tki tji integer variable occurring limit atom assume without loss generality want check whether grounded positive integer case one following holds tji integers whose product positive product integers positive contains positive integer product integers positive contains negative integer total number variable occurrences even product integers negative contains negative integer total number variable occurrences odd product integers negative contains positive negative integers variable tji odd number occurrences conditions verified using constant number pointers binary variables clearly requires logarithmic space implies claim | 2 |
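The complexity argument for the algorithm over stable programs above reduces the question "does a node of the value propagation graph lie on a cycle of positive weight" to negative-cycle detection after negating the edge weights. The sketch below uses the standard textbook Bellman-Ford test rather than the specialised variant cited in the text; it is an illustration only, with ad hoc names.

```python
# Minimal sketch: detect whether a directed graph with integer edge weights contains a
# negative-weight cycle. Negating the weights of the value propagation graph turns the
# "positive-weight cycle" check into this test.

def has_negative_cycle(num_nodes, edges):
    """edges: list of (u, v, w) with 0 <= u, v < num_nodes."""
    dist = [0] * num_nodes            # start from 0 everywhere: finds a cycle anywhere in the graph
    for _ in range(num_nodes - 1):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            return False
    # one extra relaxation round: any further improvement witnesses a negative cycle
    return any(dist[u] + w < dist[v] for u, v, w in edges)

if __name__ == "__main__":
    # cycle 0 -> 1 -> 2 -> 0 with total weight +1; weights are negated before testing
    pos_edges = [(0, 1, 2), (1, 2, -3), (2, 0, 2)]
    neg_edges = [(u, v, -w) for u, v, w in pos_edges]
    print(has_negative_cycle(3, neg_edges))   # True: the original cycle has positive weight
```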
may necessity scheduling ori shmuel asaf cohen omer gurewitz department communication system engineering university negev email shmuelor department communication system engineering university negev email coasaf department communication system engineering university negev email gurewitz forward promising relaying scheme instead decoding single messages information relay decodes linear combinations simultaneously transmitted messages current literature includes several coding schemes results degrees freedom yet systems fixed number transmitters receivers unclear however behaves limit large number transmitters paper investigate performance regime specifically show number transmitters grows becomes degenerated sense relay prefers decode one strongest user instead linear combination transmitted codewords treating users noise moreover tends zero well makes scheduling necessary order maintain superior abilities provides indeed scheduling show linear combinations chosen decay even without state information transmitters without interference alignment ntroduction compute forward coding scheme enables receivers decode linear combinations transmitted messages exploiting broadcast nature wireless relay networks utilizes shared medium fact receiver received multiple transmissions simultaneously treat superposition signals decode linear combinations received messages specifically together use lattice coding obtained signal decoding considered linear combination transmitted messages due important characteristic lattice codes every linear combination codewords codeword however since wireless channel suffers fading received signals attenuated real integers attenuations factors hence received linear combination noisy receiver relay seeks set integer coefficients denoted vector close true channel coefficients problem elegantly associated diophantine approximation theory compared similar problem finding vector true channel one define different criteria goodness approximation example minimum distance vectors elements coefficients vector receiver transmitters addition vector must integer valued vector due fact represent coefficients integer linear combination codewords based theory one wishes find integer vector close terms real vector one must increase order small approximation error increase norm value leads significant penalty achievable rate receiver thus results tradeoff goodness approximation maximization rate scheme extended many directions mimo linear receivers integer forcing integration interference alignment scheduling mentioned works considered general setting number transmitters parameter system transmitters active times receiver able decode linear combination signals large number transmitters long transmitters comply achievable rates receiver still promise extent acceptable performance however work show number simultaneous transmitters great importance number relays fixed fact number considered solely parameter restriction since grows receiver prefer decode strongest user possible linear combinations make scheme degenerated sense relay chooses vector actually unit vector line identity matrix thus treating signals noise words linear combination chosen trivial furthermore show number transmitters grows scheme sumrate goes zero well thus one forced use users scheduling maintain superior abilities provide conclude paper optimistic view user scheduling improve gain believe done suitable matching linear combinations coding possibilities using simple round robin scheduling results fixed size systems lower 
bound thus show even simple scheduling policy system decay zero paper organized follows section system relay decodes linear combination original messages forward destination enough linear combinations destination able recover desired original messages sources main results following fig compute forward system model transmitters communicate shared medium relays model described section iii derive analytical expression probability choosing unit vector relay number users grows section depicts behaviour model section present advantage using scheduling along simple scheduling algorithm ystem model nown results consider network transmitters communicating single destination via relays model illustrated figure relays form layer transmitters destination transmitter communicate relays transmitter draws message equal probability prime size finite field fkp denotes finite field set elements message forwarded transmitter encoder fkp maps messages finite field codewords codeword subject power constraint kxl message rate transmitter defined length message measured bits normalized number channel uses log transmitter transmitter broadcasts codeword channel hence relay observes noisy linear combination transmitted signals channel hml hml real channel coefficients gaussian noise let hml denote vector channel coefficients relay assume relay knows channel vector receiving noisy linear combination relay selects scale coefficient integer coefficients vector aml attempts decode lattice point aml note messages different length allowed zero padding attain message result different rates transmitters theorem theorem awgn networks channel coefficient vectors coefficients vector following computation rate region achievable max log max log theorem theorem computation rate given theorem uniquely maximized choosing mmse coefficient htm khm results computation rate region htm kam khm note theorems real channels rate expressions complex channel twice theorems since relay decide linear combination decode coefficients vector optimal choice one maximizes achievable rate htm aopt arg max log khm remark coefficients vector coefficients vector plays significant role scheme dictates linear combination transmitted codewords relay wishes decode element signifies fact relay interested corresponding codeword starting certain number simultaneously transmitting users coefficients vector relay chooses always high probability unit vector means essentially treat users noise loose promised gain following lemma bounds search domain maximization problem lemma lemma given channel vector computation rate theorem zero coefficient vector satisfies kam khm problem finding optimal done exhaustive search small values however grows problem becomes prohibitively complex quickly fact becomes special case lattice reduction problem proved seen write maximization problem equivalent minimization problem aopt arg min khm htm regarded gram matrix certain lattice shortest basis vector one minimize problem also known shortest lattice vector problem slv known approximation algorithms due hardness notable lll algorithm exponential approximation factor grows size dimension however special lattices efficient algorithms exist polynomial complexity algorithm introduced special case finding best coefficient vector iii robability nit ector section examine coefficient vector single relay hence omit index expressions fig example magnitude elements different dimensions different values graphs depict single realization interpolated ease visualization matrix examining matrix 
one notice number transmitters grows diagonal elements grow fast relatively elements specifically diagonal element random variable minus multiplication two gaussian whereas elements multiplication two gaussian course grows former much higher expectation value compared later examples presented figure different dimensions clear even moderate number transmitters differences values diagonal elements significant consider quadric form wish minimize choice unit vector add one element diagonal large elements little effect function value compared diagonal elements therefore intuitively one would prefer little possible elements diagonal although elements reduce function value happen choose unit vector reminder section make argument formal minimization quadratic form note right term consists possible pairs total elements wish understand relay prefer unit vector vector specifically since function random channel compute probability unit vector minimizer given alternatively probability certain nontrivial minimize compared unit vector thus wish find probability unit vector size entry zero elsewhere integer valued vector unit vector note refers integer vector including vectors search domain kak lemma note also right left hand sides inequality equation dependent hence direct computation probability trivial still probability evaluated exactly noting angle mainly affects details theorem minimization function written optimality certain vector theorem scheme probability nontrivial vector coefficient vector aopt maximize achievable rate aopt minimize aopt comparing unit vector upper bounded cdf beta distribution eters kak note unit vector context work main consequence theorem following corollary number simultaneously transmitting users grows probability maximizer achievable rate goes zero specifically proofs given following discussion discussion simulation results corollary clarifies every number users grows probability vector maximizer achievable rate going note assumption arises naturally form paper regime along fact kak grantees positive figure depicts probability upper bound given equation simulation results analytic results well simulations rate decay one deduce even relatively small values simultaneously transmitting users relay prefer choose unit vector also one observe results analytic bound norm grows rate decay increases faster decay reflects increased penalty approximating real vector using integer valued vector proofs proof theorem based lemma lemma distribution kak squared cosine angle integer vector standard normal vector dimension beta proof let orthogonal rotation matrix basis vector kak define note standard normal vector since qiqt qqt kak khk considering equality cos expression represented independent ratio beta distribution note correspond degrees freedom simulation beta dist bound min dist bound integer vector unit vector log kak fig upper bounds given solid lines dashed lines simulation results dotted lines unit vector minimizer compared various values function simultaneously transmitting users proof theorem according equation follows since removed negative terms follows lemma kak bound probability given theorem consists complicated analytic function hence corollary includes simplified bound avoids use yet keeps nature result theorem proof corollary based following lemma lemma cdf kak lower bounded cdf minimum uniform random variables proof start assuming even case odd dealt later distribution true since larger yield lower probability due observation khk represented independent exponential 
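To make the claim above concrete, the following Monte-Carlo sketch (not from the paper) searches for the rate-maximising integer coefficient vector by brute force and records how often the winner is, up to sign, a unit vector. Restricting candidate entries to {-2, ..., 2} and the particular choices of P, K and the trial count are assumptions made purely to keep the example small; maximising the computation rate is implemented here as minimising the standard compute-and-forward quadratic form a^T G a with G = I - P h h^T / (1 + P ||h||^2).

```python
import itertools
import numpy as np

def candidate_vectors(K, max_entry=2):
    """All nonzero integer vectors with entries in {-max_entry, ..., max_entry} (a restricted
    search set, used here instead of the full norm-bounded search space for tractability)."""
    vals = range(-max_entry, max_entry + 1)
    return np.array([a for a in itertools.product(vals, repeat=K) if any(a)], dtype=float)

def frac_unit_vector_optimal(K, P=10.0, trials=200, seed=0):
    """Fraction of random channel draws for which the brute-force rate maximiser is +-e_i."""
    A = candidate_vectors(K)
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        h = rng.standard_normal(K)
        G = np.eye(K) - P * np.outer(h, h) / (1.0 + P * (h @ h))
        f = np.sum((A @ G) * A, axis=1)        # f(a) = a^T G a; maximising the rate = minimising f
        best = A[np.argmin(f)]
        if np.count_nonzero(best) == 1 and np.abs(best).max() == 1.0:
            hits += 1
    return hits / trials

if __name__ == "__main__":
    for K in (2, 3, 4, 5, 6):
        print(K, frac_unit_vector_optimal(K))
```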
note essentially sum independent pairs ratio distributed minimum uniform random variables lemma since ratio interpreted proportion waiting time first arrival arrival poisson process case odd increase term proof replacing resulting distribution similar minimum uniform random variables manner proof corollary kak follows lemmas respectively log kak following lemma shows simple property optimal coefficients vector shows relay interested one transmitter unit vector optimal coefficients vector transmitter strongest channel lemma channel vector size arg maxi optimal coefficients vector aopt maximize rate satisfy arg maxi well proof suppose exist optimal coefficients vector aopt satisfies considering rate expression show rearranging aopt vector identical higher rate attain let aopt except two first entries switched aopt aopt values vectors thus kaopt term affecting rate scalar multiplication aopt first note signs corresponding optimal coefficient aopt equal different case exist sign sign aopt sign sign aopt could possible due fact optimal coefficients vector maximize scalar multiplication therefore considering property aopt aopt means contradicting aopt rate improved choosing optimality specifically get long maximal value place maximal value always improve rate optimality possible vectors corollary refers probability unit vector minimize fixed next wish explore probability possible purpose clarity gives upper bound probability unit vector minimize compered certain possible integer coefficients vectors certain probability optimal vector unit vector union probabilities vector satisfies let define probability relay picked unit vector coefficient vector probability vector chosen polynomial time algorithm finding optimal coefficients vector given complexity result derives fact cardinality set vectors denoted considered upper bounded vector exist set zero probability one maximize rate shell note set thus wish compute note cardinality grows dimension easily upper bounded follows theorem scheme probability coefficients vector chosen maximize achievable rate aopt compared unit vector number simultaneously transmitting users grows zero lim proof lim lim lim lim set points average consecutive points mapped different coefficients vector lim sum rate lim khk lim lim lim lim trans users fig sum rate give case relays function number simultaneously transmitting users different values lim true since term inside sum maximized due multiplied divide eliminate limit term multiplied since goes zero follows strong law large numbers normalized sum converge probability one expected value one lastly define log result implies probability non unit vector rate maximizer decreasing exponentially zero number users grows ompute orward ate order relay able decode linear combination coefficients vector messages rates involved linear combination must within computation rate region messages corresponding entry coefficient vector non zero min aml hence sum rate system defined sum messages rates min aml following results previous subsections would like show number users grows system decreases zero well without scheduling users individual rate negligible true well strengthen necessity schedule users theorem grows sum rate tends zero lim min aml proof proof outline follows sum rate expression divided two parts describe two scenarios first case relay chooses unit vector coefficients vector second case vector chosen probabilities respectively show part goes zero upper bounding corresponding expressions complete proof given appendix 
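A quick numerical sanity check (not part of the paper) of the distributional fact used in the lemmas above: for a fixed integer vector a and h drawn from N(0, I_K), the squared cosine of the angle between a and h follows a Beta(1/2, (K-1)/2) law, whose mean is 1/K. Only the mean is checked below; the sample size and the test vectors are arbitrary choices.

```python
import numpy as np

def mean_sq_cosine(a, trials=200_000, seed=1):
    """Empirical mean of cos^2(angle(a, h)) over random h ~ N(0, I_K), compared to 1/K."""
    rng = np.random.default_rng(seed)
    a = np.asarray(a, dtype=float)
    K = a.size
    H = rng.standard_normal((trials, K))
    cos2 = (H @ a) ** 2 / (np.sum(a * a) * np.sum(H * H, axis=1))
    return cos2.mean(), 1.0 / K        # empirical mean vs. the Beta(1/2, (K-1)/2) mean

if __name__ == "__main__":
    for a in ([1, 0, 0, 0], [2, -1, 0, 1, 3, 0, 0, 1]):
        emp, theo = mean_sq_cosine(a)
        print(len(a), round(emp, 4), round(theo, 4))
```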
simulations different values found figure obvious large sumrate decreases hence fixed number relays use scheduling large number users degenerates choosing unit vectors treating users noise however simulations suggests peak small number transmitters explore next section cheduling ompute orward theorem suggest restriction number simultaneously transmitting users made order apply scheme systems large number sources scheduling smaller number users take place simple scheduling scheme schedule users round robin manner transmission users may transmit simultaneously value optimized yet thump rule one schedule users similar number relays transmission obtain going zero figure depicts scenario fact even higher sum rates obtained number scheduled users higher number relays number maximal figure achieved still clearly seen zero scheduled sources compared zero users transmit relay use fact one use existing results case equal number sources relays describe transmission schedule sum rate sum rate scheduling scheduling upper best lower trans users power dbd fig simulation results average per transmission number relays scheduling performed round robin manner every phase sources scheduled among transmitting users fig simulation results average compared upper lower bounds given respectively function transmission power according sum rate sources relays upper bounded linear combination transmitted signals thus becomes degenerate preferable apply scheduling much smaller group size show even simple scheduling policy goes zero would like future work proceed explore scheduling policies exploit decoding procedure min aml log log log coarse lower bound attained relays forced choose coefficients vectors relay chooses interference channel relay considers interferences sources noise even one min aml proof probabilities define partition channel vectors relays sees specifically define arg min min aml khm khm arg min zero simulation results bounds optimal coefficient vectors presented figure aforesaid one conclude scheduling users transmission worthwhile respect alternative permitting users transmit simultaneously course scheduling policy great impact performance increased example one schedules groups whose channel vectors suitable probability relay sees channel vector probability relay sees channel vector note channel vectors belongs respectively definitions sum rate written follows min aml onclusion future work work gives evidence necessity user scheduling scheme large number simultaneously transmitting users proved probability goes one regime optimal choice decoding relays decode user best channel instead log ppendix roof heorem min hem eml min hem aml treat two terms separately second term represents sum rate case optimal coefficients vectors may integer vector excluding unit vector first term case optimal coefficients vector show terms goes zero starting second term returning expectation khm aml min aml hem lim using markov jensen inequalities log khm khem means words possible squared norm values belong vectors define henorm probability belong henorm henorm satisfies due fact since may happen two vectors would squared norm value applying expectation upper bound khem due bound log following directly theorem log considering grows second term going zero lim therefore interested analyzing expectation squared norm values belonging channel vectors remember without constraints channel vector gaussian random vector squared norm follows distribution shell note single squared norm value belong several different gaussian random 
vectors hence define henorm set squared norm values belongs formally henorm max hem max khem log kam kam khem hem max khem max khem define log khm would like show henorm min hem min hem aml lim thus left first term lim min eil min eil khm lim min log eil khm khm lim khm khm log lim khm set unit vector rate expression upper bound best case scenario relay different unit vector finally clear grows realization argument log going eferences nazer gastpar harnessing interference structured codes ieee transactions information theory vol niesen whiting degrees freedom ieee transactions information theory vol zhan nazer gastpar erez mimo ieee international symposium information theory ieee zhan nazer erez gastpar linear receivers ieee transactions information theory vol sakzad harshan viterbo mimo linear receivers based lattice reduction wireless communications ieee transactions vol feng ionita nazer collision scheduling cellular networks information theory isit ieee international symposium ieee wei chen network coding design channels ieee transactions wireless communications vol hong caire strategies cooperative distributed antenna systems information theory ieee transactions vol sahraei gastpar finding best equation communication control computing allerton annual allerton conference ieee dadush peikert vempala enumerative lattice algorithms norm via coverings foundations computer science focs ieee annual symposium ieee alekhnovich khot kindler vishnoi hardness approximating closest vector problem annual ieee symposium foundations computer science focs ieee lenstra lenstra factoring polynomials rational coefficients mathematische annalen vol gama nguyen finding short lattice vectors within mordell inequality proceedings fortieth annual acm symposium theory computing acm conway sloane sphere packings lattices groups springer science business media vol jagannathan borst whiting modiano efficient scheduling systems modeling optimization mobile hoc wireless networks international symposium ieee | 7 |
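The scheduling discussion in the paper above argues for restricting the number of simultaneous transmitters and serving users in a round-robin fashion. The sketch below is a generic round-robin scheduler, not the paper's simulation code; the per-slot rate callback is a placeholder assumption standing in for whatever compute-and-forward rate computation is applied to the scheduled group.

```python
# Minimal round-robin scheduling sketch: out of N users, only L transmit in each slot, and
# groups are rotated cyclically so every user is served equally often. `per_slot_rate` is an
# assumed callback returning the achievable rate for the scheduled group.

def round_robin_groups(num_users, group_size):
    """Yield successive groups of `group_size` user indices, cycling over all users."""
    start = 0
    while True:
        yield [(start + i) % num_users for i in range(group_size)]
        start = (start + group_size) % num_users

def average_rate(num_users, group_size, num_slots, per_slot_rate):
    groups = round_robin_groups(num_users, group_size)
    total = sum(per_slot_rate(next(groups)) for _ in range(num_slots))
    return total / num_slots

if __name__ == "__main__":
    # toy rate model: every scheduled group contributes a constant rate of 1.0
    print(average_rate(num_users=12, group_size=4, num_slots=9, per_slot_rate=lambda g: 1.0))
```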
image fast upscaling technique longguang wang zaiping lin xinpu deng wei image misr aims fuse information image sequence compose one applied extensively many areas recently different single image sisr transitions multiple frames introduce additional information attaching significance fusion operator alleviate misr approaches inevitable projection reconstruction errors space space commonly tackled interpolation operator however crude interpolation may fit natural image generate annoying blurring artifacts especially fusion operator paper propose fast upscaling technique replace interpolation operator design upscaling filters space periodic respectively shuffle filter results derive final reconstruction errors space proposed fast upscaling technique reduce computational complexity upscaling operation utilizing shuffling operation avoid complex operation space also realize superior performance fewer blurring artifacts extensive experimental results demonstrate effectiveness efficiency proposed technique whilst combining proposed technique bilateral total variation btv regularization misr approach outperforms methods index upscaling technique bilateral total variation shuffling operation introduction limited technical manufacturing level resolution image may satisfied video surveillance medical imaging aerospace many fields high resolution images commonly required desired distinct image details device ccd cmos image sensors developing rapidly recent decades increasing demand image resolution still satisfied leading attempts steer clear sensor issues utilize computational imaging improve spatial resolution namely serving typical inverse problem aims recover missing image details image degradation process underdetermined requiring additional information alleviate single image sisr lack observation information leads attempts exploit additional information learn natural images many approaches proposed image misr transitions multiple observations provide wang college electronic science engineering national university defense technology changsha china lin deng also college electronic science engineering national university defense technology changsha china ing available information therefore approach mainly concentrated derive high resolution image maintaining global consistency intuitive natural concerning misr approaches extensive works put forward focusing design regularization realize favorable results tikhonov regularization method representative method introduces smoothness constraints suppress noise results loss detailed information edge regions realize edge preserving total variation operator introduced regularization term however leads deterioration smoothness local flat region motivated bilateral filter farsiu proposed bilateral total variation btv operator measured norm integrates bilateral filter realizes superior performance robustness due performance simplicity btv improvement attracted extensive investigation proposed locally adaptive bilateral total variation labtv operator measured neighborhood homogeneous measurement realizing locally adaptive regularization among approaches maintain global consistency multiple observations reconstruction errors commonly integrated cost function penalize discrepancy reconstructed image observations within iterative process inevitable projection reconstruction error space space usually tackled interpolation operator simplicity however crude operation may introduce additional errors lead deteriorated convergence performance especially fusion 
operation misr paper propose fast upscaling technique replace interpolation operation framework firstly unfold degradation model analyze underlying contributions periodic reconstruction error space design upscaling filters correspondingly secondly filter results utilizing designed upscaling filters shuffled derive reconstruction errors space finally reconstruction errors cooperated regularization term modify image iteratively convergence extensive experiments conducted demonstrate effectiveness efficiency proposed upscaling technique besides combining proposed technique btv regularization misr approach realizes performance rest paper organized follows section mainly formulates problem image section iii presents proposed upscaling technique detail section performs extensive experiments compared approaches conclusions drawn section original image geometric wrap fig blurring downsample add noise sketch degradation model image problem formulation degradation model inverse process image degradation reconstruction tightly dependent degradation model many degrading factors existing like atmospheric turbulence optical blurring relative motion sampling process degradation model images formulated represent image image respectively serve decimation matrix blurring operator geometric warp matrix respectively additional gaussian noise note although complex motions may common real sequences represented simple parametric form many works tend address problem global translational displacements multiple frames serving fundamental issue still focus paper generally assuming images generated condition derive following model represent decimation matrix blurring operator respectively images degradation model illustrated fig process bayesian framework reconstruction equivalent estimation image given images maximum posteriori map estimator extensively utilized solve probabilistic maximization problem equivalent minimization reconstruction errors derived insufficient information given image sequence reconstructing original image underdetermined problem solve problem regularization commonly introduced priori knowledge obtain stable solution rewritten regularization term image serves regularization parameter weighting reconstruction errors regularization cost assuming decimation matrix blurring operator geometric warp matrix already known minimization problem solved utilizing steepest descent approach estimators image iteration respectively learning rate representing pace approach optimal iterations paper derivation displacements blur kernel consideration assume blur kernel already known utilize optical flow method estimate underlying displacements iii image fast upscaling technique section first present proposed fast upscaling technique introduce motivation formulation detail theoretical analysis computational complexity convergence integrate proposed upscaling technique btv regularization construct overall misr framework upscaling technique motivation see projecting reconstruction errors space space required inference interpolation operator commonly plays main role upscaling operator deblurring operator inverse translation operator performed space lacking theoretical basis crude interpolation may introduce additional errors leading blurring artifacts therefore serves fundamental operator requires small stepsize enough iterations alleviate deterioration adds computational complexity shi efficient convolutional neural network proposed array upscaling filters utilized corporate shuffling operator upscale final feature 
maps output located end network demonstrates increasing resolution image image enhancement increase computational complexity besides commonly used interpolation methods bring additional information solve reconstruction problem inspired unfold degradation model analyze underlying contributions periodic reconstruction error space design similar array upscaling filters paper propose upscaling technique perform fast efficient upscaling operation serving direct bridge reconstruction errors space space formulation analyze degradation model shown perspective image assuming blurring operator limited region odd symmetry blurring kernel space upscaling factor determined original decimal geometric wrap translation blurring blurring downsample decimation fig degradation process respect different namely decimation operator limited region space concerning translational displacements images displacements considered displacements bring additional information without loss generality positive displacements taken consideration geometric wrap operator limited region illustrate interaction concatenation operators set derive degradation model perspective image shown fig degradation process unfolded shown fig seen different ranges influence space correspond different inspires structure upscaling filters concerning periodic utilizing differences influence ranges parameters including displacements blurring kernel upscaling factor determined overall degradation process underlying contributions periodic space derived remembering projection reconstruction errors space space structure upscaling filters utilizing underlying contributions periodic realize upscaling reconstruction errors space within probabilistic framework upscaling operator equivalent optimal estimation problem reconstruction error pixel space pixel space respectively serves influence range space pixel space number pixels space assuming serves contribution pixel space pixel space minimization problem equivalent derived utilizing greedy strategy number pixels within influence ranges different limited region case fig utilize norm simplicity solution computed considering influence range corresponding contributions dependent namely pixels share identical influence range contribution distribution intuitively separate upscaling operator respect different rewrite identical convolution form due global consistent process represents reconstruction error map space represents reconstruction error map space namely serves contribution distribution concerning regarding norm normalization constant integrate derive normalized contribution distribution filter masks way upscaling operator implemented convolution operator realizes favorable efficiency reconstruction errors respect ranged space derived separately shuffling operator introduced rearrange elements separate error maps complete error map space shown fig utilizing proposed upscaling technique evade interpolation operator may introduce additional errors design filter masks according contribution distribution concerning ranged process error map space separately finally shuffling operator implemented derive final error map space processes shuffling reconstruction error map reconstruction error map reconstruction error map fig upscaling technique space namely processing results mapped directly corresponding space without intermediate operations upscaling technique realize superior efficiency effectiveness demonstrated following analysis section theoretical analysis section theoretical analysis respect computational 
complexity convergence carried respectively attempt illustrate superiority proposed upscaling technique theoretically computational complexity conventional upscaling technique reconstruction errors space commonly projected space interpolation operator first processed deblurring operator inverse translation operator way deblurring operator inverse translation operator performed space adds computational complexity although complexities upscaling technique proposed upscaling technique order number pixels space computation amounts differ greatly assuming limited region space respectively upscaling technique bicubic interpolation commonly utilized performing weighted sum neighboring pixels space afterwards deblurring operator inverse translation operator performed weighted sum neighboring pixels respectively space proposed upscaling technique upscaling operator performed weighted sum neighboring pixels space remarkably scent fixed stepsize commonly utilized upscaling technique requires small stepsize enough iterations approach optimal upscaling technique introduces additional errors deviation descent direction makes convergence process greatly time consuming methods typically tend converge fewer iterations computation hessian matrix iteration required leading expensive computational cost analyze upscaling technique theoretically regarded variation simplification method realize superior convergence remembering minimization problem unfold degradation model rewritten represent vectorized image image respectively serves dictionary arranged lexicographic order consists atoms analyzed different correspond different influence ranges contribution distributions utilize characteristic construct overcomplete dictionary shown fig newton method inference written dictionary hard manipulate newton method directly utilized general considering commonly operator computation second derivation relatively difficult besides regularization parameter usually small simplify separate term reduces computational complexity especially upscaling factor unfold symmetric matrix derive convergence misr approaches steepest see fig atom highly sparse inverse operation hard manipulate regional namely equals zero except take diagonal elements consideration namely represents neighborhood corresponding regard diagonal matrix ignoring pixel space taking consideration entries rewrite elements rearranged placing relative atoms closer derive mate diagonal matrix namely entries rewrite way equal zeros except diagonal ones ones derive dictionary akt fig procedure dictionary push atom backwards corresponding image rewritten convolution form represents contribution distribution map corresponding atom see performs identical concerning reconstruction error space illustrating proposed upscaling technique performs variation simplification newton method realize superior convergence atom performs fication constant less considering commonly small magnification effect regularization term ignored derive technique technique mse interpolation strategy however descent direction utilizing upscaling technique relatively accurate therefore convergence faster stable demonstrate superior convergence utilize tikhonov regularization without loss generality compare convergence process upscaling technique comparison convergence process shown fig btv regularization error map error map dhfk considering upscaling technique performs approximate newton method simplifications may introduce additional errors therefore also applied similar learning rate stable 
convergence upscaling technique btv regularization construct overall misr framework due performance simplicity btv become one commonly applied regularization process therefore utilize btv misr framework corporate proposed upscaling technique overall framework illustrated fig summarized algorithm fig overall misr framework algorithm misr utilizing upscaling technique input images blurring kernel upscaling factor initialize select target image example utilize bicubic interpolation derive initial image estimate translational displacements target image images loop compute error map space respectively perform upscaling technique derive error map space compute btv regularization tion update derive according output reconstructed image experimental results iteration fig comparison convergence process see form fig technique converges within around iterations technique requires iterations converge demonstrating superior convergence upscaling technique misr framework section integrate proposed section extensive experimental results presented demonstrate effectiveness efficiency proposed upscaling technique first perform experiments demonstrate effectiveness upscaling technique equipping various misr methods proposed misr framework compared algorithms described degradation model degraded images generated image parallel translations blurring downsampling addition noises experiments translational displacements randomly set vertical horizontal shifts randomly sampled uniform distribution blurring tikhonov btv labtv fig comparison methods utilizing technique baselines image baby tikhonov btv labtv fig comparison methods utilizing technique baselines image butterfly operator realized utilizing gaussian kernel standard deviation geometric wrapping blurring operation images downsampled factor finally gaussian noise standard deviation added scenario use images reconstruct image misr approaches select first one target image without loss generality suppose blurring kernel already given human vision sensitive brightness changes methods mented brightness channel color channels upscaled bicubic interpolation color images experiments coded matlab running workstation septuple core ghz cpus memory quantitative analysis comparison reconstruction performance ratio psnr mean structure similarity ssim utilized metrics defined senting dynamics pixel value generally set respectively mean value image respectively standard variance respectively two stabilizing constants evaluation proposed upscaling technique original btv validate effectiveness efficiency pro bicubic kim yang labtv miscsr proposed fig reconstruction results image bridge ranged methods posed upscaling technique first select four representative misr method consisting tikhonov btv labtv method baseline methods apply technique replace technique pipelines note example conduct experiments dataset compare performance correspondingly visual comparison exhibited fig quantitative results presented table see fig compared corresponding baseline methods methods equipped technique generate sharper edges fine details effectively alleviate blurring effects fewer artifacts realize superior visual quality demonstrates effectiveness technique quantitative results shown table see proposed technique remarkably improves reconstruction performance respect psnr ssim images whilst accelerating misr process psnr values improved around average ssim values also increased around concerning computational complexity running time equipped methods shortened around average practicability 
greatly enhanced table comparison psnr ssim runnding time mean perfromance experiments presented performance improvement equipped proposed technique shown brackets red bold metric tikhonov btv labtv psnr baby ssim time psnr bird ssim time psnr butterfly ssim time psnr head ssim time psnr woman ssim time psnr average ssim time comparison methods demonstrate effectiveness efficiency proposed misr framework seven methods selected compare work bicubic interpolation serves simplest approach selected baseline method serving cited method field misr performance simplicity farsiu btv method selected besides variation labtv method popularity approaches increases also select kato sparse coding method misr denote miscsr one method field addition kim yang methods considered sisr methods also introduced comparison fair comparisons source codes kim yang methods released authors homepages original btv directly implemented experiments available bicubic kim yang labtv miscsr proposed fig reconstruction results image commic ranged methods original btv bicubic kim labtv miscsr yang proposed fig reconstruction results image foreman ranged methods codes methods implement according instructions performance may differ original note kato miscsr method utilized comparison upscaling factor instructions presented implementation details parameter settings condition extensive experiments conducted dataset reconstruction results exhibited figs quantitative results presented table iii scenario bicubic method sisr methods target image first image utilized reconstruction misr methods registration procedure adopted note blurring kernel posed given approaches derivation focus paper detailed parameter settings misr framework summarized table table parameter settings proposed upscaling technique parameters values tolerance threshold maximum iteration regularization parameter step size perspective visual quality sisr methods kim method serving superior approach already recovered major structures scene however original btv tends oversmooth fine details misr methods bicubic kim yang labtv miscsr proposed fig reconstruction results image girl ranged methods blurring artifacts btv labtv methods commonly noticeable especially within edge texture regions although sparse representation alleviates blurring effect miscsr ragged edges still visible comparison proposed misr approach produces sharper clearer images fine details fewer artifacts quantitative results exhibited table iii see approach outperforms methods images respect reconstruction performance efficiency compared kim method serving superior sisr method psnr value approach improved average processing efficiency times faster compared miscsr method known misr method approach improves psnr value runs nearly times faster comparing approach btv method superiority proposed upscaling technique extensively validated higher psnr values shorter running time leave bicubic method see approach performs effective efficient one among comparing methods practical applications table iii comparison psnr running time range methods mimr methods mean perfromance experiments presented standard derivation shown brackets best results shown red bold baboon barbara bridge coastguard comic face flowers foreman lenna man monarch pepper ppt zebra average bicubic psnr time kim psnr time psnr yang time btv psnr demonstrate effectiveness proposed misr approach experiments conducted dataset concerning ranged upscaling factors noise intensities results presented table table shown table upscaling factor 
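For reference, the two quality metrics used in the comparisons above (PSNR and SSIM) can be written compactly in numpy. Note that practical SSIM is usually computed over local windows and averaged (e.g. via skimage); the version below is the single-window formula with the usual stabilizing constants, assuming an 8-bit dynamic range of 255.

```python
# Minimal numpy versions of the evaluation metrics referenced above.
import numpy as np

def psnr(x, y, L=255.0):
    mse = np.mean((x.astype(float) - y.astype(float)) ** 2)
    return 10.0 * np.log10(L ** 2 / mse)

def ssim_global(x, y, L=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM; windowed/averaged SSIM is the usual reported value."""
    x, y = x.astype(float), y.astype(float)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2        # stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```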
increased effects multiple observations gradually erased due growing leading performances misr approaches deteriorating severely even inferior sisr approaches several conditions although approach also undergoes deterioration time labtv psnr time miscsr psnr time proposed psnr time still performs superiorly highest psnr values conditions table see approach performs strong robustness tolerance noises methods sensitive noises deteriorate noise intensity increased proposed approach still outperforms average even condition noise intensity proposed approach performs compared kim method labtv method respectively original btv bicubic kim yang labtv miscsr proposed fig reconstruction results image lenna ranged methods table magnification performance terms psnr dataset mimr methods mean perfromance experiments presented standard derivation shown brackets best results shown red bold baby bird butterfly head woman average baby bird butterfly head woman average baby bird butterfly head woman average upscaling factor bicubic kim yang btv labtv miscsr proposed table noise intensity performance terms psnr dataset mimr methods mean perfromance experiments presented standard derivation shown brackets best results shown red bold baby bird butterfly head woman average baby bird butterfly head woman average baby bird butterfly head woman average noise intensity bicubic kim yang btv labtv miscsr proposed conclusions paper propose fast upscaling technique replace interpolation operator misr approaches unfold degradation model perspective image find influence ranges underlying contributions periodic vary periodically inspires design upscaling filters periodic respectively utilize shuffling operator realize effective fusion operation equipped upscaling technique remarkable improvements realized respect reconstruction performance efficiency methods besides cooperation technique btv regularization outperforms methods demonstrated extensive experiments references chandran fookes lin sridharan investigation optical flow surveillance applications aprs workshop digital image computing vol zhang zhang shen reconstruction algorithm surveillance images signal processing vol greenspan kiryati peled mri proc ieee isbi shi caballero ledig zhuang bai bhatia marvao dawes oregan rueckert cardiac image global correspondence using patchmatch proc int conf medical image computing computer assisted intervention miccai trinh luong dibos rocchisani pham nguyen novel method denoising medical images ieee trans image vol apr tatem lewis atkinson nixon target identification remotely sensed images using hopfield neural network ieee trans geosci rem vol apr thornton atkinson holland mapping rural land cover objects fine spatial resolution satellite sensor imagery using international journal remote sensing vol makantasis karantzalos doulamis doulamis deep supervised learning hyperspectral data classification convolutional neural networks proc ieee igarss jul tatem lewis atkinson nixon land cover pattern prediction using hopfield neural network remote sens vol goto fukuoka nagashima hirano sakurai system proc int conf pattern recognition zhang gao tao dictionary single image proc cvpr providence jun yang yang fast direct simple functions proc iccv timofte gool anchored neighborhood regression fast proc iccv dong loy tang learning deep convolutional network image proc eccv yang wang zhang wang neighbor embedding image super resolution sparse tensor ieee trans image vol jul convolutional sparse coding image proc iccv nguyen milanfar golub 
computationally efficient superresolution image reconstruction algorithm ieee trans image vol zhang lam wong application tikhonov regularization reconstruction brain mri image lecture notes computer science vol shen lam zhang total variation regularization based reconstruction algorithm digital video eurasip adv signal process vol babacan molina katsaggelos parameter estimation image restoration using variational distribution approximation ieee trans image vol apr yuan zhang shen multiframe employing spatially weighted total variation model ieee syst video vol shen zhang huang map approach joint motion estimation segmentation super resolution ieee trans image vol molina mateos katsaggelos vega bayesian multichannel image restoration using compound random fields ieee trans image vol humblot superresolution using hidden markov model bayesian detection estimation framework eurasip appl signal vol article farsiu robinson elad milanfar fast robust superresolution ieee trans image vol purkait chanda super resolution image reconstruction bregman iteration using morphologic regularization ieee trans image vol shi caballero huszar single image video using efficient convolutional neural network proc ieee cvpr protter elad takeda milanfar generalizing reconstruction ieee trans image vol mar takeda milanfar protter elad superresolution without explicit subpixel motion estimation ieee trans image vol liu sun bayesian adaptive video super resolution ieee trans image vol gao ning image method signal processing vol kato hino murata image super resolution based sparse coding neural networks vol yang wright huang image sparse representation raw image patches ieee trans image vol kim kwon using sparse regression natural image prior ieee trans pattern anal mach vol | 1 |
jan graph modules commutative rings habibollah shokoufeh habibi abstract let module commutative ring paper continue study graph introduced zariski modules commutative rings comm undirected graph nonzero submodule vertex exists nonzero proper submodule product defined two distinct vertices adjacent prove tree either star graph path order latter case simple module module unique submodule moreover prove cyclic module least three minimal prime submodules every cyclic module introduction throughout paper commutative ring identity unital resp mean submodule resp proper submodule define simply denote annr simply ann said faithful ann let product denoted defined see many papers assigning graphs rings modules see example graph introduced studied graph whose vertices ideals nonzero annihilators two vertices adjacent later modified studied many authors see generalized idea submodules defined undirected graph called graph vertices exists graph distinct vertices adjacent let subgraph vertices ann exists submodule ann note vertex exists date april mathematics subject classification primary secondary key words phrases graph cyclic module minimal prime submodule chromatic clique number habibollah shokoufeh habibi nonzero proper submodule ann every nonzero submodule vertex work continue studying generalize results related graph obtained graph prime submodule submodule whenever prime radical radm simply rad defined intersection prime submodules containing case contained prime submodule radm defined notations denote set set nilpotent elements set minimal prime submodules respectively also simply set zero divisors set clique graph complete subgraph supremum sizes cliques denoted called clique number let denote chromatic number graph minimal number colors needed color vertices two adjacent vertices color obviously section paper prove tree either star graph path case simple module module unique submodule see theorem next study bipartite graphs modules artinian rings see theorem moreover give relations existence cycles graph cyclic module number minimal prime submodules see theorem corollary let introduce graphical notions denotations used follows graph ordered triple consisting nonempty set vertices set edges incident function associates unordered pair distinct vertices edge edge joins say adjacent path graph finite sequence vertices adjacent denote existing edge graph subgraph restriction bipartite graph graph whose vertices divided two disjoint sets every edge connects vertex one independent sets complete bipartite graph vertices denoted size respectively connects every vertex vertices note graph called star graph vertex singleton partition called center graph denote set vertices adjacent least one vertex every vertex size denoted vertices degree called simply regular independent set subset vertices graph vertices adjacent denote path cycle order respectively let two graphs graph homomorphism mapping every edge edge retract subgraph exists homomorphism graph modules every vertex homomorphism called retract graph homomorphism see graph ideal said nil consist nilpotent elements said nilpotent natural number proposition suppose idempotent element following statements every submodule submodule submodules prime submodules prime submodules respectively proof clear need following lemmas lemma see proposition let ideals following statements equivalent abelian group direct sum exist pairwise orthogonal central idempotents rei lemma see theorem let nil ideal idempotent exists idempotent lemma see lemma let minimal 
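As a small computational illustration of the graph defined above, consider the special case M = R = Z_n, where the nonzero proper submodules are the ideals dZ_n for proper divisors d of n and the paper's product of submodules reduces to the ordinary ideal product, which is zero exactly when n divides the product of the generators. The sketch below builds that graph; it is an illustration of this special case only, not the paper's general construction.

```python
# Illustration: the annihilating-submodule graph in the special case
# M = R = Z_n. Vertices: proper nonzero ideals dZ_n (1 < d < n, d | n) that are
# killed by some proper nonzero submodule; edges: distinct d, e with n | d*e.
from itertools import combinations

def ag_zn(n):
    ideals = [d for d in range(2, n) if n % d == 0]
    vertices = [d for d in ideals if any(d * e % n == 0 for e in ideals)]
    edges = [(d, e) for d, e in combinations(vertices, 2) if d * e % n == 0]
    return vertices, edges

# Example: for n = 12 the vertices are {2, 3, 4, 6} and the edges are
# (2, 6), (3, 4), (4, 6) -- a path on four vertices, in line with the
# tree / star / path discussion in the text.
print(ag_zn(12))
```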
submodule let ann nil ideal idempotent proposition let artinian ring let finitely generated module every nonzero proper submodule vertex proof let submodule exists maximal submodule hence since artinian ring minimal prime ideal containing ann thus ass follows therefore desired lemma let idempotent element graph one following statements holds prime habibollah shokoufeh habibi one prime module one module unique submodule moreover cycle either simple module module unique submodule prime module proof none prime module exist annri form triangle contradiction thus without loss generality one assume prime module prove one vertex contrary suppose edge therefore form triangle contradiction vertex prime module part occurs exactly one vertex theorem proposition obtain part suppose cycle none simple module choose submodules form cycle contradiction converse trivial theorem tree either star graph moreover simple module module unique submodule proof vertex exists one vertex ann since empty subgraph hence star graph therefore may assume vertex suppose star graph least four vertices obviously two adjacent vertices let since tree theorem diam every edge form consider following claims claim either pick since tree vertex knu contradiction contradiction respectively claim proved without loss generality suppose clearly claim claim show minimal submodule see first show every assume contradiction thus induced subgraph contradiction implies minimal submodule obtain induced subgraph contradiction thus desired graph modules claim every every let since vertex either contradiction hence claim proved claim complete claim showing exactly two minimal submodules let submodule properly contained since either claim contradiction hence minimal submodule suppose another minimal submodule since minimal submodules deduce contradiction claim proved claims minimal submodule lemma idempotent lemma deduce either star graph conversely assume exactly four vertices thus vertices theorem let artinian ring bipartite graph either star graph moreover simple module module unique submodule proof first suppose local ring hence theorem artinian local ring lemma proposition since bipartite graph hence prime module easy see vector space semisimple hence lemma theorem deduce either isomorphic assume artinian local ring let unique maximal ideal natural number clearly adjacent every vertex star graph proposition assume ann nil ideal finite bipartite graph either star graph regular graph finite degree complete graph proof vertex one vertex ann since empty subgraph star graph thus may assume vertex hence theorem prime module therefore theorem follows artinian ring local ring exists natural number clearly adjacent every vertex star graph otherwise theorem lemma exist pairwise orthogonal central idempotents modulo ann lemma easy see habibollah shokoufeh habibi idempotent element lemma implies star graph vertex since regular graph complete graph hence may assume vertex prime module hence ann easy see set infinite infinite degree contradiction thus finite length since finite length artinian ring proof part one submodule deg deg contradicts regularity hence simple module similarly simple module suppose artinian local ring seen part exists natural number adjacent vertices deduce complete graph let multiplicatively closed subset subset said every subset said saturated following condition satisfied whenever need following result due theorem see theorem let cyclic module let subset relative multiplicatively closed subset submodule maximal saturated ideal 
maximal prime theorem cyclic module ann nil ideal contains cycle proof tree theorem either star graph simple module unique submodule latter case impossible suppose star graph center star clearly one assume minimal submodule lemma exists idempotent proposition lemma conclude contradiction hence thus one may assume suppose two distinct minimal prime submodules since ann hence choose set empty maximal element say hence theorem prime submodule since contradiction therefore exist positive integer consider submodules clear contradiction thus form triangle contradiction hence contains cycle graph modules theorem suppose cyclic module radm ann nil ideal either contains cycle proof similar argument proof theorem shows either contains cycle simple module module unique submodule latter case implies note radf simple module prime module recall said semiprime submodule every ideal every submodule implies called semiprime module semiprime submodule every intersection prime submodules semiprime submodule see theorem let maltiplicatively closed subset containing zerodivisors finitely generated module moreover retract semiprime module particular whenever semiprime module proof consider vertex map clearly implies thus surjective hence follows assume semiprime module show without loss generality assume vertex contrary suppose contradiction shows map graph homomorphism vertex choice fixed vertex retract graph homomorphism clearly implies assumption corollary finitely generated semiprime module since chromatic number graph least positive integer exists retract homomorphism following corollaries follow directly proof theorem corollary let maltiplicatively closed subset containing finitely generated module moreover semiprime module corollary finitely generated semiprime module eben matlis proposition proved finite set distinct minimal prime ideals rpn result generalized finitely generated multiplication modules theorem use generalization cyclic module theorem see theorem let finite set distinct minimal prime submodules finitely generated multiplication module mpn habibollah shokoufeh habibi theorem let cyclic module finite set distinct minimal prime submodules exists clique size proof let cyclic module since multiplication module theorem exists isomorphism mpn let position consider principal submodules module lemma proposition product submodules rpi rpj zero since isomorphism exists tij tij every let tij show tnn clique size every rtni rtnj rtnj rtni rtnj tri tri rtnj since tni deduce tni distinct submodules corollary every cyclic module theorem let cyclic module radm proof corollary nothing prove thus suppose positive integer let theorem mpn clearly show corollary rpi prime submodule since radm every mpi simple rpi define map min since mpi simple module proper vertex coloring thus since radm easy see theorem corollary obtain desired theorem every module proof first assertion use technique theorem let contrary assume bipartite contains odd cycle suppose shortest odd cycle natural number clearly since shortest odd cycle vertex consider vertices implies odd cycle contradiction thus contradiction hence easy check form triangle contradiction converse clear radical defined intersection prime ideals containing next theorem recall finitely denoted stating generated module rad see proposition also know finitely generated module graph modules every prime ideal ann exists prime submodule see theorem theorem assume finitely generated module ann nil ideal graph star graph proof suppose first unique minimal prime submodule 
since vertex hence exist elements easy see vertices since minimal submodule without loss generality assume minimal submodule minimal submodule exists claim unique minimal submodule contrary suppose another minimal submodule either lemma idempotent element hence implies contradiction contradiction unique minimal submodule let prove bipartite graph parts may assume independent set claim one end every edge adjacent another end contains prove suppose edge since minimality either latter case follows hence plain proved gives independent set since every vertex contains vertices adjacent theorem since one end every edge adjacent another end contains also deduce every vertex contains every vertex contains note one end edge contained since minimal submodule star graph center suppose claim since suffices show see let prove clearly rrm done thus rrm rsm theorem note ann rad therefore unit contradiction required since star graph center remains show suppose consider vertex since every vertex contains yields pick since one find element minimal submodule since unique minimal submodule thus contradiction done hence star graph whose center desired habibollah shokoufeh habibi corollary assume finitely generated module ann nil ideal bipartite graph star graph references aalipour akbari nikandish nikmehr shaveisi coloring graph commutative ring discrete mathematics minimal prime ideals cycles graphs rocky mountain math vol aalipour akbari behboodi nikandish nikmehr shaveisi classication graphs commutative rings algebra colloquium anderson livingston graph commutative ring algebra springer anderson fuller rings categories modules new farshadifar product dual product submodules far east math sci habibi zariski modules commutative rings comm algebra graph modules commutative rings arxiv submitted atiyah macdonald introduction commutative algebra beck coloring commutative rings behboodi rakeei graph commutative rings algebra appl vol lam first course rings springer verlag new york prime submodules modules comment math univ pauli submodules modules mathematica japonica spectra modules comm algebra unions prime submodules houston journal modules noetherian spectrum comm algebra matlis minimal prime spectrum reduced ring illinois math reinard graph theory grad texts math springer samei reduced multiplication modules math sci tavallaee varmazyar submodules modules iust international journal engineering science department pure mathematics faculty mathematical sciences university guilan box rasht iran ansari department pure mathematics faculty mathematical sciences university guilan box rasht iran | 0 |
feb inference additively separable models set conditioning variables damian kozbur university department economics email abstract paper studies nonparametric series estimation inference effect single variable interest outcome presence potentially conditioning variables context additively separable model model highdimensional sense series approximating functions terms sample size thereby allowing potentially many measured characteristics model required approximately sparse approximated using small subset series terms whose identities unknown paper proposes estimation inference method called double selection generalization selection standard rates convergence asymptotic normality estimator shown hold uniformly large class sparse data generating processes simulation study illustrates finite sample estimation properties proposed estimator coverage properties corresponding confidence intervals finally empirical application estimating convergence gdp crosssection demonstrates practical implementation proposed method key words additive nonparametric models sparse regression inference imperfect model selection jel codes introduction nonparametric estimation econometrics statistics useful applications theory provide functional forms relations relevant observed variables many problems primary quantities interest computed conditional expectation function outcome variable given regressor interest date first version september version february correspondence department economics university thank christian hansen tim conley matt taddy azeem shaikh dan nguyen dan zhou emily oster martin schonger eric floyd kelly reeve seminar participants university western ontario university pennsylvania rutgers university monash university center law economics eth zurich helpful comments gratefully acknowledge financial support eth postdoctoral fellowship damian kozbur case nonparametric estimation flexible means estimating unknown data minimal assumptions econometric models however also important take account conditioning information given variables failing properly control variables lead incorrect estimates effects conditioning information important problem necessary replace simple objective learning conditional expectation function new objective learning family conditional expectation functions indexed paper studies series inference particular case characterized following two main features additively separable meaning functions conditioning variables observable additively separable models convenient many economic problems ceteris paribus effect changing completely described addition major statistical advantage restricting additively separable models individual components estimated faster rates joint estimation family therefore imposing additive separability contexts assuption justified helpful motivation studying framework allow researchers substantial flexibility modeling conditioning information primary object interest framework allows analysis particularly rich big datasets large number conditioning paper formally defined total number terms series expansion allow many possibilities types variables functions covered example approximately linear sense denoting jth component vector asymptotic valid generally also moderate estimation nonparametric regression problems involves least squares estimation performed series expansion regressor variables series estimation described fully section faster rates separable models exist kernel methods marginal integration methods series based estimators general review issues see 
example textbook additional discussion literature additively separable models provided later introduction many cases larger set covariates lend additional credibility conditional exogeneity assumptions see discussion additively separable dimension sufficiently expressive series expansion must many terms simple consequence curse dimensionality basic mechanical outline estimation inference strategy presented paper proceeds following steps consider approximating dictionaries equivalently series expansions terms given pkk linear combinations used approximating addition consider approximating dictionaries terms qll approximating possibly reduce number series terms way continues allow robust inference requires multiple model selection steps proceed traditional series estimation inference techniques reduced dictionaries strategies form commonly referred selection inference strategies primary targets inference considered paper functionals specifically let leading examples functionals include average derivative difference two distinct interest main contribution paper construction confidence sets cover confidence level moreover construction valid uniformly large class data generating processes allow highdimensional current estimation techniques provide researchers useful tools dimension reduction dealing datasets number parameters exceeds sample techniques require additional structure imposed problem hand order ensure good performance one common structure reliable techniques exist sparsity sparsity means number nonzero parameters small relative sample size setting common techniques include techniques like lasso techniques include dantzig selector scad forward stepwise regression literature nonparametric estimation additively separable models well developed mentioned additively separable models useful since models extremely flexible thus overparameterized likely overfit data leading poor inference sample performance therefore many covariates present regularization necessary lasso shrinkage procedure estimates regression coefficients minimizing quadratic loss function plus penalty size coefficient nature penalty gives lasso favorable property many parameter values set identically zero thus lasso also used model selection technique fits ordinary least squares regression variables estimated lasso coefficients theoretical simulation results performance two methods see among many damian kozbur impose intuitive restriction class models considered result provide higher quality estimates early study additively separable models initiated describe backfitting techniques propose marginal integration methods kernel context consider estimation derivatives components additive models develop local partitioned regression applied generally additive model terms estimation series estimators particularly easy use estimating additively separable models since series terms allocated respective model components general large sample properties series estimators derived many references relative kernel estimation series estimators simpler implement often require stronger support conditions many additional references kernel series based estimation found reference text finally consider estimation additively separable models setting additive components authors propose analyze series estimation approach penalty penalize different additive components paper therefore studies similar setting one constructs valid procedure forming confidence intervals rather focusing estimation error main challenge statistical inference construction 
confidence intervals model selection attaining robustness model selection errors coefficients small relative sample size statistically indistinguishable zero model selection mistakes model selection mistakes lead distorted statistical inference much way pretesting procedures lead distorted inference intuition formally developed nevertheless given practical value dimension reduction increasing prevalence datasets studying robust selection inference techniques inference techniques active area current research offering solutions problem focus number recent papers see example paper proposes procedure called double selection additively separable model proposed procedure generalization approach named gives robust statistical inference slope parameter treatment variable control variables context partially linear model selection method selects elements two steps step selects terms expansion useful predicting step selects terms expansion useful predicting consequence particular construction using two selection steps terms excluded model selection mistakes twice necessarily negligible effect subsequent statistical double selection replaces step restrictive conditions example conditions constrain nonzero coefficients large magnitudes perfect model selection attained citations ordered date first appearance arxiv authors addressed task assessing uncertainties estimation error model parameter estimates wide variety models high dimensional regressors see example use two model selection steps motivated partially intuition two necessary conditions omitted variables bias occur omitted variable exists correlated treatment correlated outcome selection step addresses one additively separable selection selecting variables useful predicting test function sufficiently general class functions paper suggests simple choice based linear span choice called span theoretical simulation results show suggested choice favorable statistical properties uniformly certain sequences data generating processes working generalization selection dissociates first stage selection final estimation useful several reasons one reason direct extension invariant choice dictionary leads natural consideration general addition applying direct generalization selection may lead poorer statistical performance using larger robust simulation study later paper explores properties next theoretical advantage cases larger gives estimates inference valid weaker rate conditions etc finally working dissociating first stage helps terms organizing arguments proofs particular various bounds developed proof depend notion density within linspan paper proves convergence rates asymptotic normality postnonparametric double selection estimates respectively proofs paper proceed using techniques newey analysis series estimators see ideas belloni chernozhukov hansen analysis selection see along careful tracking notion density set within linear span estimation rates obtained paper match next simulation study demonstrates finite sample performance proposed procedure finally empirical example estimating relationship initial gdp gdp growth countries illustrates use double selection series estimation reduced dictionary section establishes notation reviews series estimation describes series estimation reduced dictionary exposition begins basic assumptions observed data assumption data observed data given iid copies random variables indexed outcome variables explanatory variables interest conditioning variables addition integer general measure space two concerns paper 
prove regularity right conditions two described model selection steps used obtain asymptotically normal estimates turn construct correctly sized confidence intervals choices possible analysis paper covers general class choices damian kozbur assumption additive separability random variable functions following additive holds traditional series estimation carried performing least squares regression series expansions define dictionary approximating functions pkk qll series functions linear combinations approximate construct matrices let least squares estimate let components corresponding defined quality statistical estimation feasible provided dimension reduction regularization performed dictionary reduction selects new approximating terms reduction comprised subset series terms paper primary objects interest center around convention always take estimate defined analogously traditional series estimate let let components corresponding defined finally consider set one sensible estimate given order use inference approximate expression varib necessary standard expression variance ance var let approximated using delta method let idn projection matrix onto space orthogonal assumption simply rewrites equation stated introduction terms residual ensure uniqueness normalization required common normalization series context sufficient common assumptions one dimensional functionals simplicity additively separable estimate using following span finally let sandwich form mdiag following sections describe dictionary reduction technique along regularity conditions imply practical value results formally justify approximate gaussian inference immediate corollary gaussian limit significance level standard guassian distribution holds dictionary reduction double selection previous section described estimation using generic dictionary reduction section discusses one class possibilities constructing reductions important note coverage probabilities confidence sets depend critically dictionary reduction performed particular naive methods fail produce correct inference formal results expanding point found instance heuristically reason resulting confidence intervals poor coverage properties due model selection mistakes address problem section proposes procedure selecting new procedure generalization methods work context partially linear model methods described rely heavily model selection therefore brief description lasso provided following description lasso uses overall penalty level well penalty loadings follows motivated allowing heteroskedasticity random variable observations lasso estimate penalty parameter loadings defined solution lasso arg min corresponding selected set defined lasso finally corresponding estimator defined arg min required inverse exist may used damian kozbur lasso chosen model selection possibilities several reasons foremost lasso simple computationally efficient estimation procedure produces sparse estimates ability set coefficients identically equal zero particular generally much smaller suitable penalty level chosen second reason sake continuity previous literature lasso used third reason concreteness indeed many alternative estimation model selection procedures select sparse set terms principle replace lasso possible instead consider general model selection techniques course developing subsequent theory however framing discussion using lasso allows explicit calculation bounds explicit description tuning parameters also helpful terms practical implementation procedures proposed quality lasso 
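The estimation and inference machinery described above (least squares on a reduced dictionary, heteroskedasticity-robust sandwich variance, delta-method/plug-in confidence interval for a functional) can be sketched in a few lines of numpy. The sketch below assumes the reduced regressor matrix R = [p^K(x_i), selected q_j(w_i)] has already been formed and that the functional of interest is linear with gradient vector a_vec; names and shapes are illustrative.

```python
# Sketch of the final step: OLS on the reduced dictionary, sandwich variance,
# and a plug-in Gaussian confidence interval for a linear functional a(g).
import numpy as np

def series_inference(y, R, a_vec, z=1.96):
    """R: n x (K + |I|) reduced regressor matrix; a_vec: gradient of a(g)."""
    n = R.shape[0]
    beta, *_ = np.linalg.lstsq(R, y, rcond=None)
    eps = y - R @ beta                               # residuals
    Qhat = R.T @ R / n
    Omega = (R * eps[:, None] ** 2).T @ R / n        # E_n[eps^2 r r']
    Qinv = np.linalg.inv(Qhat)
    Sigma = Qinv @ Omega @ Qinv                      # sandwich covariance
    theta = a_vec @ beta                             # plug-in estimate of a(g)
    se = np.sqrt(a_vec @ Sigma @ a_vec / n)
    return theta, (theta - z * se, theta + z * se)

# e.g. for a(g) = g(x1, wbar) - g(x0, wbar):
# a_vec = np.concatenate([pK(x1) - pK(x0), np.zeros(n_selected_q)])
```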
estimation controlled number different lasso estimations increases increasingly many different variables penalty parameter must increased ensure quality estimation uniformly different penalty parameter must also increased increasing however higher typically leads shrinkage bias lasso estimation therefore given usually chosen large enough ensure quality performance larger see details sake completeness selection procedure reproduced partially linear model specified algorithm selection partially linear model reproduced first stage model selection step perform lasso regression penalty loadings lfs let ifs set selected terms reduced form model selection step perform lasso regression penalty loadings lrf let irf set selected terms selection estimation set ipd ifs irf let qjl estimate based least squares appendix contains details one possible method choosing well lfs lrf arguments show choices tuning parameters given appendix sufficient guarantee centered gaussian sampling distribution simplest generalization selection expand first stage selection step steps precisely perform lasso regression pkk set ifs selected terms define ifs ifs continue reduced form estimation steps approach disadvantages first selected variables depend particular dictionary ideally first stage model selection approximately invariant choice standard errors used inference previous draft paper took approach deriving theoretical results approach requires stronger sparsity assumptions required additively separable instead consider general class test functions concrete classes test functions provided first stage double selection lasso step performed algorithm double selection first stage model selection step perform lasso regression penalty loadings let selected terms let union set selected terms reduced form model selection step perform lasso regression penalty loadings lrf let irf set selected terms selection estimation set irf estimate using based reduced dictionary qjl following several concrete feasible options first option named span option option suggested practical use main option simulation study well empirical example follow span linspan var theory subsequent section general enough consider options might possibly preferred different contexts three additional examples follows graded pkk multiple pkk pkk simple pkk appendix contains full implementation details span option includes one possible method choosing well lfs lrf yield favourable model selection properties discussion important details given text analysis next section gives conditions attains centered gaussian limiting distribution choosing optimally important problem similar problem dictionary span option span used simulation study well empirical example since performed well initial simulations note definition set span depends population quantity var may unknown researcher note however identities covariates selected procedure described appendix invariant rescaling side variable invariance question option optimal likely application dependent order maintain focused question considered detail paper might interest future work damian kozbur consequence method choosing penalty loadings therefore replacing condition var possible option simple direct extension post double selection given set multiple corresponds using multiple dictionaries indexed notation example multiple could include union orthogonal polynomials trigonometric polynomials first stage selection graded appropriate dictionaries nested respect include order set practical choice penalty levels set proposed 
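A hedged sketch of the double selection step of Algorithm 2 follows. For simplicity it uses the "multiple" flavor (one first-stage lasso per column of P) rather than the full span rule, and it implements the lasso with penalty loadings by rescaling columns so that an off-the-shelf lasso solver can be used; under an objective of the form (1/n)Σ(u_i - q_i'b)² + (λ/n)Σ l_j|b_j|, the rescaling q_j → q_j/l_j maps to sklearn's Lasso with alpha = λ/(2n). The penalty level, loadings and dictionaries here are placeholders, not the paper's tuned choices.

```python
# Sketch: double selection for the additively separable model, using the
# columns of P as test functions and a loaded lasso via column rescaling.
import numpy as np
from sklearn.linear_model import Lasso

def lasso_select(u, Q, lam, loadings):
    """Indices of q_j selected by a lasso of u on Q with penalty loadings."""
    n = Q.shape[0]
    Qs = Q / loadings[None, :]                       # absorb loadings into columns
    fit = Lasso(alpha=lam / (2 * n), fit_intercept=False).fit(Qs, u)
    return np.flatnonzero(fit.coef_ != 0)

def double_selection_additive(y, P, Q, lam, loadings):
    selected = set()
    for j in range(P.shape[1]):                      # first-stage lassos, one per p_j
        selected |= set(lasso_select(P[:, j], Q, lam, loadings))
    selected |= set(lasso_select(y, Q, lam, loadings))   # reduced-form lasso
    keep = sorted(selected)
    R = np.column_stack([P, Q[:, keep]])             # reduced dictionary
    beta, *_ = np.linalg.lstsq(R, y, rcond=None)
    return beta, R, keep          # R and beta can be passed to series_inference
```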
considered span pkk linspan var reason decomposing span way allow use different penalty levels three sets particular penalty single heteroskedastic lasso described penalty adjusts presence different lasso regressions main proposed estimator sets less conservative penalty level would following continuum lasso result corresponding lasso performance bounds hold uniformly rather implied bounds hold uniformly element subsets model selection assumption see assumption indicates bounds sufficient present purpose simulation study conservative higher choice also considered terms inferential quality noticeable difference two choices penalties data generating processes considered simulation study discussed penalty levels accounting set different lassos estimated simultaneously must higher ensure quality estimation leads higher shrinkage bias decomposition therefore addresses concerns quality estimation shrinkage bias allowing smaller penalty levels used subsets span decomposition fixed finite number terms terms estimation strategy presents additional theoretical difficulties another practical difficulty approach computational infeasible estimate lasso regression every indexed continuum therefore approximation must made reference gives suggestions estimating continuum lasso regressions using grid may computationally expensive even moderately large alternative heuristic approach motivated observation qjl selected context estimating identity selected terms dictionaries may contain term case appended addition rescaling possible sets nonempty intersection causes additional problems normalization ensures indexed compact set chosen described account continuum lassos additively separable important coefficients implementation paper strategy approximating adopted lasso regression run using exactly one test function choice made based likely select qjl relative specifically set linear combination pkk highest marginal correlation qjl approximation first stage model selection step proceeds using place also detailed appendix formal theory subsequent sections proceeds working notion density within broader space approximating functions aside added generality working manner helpful since adds structure proofs isolates exactly density interacts final estimation quality formal theory section additional formal conditions given guarantee convergence asymptotic normality double selection undoubtedly many aspects estimation strategy analyzed include important choices tuning parameters following definition helps characterize smoothness properties target function approximating functions let function define sobolev norm inner maximum ranges assumption regularity nonsingular matrix smallest eigenvalue matrix bounded uniformly away zero addition sequence constants satisfying kbk assumption approximation integer real number sequence vectors depend assumptions would identical assumptions conditioning variable present assumptions require dictionary certain regularity approximate rate quantity dictionary specific explicitly calculated certain cases instance gives possible note values derived particular classes functions containing also gives explicit calculation leading cases power series regression splines next assumption quantifies density within linspan order define following let inf sup damian kozbur assumption density satisfies var constant sup var nothing special constant var mainly tool helping describe density addition mentioned set selected lasso described appendix invariant rescaling side variable result imposing 
restrictions var without loss generality density assumption satisfied span used since case bounded uniformly hand density assumption may satisfied higher basic simple pkk option next assumptions concern sparse approximation properties two definitions necessary stating assumption first vector called next let denote linear projection operator precisely square integrable random variable defined minimized functions square integrable write assumption sparsity sequence constant following hold sequence vectors support vectors common support sup sup assuming uniform bound sparse approximation error potentially stronger necessary moment writing manuscript author sees pnno theoretical obstacle terms working weaker assumption addition rate imposed order maintain parallel exposition relative term rates instance also replace done comment holds sparse approximation conditions several references prior econometrics literature work sparse approximation conditional expectation rather linear projection context working conditional expectation places higher burden approximating dictionary particular conditional expectation given approximated using terms conditional expectation may potentially require terms approximate interactions taken account potentially requires dictionary contain prohibitively large amount interaction terms reason conditions paper cast terms linear general grows faster every polynomial author sees theoretical obstructions terms applying arguments lasso bounds without conditional expectation assumption key ingredient additively separable next assumption imposes limitations dependence example case element assumption states residual variation linear regression generally assumption requires population residual variation projecting pkk away uniformly one consequence assumption constants freely added therefore requires user enforce normalization condition like simulation study empirical illustration enforce assumption identifiability assumption matrix eigenvalues bounded uniformly away zero addition kbk next condition restricts sample gram matrix second dictionary standard condition nonparametric estimation dictionary gram matrix eventually eigenvalues bounded away zero uniformly high probability matrix rank deficient however setting assure good performance lasso sufficient control certain moduli continuity empirical gram matrix multiple formalizations moduli continuity useful different settings see explicit examples paper focuses simple condition seems appropriate econometric applications particular assumption small submatrices eigenvalues sufficient results follow sparse setting convenient define following sparse eigenvalues positive matrix max min max paper favorable behavior sparse eigenvalues taken high level condition following imposed assumption sparse eigenvalues sequence sparse eigenvalues obey probability assumption requires sufficiently small submatrices large empirical gram matrix condition seems reasonable sufficient results follow informally states small subset covariates suffer multicollinearity problem could shown hold primitive conditions adapting arguments found build upon results see also argument expression stays suitably small note expression sum mean zero independent random variables present context sparse eigenvalue definition refers number nonzero components vector damian kozbur assumption model selection performance constants bounds log log hold probability standard lasso estimation rates one outcome considered log sum squared prediction errors number selected covariates 
therefore uniform measure loss estimation quality stemming fact lasso estimation performed rather single outcome similarly measures number unique selected first stage lasso estimations choice present assumptions generality model selection techniques also applied however verification high level bounds available additional regularity lasso estimation one reference performance bounds continuum lasso estimation steps paper authors provide formal conditions specifically assumption prove statement assumption holds bounds reference correspond taking important note conditions slightly stringent since authors assume taken approximate conditional expectation given rather linear projection finite grows polynomially possible regularity conditions main theoretical difficulty verifying assumption using primitive conditions showing size set stays suitably small prove certain performance bounds continuum lasso estimates assumption dim fixed state argument would hold certain sequences dim also proves size supports lasso estimates stay bounded uniformly constant multiple depend however prove size union remains similarly bounded therefore results imply existence finite value later bound required analysis proposed estimator finite approximation span like simple difficulty calculating bounds total number distinct selected terms regularity conditions standard literature satisfies implied constants terms bounded uniformly particular finite possible take paper derive bound span would likely lie outside scope project valid alternative verifiable bounds union selected covariates possible report estimates using simple event span span otherwise increasing threshold function additively separable linspan var coincides dense possible assumption weakened following assumption alternative model selection performance suppose linspan var let nonrandom fixed finite subset elements constants bounds log sup log hold probability assumption weaker assumption however assumption easily verified primitive conditions using finite sets statements attained standard conditions provided penalty adjusting different lasso estimations used hand using conservative penalty continuum lasso estimations like span would result currently proof statement statement hold simultaneously conditions standard econometrics literature interesting note requirements satisfy assumption essentially pointwise bounds predictive performance set lasso estimations along uniform bound identity selected covariates contrast prove uniform bounds lasso estimations along pointwise bounds identity selected covariates practice verification condition assumption could potentially useful would allow researcher use penalty level smaller factor would ultimately allow robustness without increasing variability final estimator choice penalty parameters given appendix span option conditions assumption verified regularity conditions like given yield furthermore condition mentioned previous assumption verified option like page used importantly assumption serves plausible model selection condition sufficient proving results follow next assumption describes moment conditions needed applying certain laws large numbers instance quantities qjl assumption moment conditions following moment conditions hold qjl bounded away zero uniformly bounded uniformly qjl bounded away zero uniformly bounded uniformly first statement assumption may also seen stricter identifiability condition condition residual variation rules situations instance qjl note given identifiability assumption direct 
assumption damian kozbur needed corresponding third moment since instead reference bound used final assumption statement theorem rate conditions assumption rate conditions following rate conditions hold log log log log first statement ensures sparse eigenvalues remain high probability sets whose size larger selected covariates second statement used conjunction moment conditions allow use moderate deviation bounds following third fourth conditions assumption sparse approximation error final two assumptions restrict size quantities depending relative assumptions unraveled certain choices dictionaries example noted taken using simple option gives conditions reduced log first result preliminary result gives bounds convergence rates estimator used course proof theorem main inferential result paper proposition direct analogue rates given theorem considers estimation conditional expectation without model selection conditioning set rates obtained proposition match rates state let distribution function random variable addition let theorem assumptions double selection estimate function satisfies following bounds next formal results concern inference recall estimated inference conducted via estimator described earlier sections assumption moments asymptotic normality bounded var bounded away zero note conditions require bounded strengthened condition needed consistent variance estimation order construct bound quantity following assumptions functional imposed regularity assumptions imply attains certain degree smoothness example imply differentiable additively separable assumption differentiability real valued functional either linear following conditions hold linear function linear constants holds function related functional derivative following assumption imposes regularity continuity derivative shorthand let next rate conditions used ensure estimates undersmoothed rate condition ensures estimation bias heuristically captured converges zero faster estimation standard error assumption undersmoothing rate condition next rate condition used order bound quantities appearing proof theorem demonstrated case assumption rate conditions unraveled certain choices assumption rate conditions asymptotic normality log log log final two conditions divide cases considered two classes first class covered assumption functionals fail differentiable therefore estimated parametric rate second class covered assumption attain rate one example functional interest evaluation point case fails estimated parametric rate general circumstances second example weighted average derivative weight function satisfies regularity conditions assumption holds differentiable vanishes outside compact set density bounded away zero wherever positive case change variables provided continuously distributed non vanishing density one possible set sufficient conditions weighted average derivative achieve assumption regularity absence differentiability constant dependent holds assumption conditions finite nonzero pkk pkk every nally matrix var finite nonzero theorem establishes validity standard inference procedure model selection well validity plug variance estimator damian kozbur theorem assumptions double selection estimate function satisfies addition assumptions simulation study results stated previous section suggest double selection series estimation exhibit good inferential properties additively separable conditional expectation models sample size large following simulation study conducted order illustrate implementation study performance 
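The two example functionals discussed above (a contrast between two evaluation points, which is not root-n estimable, and a weighted average derivative, which is) enter the inference step only through their gradient vector a_vec. A small illustrative sketch of how those vectors could be formed for the `series_inference` routine sketched earlier is below; the basis function `pK` and the numerical-derivative step are assumptions for illustration.

```python
# Sketch: gradient vectors a_vec for the two example functionals.
# pK(x) maps a scalar x to its K series terms; n_q is the number of selected q's.
import numpy as np

def a_point_contrast(pK, x1, x0, n_q):
    """a(g) = g(x1, wbar) - g(x0, wbar): difference of basis evaluations."""
    return np.concatenate([pK(x1) - pK(x0), np.zeros(n_q)])

def a_weighted_avg_derivative(pK, x_sample, weights, n_q, h=1e-5):
    """a(g) = weighted average of dg/dx over the sample (numerical derivative)."""
    dpK = np.array([(pK(x + h) - pK(x - h)) / (2 * h) for x in x_sample])
    return np.concatenate([weights @ dpK / weights.sum(), np.zeros(n_q)])
```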
outlined procedure simulation study divided two parts first part compares several alternative estimators double selection second part compares several double selection estimates using different choices part demonstrates finite sample benefits using span option relative direct generalization selection estimation using simple option following process generates data simulation sin sin corr stair stair stair tanh tanh study performs simulations two settings parameter considered finally sparsity level set within data generating process simulation replications performed additively separable data generating process quite complicated designed order create correlations covariates various transformations allows data generating process highlight many different statistical problems arise using double selection alternative estimation techniques one simulation study despite complicated formulas joint distribution realizations appear natural scatter plots one sample showing respective bivariate distributions provided figure figure provides picture graph simulations evaluate estimation defined order avoid complications replication expectation thus true calculated empirical distribution within simulation first part simulation study considers performances five estimator reduced series estimator based initial dictionaries consisting cubic spline expansion linear expansion oracle estimator infeasible sets estimator serves benchmark comparison estimates correct support known span double estimator selects using double selection given span option described paper naive estimator selects one model selection step performing lasso ols estimator uses words estimator reduce dictionary estimation strategy calculated provided targeted undersmoothing estimator implements alternative inferential procedure dense functionals parameters procedure proposed described possibility calculate population expectation assumption researcher knows population distribution causes complication distribution unknown estimated must however taken account likely sensible estimators beyond considered simulation section pointed anonymous reviewer estimators may include propensity score matching continuous variable though approach may work well context exactly usually seen propensity score matching particular assumptions require unconfoundedness conditions addition propensity score techniques commonly applied discrete treatement variables work propensity score matching continuous treatment example see require estimation conditional density treatment setting estimating conditional density given would likely introduce complications beyond scope paper damian kozbur detailed implementation descriptions provided appendix estimators choice made using rule first initial dictionary reduction initial selected oracle initial span double naive estimators initial based lasso implemented appendix ols initial next bic used choose expansion comparison estimators standard selection econometrics literature oracle estimator seen benchmark known provide good estimates true set known naive estimator expected perform poorly since uniformly valid estimator susceptible arising model selection mistakes ols expected perform poorly due potential problems related overfitting estimator procedure called targeted undersmoothing looks correct distortions inference model selection mistakes targeted undersmoothing appends covariates significantly affect value functional initially selected model see appropriate functionals highdimensional models depend growing number parameters 
dense functionals therefore potentially sensible procedure inference estimator detailed appendix simulation results report several quantities measure performance bias estimator results report standard deviation estimates estimates confidence interval length estimates rejection frequencies null level mean number series terms used mean number series terms selected original integrated squared error simulation results reported figure figure figures display mentioned simulation results changing horizontal note also across estimators reported quantities identical example point estimates identical naive point estimates selected identical naive estimates well double selection estimates simulations double selection estimates behave similarly oracle estimates ols estimates wide confidence intervals relative double selection estimation similar coverage properties final estimator targeted undersmoothing conservative terms coverage substantially larger intervals every case hand naive estimator poor coverage properties naive estimator failing control correct covariates increase leads increasing bias highlights fact simply producing undersmoothed estimates increasing may adequate reducing bias making quality statistical inference possible setting since magnitude coefficients joint distribution relevant covariates fixed simulations therefore sufficiently large relevant covariates would identified high probability selection estimators would perform similarly simulation study therefore identifying differences finite sample performance additively separable figure simulation results figure presents simulation results estimation cases according data generating process described text estimates presented five estimators oracle double pnd span naive ols targeted undersmoothing described text first plot shows standard deviation respective estimates second plot shows bias estimates third plot shows confidence interval length estimates fourth plot shows rejection frequencies null level test fifth plot shows mean number series terms used sixth plot shows mean number series terms selected seventh plot shows root mean integrated squared error figures based simulation replications always indexed horizontal axis damian kozbur figure simulation results figure presents simulation results estimation cases according data generating process described text estimates presented four estimators oracle double pnd span naive targeted undersmoothing described text first plot shows standard deviation respective estimates second plot shows bias estimates third plot shows confidence interval length estimates fourth plot shows rejection frequencies null level test fifth plot shows mean number series terms used sixth plot shows mean number series terms selected seventh plot shows root mean integrated squared error plot horizontal axis denotes sample size figures based simulation replications always indexed horizontal axis additively separable second part simulation study compares four double selection estimators use different specifications span double estimator identical span double estimator first part simulation conservative span double estimator uses span option decomposition span penalty applied conservative explicitly aimed achieve lasso performance bounds hold uniformly simple double estimator uses span uses simple alternative spline basis simple double estimator uses different basis selection decomposition applied order obtain orthonormal columns next simple used new orthogonalized data importantly new spans linear space previous 
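The alternative-basis variant just described works with an orthonormalized version of the dictionary. A minimal sketch of the mechanism, with hypothetical dimensions: the QR factor Q spans the same linear space as the original dictionary P, so unpenalized fits are unchanged, but a function that is sparse in P is generally dense in Q, which is the behavior blamed below for the extra first-stage selection mistakes.

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.standard_normal((100, 12))            # stand-in for a spline dictionary

Q, R = np.linalg.qr(P)                        # orthonormal columns, same column space

target = P @ rng.standard_normal(12)
fit_P = P @ np.linalg.lstsq(P, target, rcond=None)[0]
fit_Q = Q @ np.linalg.lstsq(Q, target, rcond=None)[0]
print(np.allclose(fit_P, fit_Q))              # True: identical unpenalized fits

# A signal carried by a single column of P typically needs many columns of Q.
e_last = np.zeros(12); e_last[-1] = 1.0
coef_in_Q = np.linalg.lstsq(Q, P @ e_last, rcond=None)[0]
print(np.sum(np.abs(coef_in_Q) > 1e-8))       # typically 12, not 1
```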
estimators estimates second part simulation presented figures note estimators identical regards hence one curve visible corresponding plots addition conservative span span estimators similar performance terms standard deviation bias interval length rejection frequency integrated squared error two estimators practically indistinguishable except terms number elements select give numerically identical estimates confidence intervals however differences small seen figures noticeable differences performance estimators span option able identify highest number relevant covariates followed conservative span option simple option alternative spline basis simple option span conservative span simple double selection procedures exhibit favorable finite sample properties data generating process particular estimators calculated rejection frequencies move towards increases contrast alternative spline basis simple double selection procedure poor finite sample performance unlikely projection new orthogonalized basis onto good sparse representation causes increased model selection mistakes first stage unlike partially linear model mistakes accumulate cause severe bias since number first stage selection steps growing note alternative spline basis estimator similar performance naive estimator first part simulation study span conservative span options offer opportunity potentially add additional robustness options select variables simple option evidence simulation study using span option conditioning variables extent rejection frequencies become severely distorted variability increases undesirable level damian kozbur figure simulation results figure presents simulation results estimation cases according data generating process described text estimates presented four double selection pnd estimators simple span conservative span alternative spline simple described text first plot shows standard deviation respective estimates second plot shows bias estimates third plot shows confidence interval length estimates fourth plot shows rejection frequencies null level test fifth plot shows mean number series terms used sixth plot shows mean number series terms selected seventh plot shows root mean integrated squared error plot horizontal axis denotes sample size figures based simulation replications always indexed horizontal axis additively separable figure simulation results figure presents simulation results estimation cases according data generating process described text estimates presented four double selection pnd estimators simple span conservative span alternative spline simple described text first plot shows standard deviation respective estimates second plot shows bias estimates third plot shows confidence interval length estimates fourth plot shows rejection frequencies null level test fifth plot shows mean number series terms used sixth plot shows mean number series terms selected seventh plot shows root mean integrated squared error plot horizontal axis denotes sample size figures based simulation replications always indexed horizontal axis damian kozbur figure simulation study figure depicts function used simulation study figure simulation study joint covariate distribution figure depicts joint distribution first covariates described text plots generated one sample size additively separable figure gdp growth results empirical example gdp growth section applies double selection international economic growth example data comes barro lee dataset contains panel countries period example also considered apply 
lasso techniques context highdimensional linear model purpose locating important variables predictive gdp growth rates considers growth gdp per capita dependent variable period growth rate gdp period commonly defined log studying factors influence growth gdp problem central importance economics difficulty studying problem empirically level number observations limited total number countries time number potential factors influence gdp growth large leads naturally need regularize econometric estimation data countries example specifically studies relation initial gdp level subsequent gdp growth presence large number determinants gdp growth interest studying particular question testing fundamental macroeconomic theory convergence convergence predicts countries high initial gdp show lower levels gdp growth conversely countries low initial gdp show higher levels gdp growth many references assumptions imply convergence see references therein analysis considers model covariates allows total complete observations since comparably large relative dimension reduction setting necessary goal select subset covariates briefly compare resulting predictions made growth literature see contain complete definitions discussion variables estimated model given specification damian kozbur log gdpi log gdpi log gdpi denotes sample mean observed covariates enter linearly expansion assumed estimation performed using cubic splines detailed appendix normalized estimates several average derivatives effect initial gdp gdp growth constructed using postnonparametric double selection presented table addition scatter plot primary variables interest well estimate shown figure nonlinear specification allows testing several hypotheses related convergence gdp include hypothesis conditional convergence depend initial gdp related idea poverty trap countries smaller initial gdp exhibit less convergence relationship initial gdp gdp growth may locally flat see reference text additional background details conditional convergence could also imply high end initial gdp distribution gdp growth locally flat existence conditional convergence based initial gdp tested using nonlinear specification order study overall convergence data divided quartiles average derivative estimated within quartile addition overall average derivative estimated support initial gdp observations respective average derivatives compared estimates based double selection presented table estimate overall weighted average derivative std err estimate negative statistically significant result consistent convergence theory addition average derivative calculated various smaller ranges initial gdp empirical distribution initial gdp divided quartiles estimates weighted average derivatives calculated within quartile estimated average derivatives std err std err std err std err test hypothesis average derivative equal average derivative rejects null level stat test hypothesis average derivative equal average derivative fails reject null level stat overall average derivative estimate negative statistically significant estimates also agree thus support previous findings reported relied reasoning covariate selection addition analysis supports claim conditional convergence nonlinear initial gdp flatter countries lower initial gdp calculated alternative null average derivative additively separable table estimation results gdp example estimates average derivative average derivative additional selected variables life expectancy average schooling years female population age infant 
mortality rate female gross enrollment ratio secondary education male gross enrollment ratio secondary education total fertility rate population proportion additional hypothesis tests deriv deriv deriv deriv note double selection estimates basis conclusion paper considers problem selecting conditioning set context nonparametric regression convergence rates inference results provided series estimators primary component interest additively separable models conditioning information finite sample performance several double selection estimators evaluated simulation study overall proposed span option good estimation inferential properties data generating processes considered damian kozbur appendix implementation details lasso implementation details lasso implementation given penalty every case penalty loadings chosen described one small modification procedure suggested requires initial penalty loadings constructed using initial estimates followed iterative regression residuals suggestion use procedure instead taken linear regression residuals regressing outcome marginally correlated qjl highest qjl modification also used penalty level choice single outcome every case single outcome variable considered isolation includes reduced form selection step selection step corresponding lasso implemented penalty described ease reference note suggest given classo tuning parameters every instance paper classo used penalty level choice simple case lasso regressions run simultaneously case given classo used penalty level choice implementation span span option used span decomposed span component corresponding penalty level applied within component first component classo second component classo third component classo following procedure used approximating case component contains continuum test functions lasso regression likely select qjl specifically set linear combination pkk highest marginal correlation qjl approximation first stage model selection step proceeds using place penalty level choice conservative span option used decomposed component corresponding penalty level applied within component first component classo second component classo third component classo order approximate variables selected continuum lasso estimates indexed identical procedure span option used note difference conservative span option span option additively separable implementation details every simulation empirical example constructed using cubic expansion fixed approximating dictionary chosen according following procedure knots points chosen according following rule set tmax tmin let constants set constants serve insert knot points density higher choices determined uniquely condition endpoints satisfy tmin tmax next formulation used given recursive formulation set set outside addition spline order set dictionary completed adding additional terms chosen according following procedure first initial set terms initial selected case initial contains terms irf terms selected lasso regression next initial value chosen minimize bic using initial simulation constrained finally order ensure undersmoothing study set targeted undersmoothing implementation details following procedure used estimate targeted undersmoothing specifically see confidence intervals let corresponding confidence interval using terms components corresponding full confidence interval defined convex hull irf implementation truncated confidence incb terval calculated instead done simulation irf run time reduces order day order month therefore helps facilitate easier 
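As a companion to the implementation details above, here is a small illustrative sketch of a per-covariate cubic spline dictionary with quantile-based knots, so that regions where the data are denser receive more knots, together with a BIC-based choice of the number of terms. The paper's exact knot rule, basis and undersmoothing multiplier are not reproduced; the truncated-power basis and the factor of two below are assumptions made for illustration only.

```python
import numpy as np

def cubic_spline_basis(x, n_knots):
    """Truncated-power cubic spline dictionary for one covariate.
    When expansions for several covariates are stacked, keep only one
    intercept column to avoid collinearity."""
    qs = np.linspace(0, 1, n_knots + 2)[1:-1]
    knots = np.quantile(x, qs)                      # more knots where data are denser
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - t, 0, None) ** 3 for t in knots]
    return np.column_stack(cols), knots

def bic_choose_knots(x, y, candidates=(2, 4, 8, 16)):
    """Pick the number of knots by BIC on a pilot least-squares fit."""
    n = len(y)
    best = None
    for m in candidates:
        P, _ = cubic_spline_basis(x, m)
        beta, *_ = np.linalg.lstsq(P, y, rcond=None)
        rss = np.sum((y - P @ beta) ** 2)
        bic = n * np.log(rss / n) + P.shape[1] * np.log(n)
        if best is None or bic < best[0]:
            best = (bic, m)
    return best[1]

rng = np.random.default_rng(2)
x = rng.standard_normal(300)
y = np.sin(2 * x) + 0.3 * rng.standard_normal(300)
m = bic_choose_knots(x, y)
P, knots = cubic_spline_basis(x, 2 * m)             # inflate the BIC choice to undersmooth
```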
replicability changing code calculate full confidence intervals trivial also highlights computing speed another advantage double procedure relative certain settings terms approximation error full estimator implemented case replications full confidence intervals well truncated confidence intervals made false rejections addition average interval length full intervals average interval length truncated intervals therefore truncated full confidence intervals show similar performance instance damian kozbur appendix proofs preliminary setup additional notation throughout course proof much reference possible made results done order maximize clarity present better picture overall argument many cases appealing directly arguments possible many bounds required deriving asymptotic normality series estimators depend properties less direct appeal bounds original selection argument possible since arguments track notions quantities stemming like however main idea decomposing components span orthogonal remains theme throughout proofs function let denote vector similarly let addition define following quantities let matrix pkk let let let let let partitioned let partitioned let let let let let let function let let assume without loss generality idk identity matrix order reason without loss generality dictionary used estimation used first stage model selection addition assume without loss generality idk throughout exposition common naming convention various regression coefficients quantities form always denotes sample regression coefficients regressing variable components specified implies quantities equivalent since specified components regressed addition equivalent next quantities form without hat accent population quantities defined text additively separable preliminary lemmas lemma assumptions theorem log log krm kmmk log log log log log log log krm kmma log log log krm proof statement lemma sufficient qjl wik two conditions together qjl wki log rate condition log note qjl wik bounded away zero assumption addition inequality damian kozbur implies first condition holds second condition given assumptions statement follows similarly statement statement statement follows directly fact bounded along dim krmk allowing use chebyshev inequality statement facts kwi statement first note following two hold linspan corresponding expansion max krg krg show first two statements note establishes first claim turn second claim note using density assumption vector remainder sufficiently small next looking expansion combining expression gives additively separable applying inequality fact projection hence gives bound max krg krg applied directly kmmk assumption note corresponding rmk satisfy krmk bound log assumption note taking rmk feasible assumption result follows statement kmq first two terms log reasoning statement addition assumption gives log damian kozbur statement kpmk kmk mmk mmk rmk log log statement proven analogously statement statement max max max max log statement proven analogously statement statements proven analogously statements statement rmk krm additively separable density assumption krmk implies lemma mmkf proof statement damian kozbur statement mmkf mmk mmk mmkf statement krm krm krm krm first term last line bounded krm second term therefore statement max additively separable statement max statement pekf statement pekf krm pekf krm krm krm first term last line bounded krm turning second term therefore damian kozbur statements argument identical argument statements adjusting appropriately fact rather 
following corollaries follow directly assumed rate conditions bounds used proof theorems corollary assumptions theorem corollary assumptions theorem proof theorem lemma proof argument theorem gives bound next using decomposition write idn triangle inquality bounds three terms established along last statement holds assumed rate conditions give applying expansion matrix inversion function around idk idk idk idk idk idk sum given probability absolutely convergent relative frobenius norm addition bound idk idk kidk kidk note since minimal eigenvalues bounded assumption invertible probability approaching reference follows later uses fact works event event probability fact used several times however use implicitly reference arguments lemma proof arguments bounds follows previous lemmas assumed rate conditions additively separable lemma proof assumption idempotency mmp lemma eigenvalues bounded probability approaching proof lemma proof note triangle inequality conjuction bounds described previous three lemmas give result final statement theorem follows bound using arguments proof theorem recall let decompose quantity lemma proof follows arguments given proof theorem note statement contain reference random quantities lemma proof bounds given theorem imply identical reasoning given theorem since references uses bound prove analogous result damian kozbur last step show lemma proof note expanded addition gives equation gives decomposition right hand side two terms next bounded separately proceeding note followb ing bounds hold arguments consider first term max max handle term first bound next consider additively separable next consider last remaining term central limit result shown bound equation array holds fact note term satisfies conditions central limit theorem arguments given previous three lemmas prove next set arguments bound statement assumption define event define addition define infeasible sample analogue lemma proof statement case linear therefore consider case linear using arguments identical probability statement follows arguments statement follows arguments statement immediate implication statement lemma proof first note max max max damian kozbur first term bound maxi assumption next max max max next max putting together follows assumed rate conditions max next let lemma states addition let lemma proof additively separable terms right hand side bounded consider first term expanding gives note arguments five terms bounded order appearence max max max max max max second term bounded max max max max max last bounds come rate condition assumption results give conclusion calculations give rates convergence cases assumption assumption well proof second statement theorem use arguments concludes proof damian kozbur references aghion howitt economics growth mit press donald andrews whang additive interactive regression models circumvention curse dimensionality econometric theory donald andrews asymptotic normality series estimators nonparametric semiparametric regression models econometrica bai forecasting economic time series using targeted predictors journal econometrics bai boosting diffusion indices journal applied econometrics barro lee data set panel countries nber http robert barro lee losers winners economic growth working paper national bureau economic research april belloni chen chernozhukov hansen sparse models methods optimal instruments application eminent domain econometrica arxiv belloni chernozhukov least squares model selection sparse models bernoulli arxiv belloni chernozhukov 
hansen program evaluation causal inference data econometrica belloni chernozhukov hansen lasso methods gaussian instrumental variables models arxiv http belloni chernozhukov hansen inference sparse econometric models advances economics econometrics world congress econometric society august alexandre belloni victor chernozhukov high dimensional sparse econometric models introduction pages springer berlin heidelberg berlin heidelberg alexandre belloni victor chernozhukov denis chetverikov kengo kato new asymptotic theory least squares series pointwise uniform results journal econometrics high dimensional problems econometrics alexandre belloni victor chernozhukov christian hansen inference treatment effects selection amongst controls application abortion crime review economic studies alexandre belloni victor chernozhukov christian hansen damian kozbur inference panel models application gun control journal business economic statistics bickel ritov tsybakov simultaneous analysis lasso dantzig selector annals statistics van geer statistics data methods theory applications springer andreas buja trevor hastie robert tibshirani linear smoothers additive models ann bunea tsybakov wegkamp sparsity oracle inequalities lasso electronic journal statistics bunea tsybakov wegkamp aggregation sparsity via penalized least squares proceedings annual conference learning theory colt lugosi simon eds pages bunea tsybakov wegkamp aggregation gaussian regression annals statistics tao dantzig selector statistical estimation much larger ann chen economic growth robert barro xavier journal economic dynamics control may chen linton nonparametric estimation additive separable regression models wolfgang michael schimek editors statistical theory computational aspects smoothing pages heidelberg additively separable norbert christopeit stefan hoderlein local partitioned regression econometrica dennis cox approximation least squares regression nested subspaces ann brian eastwood ronald gallant adaptive rules seminonparametric estimators achieve asymptotic normality econometric theory ildiko frank jerome friedman statistical view chemometrics regression tools technometrics hansen kozbur misra targeted undersmoothing arxiv june trevor hastie robert tibshirani generalized additive models rejoinder statist trevor hastie robert tibshirani jerome friedman elements statistical learning data mining inference prediction springer new york jian huang joel horowitz fengrong wei variable selection nonparametric additive models ann jian huang joel horowitz fengrong wei variable selection nonparametric additive models ann guido imbens keisuke hirano propensity score continuous treatments adel javanmard andrea montanari confidence intervals hypothesis testing highdimensional regression journal machine learning research jing shao qiying wang large deviations independent random variables ann keith knight shrinkage estimation nearly singular designs econometric theory koltchinskii sparsity penalized empirical risk minimization ann inst poincar probab hannes leeb benedikt one estimate unconditional distribution estimators econometric theory jeffrey scott racine nonparametric econometrics theory practice princeton university press princeton lounici convergence rate sign concentration property lasso dantzig estimators electron lounici pontil tsybakov van geer taking advantage sparsity learning meinshausen recovery sparse representations data annals statistics whitney newey convergence rates asymptotic normality series estimators journal 
econometrics benedikt confidence sets based sparse estimators necessarily large ser mathieu rosenbaum alexandre tsybakov sparse recovery matrix uncertainty annals statistics rudelson zhou reconstruction anisotropic random measurements ieee transactions information theory june mark rudelson roman vershynin sparse reconstruction fourier gaussian measurements communications pure applied mathematics eric stefan sperlich estimation derivatives additive separable models statistics charles stone additive regression nonparametric models annals statistics tibshirani regression shrinkage selection via lasso roy statist soc ser van geer generalized linear models lasso annals statistics sara van geer peter bhlmann yaacov ritov ruben dezeure asymptotically optimal confidence regions tests models ann damian kozbur wainwright sharp thresholds noisy recovery sparsity using quadratic programming lasso ieee transactions information theory may lijian yang stefan sperlich wolfgang hrdle derivative estimation testing generalized additive models journal statistical planning inference zhang huang sparsity bias lasso selection linear regression ann zhang stephanie zhang confidence intervals low dimensional parameters high dimensional linear models journal royal statistical society series statistical methodology zhou restricted eigenvalue conditions subgaussian matrices
asymptotics high dimensional regression fixed design results lihua peter noureddine dec department statistics university california berkeley december abstract investigate asymptotic distributions coordinates regression moderate regime number covariates grows proportionally sample size appropriate regularity conditions establish asymptotic normality regression assuming matrix proof based inequality chatterjee analysis karoui relevant examples indicated show regularity conditions satisfied broad class design matrices also show counterexample namely design emphasize technical assumptions artifacts proof finally numerical experiments confirm complement theoretical results introduction statistics long history huber wachter considerable renewed interest last two decades many applications researcher collects data represented matrix called design matrix denoted well response vector aims study connection linear model among popular models starting point data analysis various fields linear model assumes coefficient vector measures marginal contribution predictor random vector captures unobserved errors aim article provide valid inferential results features example researcher might interested testing whether given predictor negligible effect response equivalently whether similarly linear contrasts might interest case group comparison problem first two predictors represent feature collected two different groups defined arg min xti contact support grant frg gratefully acknowledged grant frg gratefully acknowledged support grant nsf gratefully acknowledged ams msc primary secondary keywords robust regression statistics second order inequality analysis support denotes loss function among popular estimators used practice relles huber particular famous least square estimator lse intend explore distribution based achieve inferential goals mentioned approach asymptotic analysis assumes scale problem grows infinity use limiting result approximation regression problems scale parameter problem sample size number predictors classical approach fix let grow infinity shown relles yohai huber consistent terms norm asymptotically normal regime asymptotic variance approximated bootstrap bickel freedman later studies extended regime grow infinity converges yohai maronna portnoy mammen consistency terms norm asymptotic normality validity bootstrap still hold regime based results construct var confidence interval simply var calculated bootstrap similarly calculate hypothesis testing procedure ask whether inferential results developed assumptions software built top relied moderate highdimensional analysis concretely study software built upon assumption relied results random matrix theory pastur already offer answer negative side many questions multivariate statistics case regression subtle instance standard degrees freedom adjustments effectively take care many problems nice property extend general regression questions raised becomes natural analyze behavior performance statistical methods regime fixed indeed help keep track inherent statistical difficulty problem assessing variability estimates words assume current paper let grows infinity due identifiability issues impossible make inference without structural distributional assumptions discuss point details section thus consider regime call moderate regime regime also natural regime random matrix theory pastur wachter johnstone bai silverstein shown asymptotic results derived regime sometimes provide extremely accurate approximations finite sample distributions 
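For concreteness, the classical low-dimensional recipe recalled above, namely fit the M-estimator and plug an estimate of its asymptotic variance into a Wald interval, can be sketched as follows. The Huber loss, its tuning constant and the symmetric-error variance formula are standard textbook choices used for illustration rather than anything specific to this paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def huber_loss(r, c=1.345):
    a = np.abs(r)
    return np.where(a <= c, 0.5 * r**2, c * a - 0.5 * c**2)

def huber_psi(r, c=1.345):
    return np.clip(r, -c, c)

def m_estimate(X, y, c=1.345):
    """Minimize sum_i rho(y_i - x_i' b) with the Huber loss."""
    b0, *_ = np.linalg.lstsq(X, y, rcond=None)       # least-squares warm start
    obj = lambda b: huber_loss(y - X @ b, c).sum()
    grad = lambda b: -X.T @ huber_psi(y - X @ b, c)
    return minimize(obj, b0, jac=grad, method="L-BFGS-B").x

def classical_ci(X, y, j, level=0.95, c=1.345):
    """Textbook p-fixed, n-large Wald interval for beta_j (symmetric errors)."""
    b = m_estimate(X, y, c)
    r = y - X @ b
    num = np.mean(huber_psi(r, c) ** 2)              # estimate of E[psi(eps)^2]
    den = np.mean(np.abs(r) <= c) ** 2               # estimate of (E[psi'(eps)])^2
    V = num / den * np.linalg.inv(X.T @ X)
    half = norm.ppf(0.5 + level / 2) * np.sqrt(V[j, j])
    return b[j] - half, b[j] + half
```

It is exactly this kind of interval whose validity becomes questionable once p is allowed to grow proportionally to n.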
estimators least certain cases johnstone small qualitatively different behavior moderate regime first longer consistent terms norm risk tends quantity determined loss function error distribution complicated system equations karoui karoui bean prohibits use standard techniques assess behavior estimator also leads qualitatively different behaviors residuals moderate dimensions contrast case relied give accurate information distribution errors however seemingly negative result exclude possibility inference since still consistent terms norms particular norm thus least hope perform inference coordinate second classical optimality results hold regime regime maximum likelihood estimator shown optimal huber bickel doksum words error distribution known associated loss log asymptotically efficient provided design appropriate type density entries however moderate regime shown optimal loss longer function complicated explicit form bean least certain designs suboptimality maximum likelihood estimators suggests classical techniques fail provide valid intuition moderate regime third joint asymptotic normality random vector may violated fixed design matrix proved huber pioneering work general negative result simple consequence results karoui exhibit anova design see even marginal fluctuations gaussian contrast random design show jointly asymptotically normal design matrix elliptical general covariance using stochastic representation well elementary properties vectors uniformly distributed uniform sphere see section karoui supplementary material bean details contradict huber negative result takes randomness account huber result takes randomness account later karoui shows coordinate asymptotically normal broader class random designs also elementary consequence analysis karoui however best knowledge beyond anova situation mentioned distributional results fixed design matrices topic article last least bootstrap inference fails regime shown bickel freedman residual bootstrap influential work recently karoui purdom studied results general showed commonly used bootstrapping schemes including residual bootstrap jackknife fail provide consistent variance estimator hence valid inferential statements latter results even apply marginal distributions coordinates moreover simple design independent modification achieve consistency karoui purdom contributions summary behavior estimators consider paper completely different moderate regime counterpart regime discussed next section moving one step moderate regime interesting practical theoretical perspectives main contribution article establish asymptotic normality certain fixed design matrices regime technical assumptions following theorem informally states main result theorem informal version theorem section appropriate conditions design matrix distribution loss function max dtv var dtv total variation distance denotes law worth mentioning result extended finite dimensional linear contrasts instance one might interested making inference problems involving group comparison result extended give asymptotic normality besides main result several contributions first use new approach establish asymptotic normality main technique based secondorder inequality sopi developed chatterjee derive among many results fluctuation behavior linear spectral statistics random matrices contrast classical approaches central limit theorem inequality capable dealing nonlinear potentially implicit functions independent random variables moreover use different expansions residuals based double ideas 
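For reference, the informal claim above can be written schematically as follows; this is a reconstruction of the displayed statement, and the centering, normalization and exact conditions are those of the formal theorem given later in the paper.

```latex
\[
\max_{1 \le j \le p}\;
d_{\mathrm{TV}}\!\Big(
  \mathcal{L}\big(\hat{\beta}_j - \beta^{*}_j\big),\;
  \mathcal{N}\big(0,\ \operatorname{Var}(\hat{\beta}_j)\big)
\Big) \;\longrightarrow\; 0
\qquad \text{as } n \to \infty,\ p/n \to \kappa \in (0,1).
\]
```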
introduced karoui contrast classical expansions see aforementioned paper informal interpretation results chatterjee hessian nonlinear function random variables consideration sufficiently small function acts almost linearly hence standard central limit theorem holds second best knowledge first inferential result fixed non design moderate regime fixed designs arise naturally experimental design conditional inference perspective inference ideally carried without assuming randomness predictors see section details clarify regularity conditions asymptotic normality explicitly checkable lse also checkable general error distribution known also prove conditions satisfied broad class designs design described section exhibits situation distribution going asymptotically normal results theorem somewhat surprising complete inference need asymptotic normality asymptotic bias variance suitable symmetry conditions loss function error distribution shown unbiased see section details thus left derive asymptotic variance discussed end section classical approaches bootstrap fail regime classical results continue hold discuss section sake completeness however result briefly touch upon variance estimation section derivation general situations beyond scope paper left future research outline paper rest paper organized follows section clarify details mentioned current section section state main result theorem formally explain technical assumptions show several examples random designs satisfy assumptions high probability section introduce main technical tool inequality chatterjee apply first step prove theorem since rest proof theorem complicated lengthy illustrate main ideas appendix rigorous proof left appendix section provide reminders theory estimation sake completeness taking advantage explicit form section display numerical results proof results stated appendix numerical experiments presented appendix details background moderate regime informative type asymptotics section mentioned ratio measures difficulty statistical inference moderate regime provides approximation finite sample properties difficulties fixed level original problem intuitively regime capture variation finite sample problems provide accurate approximation illustrate via simulation consider study involving participants variables either use asymptotics fixed grows infinity fixed grows infinity perform approximate inference current software rely lowdimensional asymptotics inferential tasks evidence yield accurate inferential statements ones would obtained using moderate dimensional asymptotics fact numerical evidence johnstone karoui bean show reverse true exhibit numerical simulation showing consider case entries one realization matrix generated gaussian mean variance entries different error distributions use statistics quantify distance finite sample distribution two types asymptotic approximation distribution specifically use huber loss function default parameter huber specifically generate three design matrices small sample case sample size dimension asymptotics fixed sample size dimension asymptotics fixed sample size dimension generated one realization standard gaussian design treated fixed across repetitions design matrix vectors appropriate length generated entries entry either standard normal distribution standard cauchy distribution use response equivalently assume obtain repeating procedure times results replications three cases extract first coordinate estimator denoted kolmogorovsmirnov statistics obtained max max empirical 
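A compact, self-contained version of this Kolmogorov-Smirnov comparison is sketched below. The sample sizes, number of replications and the rescaling of the first coordinate are illustrative choices, since the exact values used in the paper are not recoverable from the text.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import ks_2samp

def huber_fit(X, y, c=1.345):
    psi = lambda r: np.clip(r, -c, c)
    rho = lambda r: np.where(np.abs(r) <= c, 0.5 * r**2, c * np.abs(r) - 0.5 * c**2)
    b0, *_ = np.linalg.lstsq(X, y, rcond=None)
    return minimize(lambda b: rho(y - X @ b).sum(), b0,
                    jac=lambda b: -X.T @ psi(y - X @ b), method="L-BFGS-B").x

def first_coord_draws(n, p, err, B=200, seed=0):
    """Draws of sqrt(n) * beta_hat_1 under beta* = 0 for one fixed Gaussian design."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, p))                  # held fixed across replications
    return np.array([np.sqrt(n) * huber_fit(X, err(rng, n))[0] for _ in range(B)])

normal = lambda rng, n: rng.standard_normal(n)
cauchy = lambda rng, n: rng.standard_cauchy(n)

for err in (normal, cauchy):
    small  = first_coord_draws(n=50,  p=25,  err=err)   # the small-sample law to be matched
    lowdim = first_coord_draws(n=500, p=25,  err=err)   # p fixed, n large
    moddim = first_coord_draws(n=500, p=250, err=err)   # p/n fixed at 1/2
    # smaller statistic = better approximation of the small-sample distribution
    print(ks_2samp(small, lowdim).statistic, ks_2samp(small, moddim).statistic)
```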
distribution compare accuracy two asymptotic regimes comparing smaller value ksi better approximation figure displays results error distributions see gaussian errors even errors approximation uniformly accurate widely used approximation cauchy errors approximation performs better moderatedimensional one small worsens ratio large especially close moreover grows two approximations qualitatively different behaviors approximation becomes less less accurate approximation suffer much deterioration grows qualitative quantitative differences two approximations reveal practical importance exploring asymptotic regime see also johnstone random fixed design discussed section assuming fixed design random design could lead qualitatively different inferential results random design setting considered generated super population example rows regarded sample distribution known partially known researcher situations one uses techniques stone pairs bootstrap regression efron statistics distance small sample large sample distribution normal cauchy kappa asym regime fixed fixed figure axpproximation accuracy asymptotics asymptotics column represents error distribution represents ratio dimension sample size represents statistic red solid line corresponds approximation blue dashed line corresponds approximation efron sample splitting wasserman roeder researcher effectively assumes exchangeability data xti naturally compatible assumption random design given extremely widespread use techniques contemporary machine learning statistics one could argue random design setting one modern statistics carried especially prediction problems furthermore working random design assumption forces researcher take account two sources randomness opposed one fixed design case hence working random design assumption yield conservative confidence intervals words settings researcher collects data without control values predictors random design assumption arguably natural one two however understood almost decade common random design assumptions mean variance moments well behaved suffer considerable geometric limitations substantial impacts performance estimators considered paper karoui confidence statements derived kind analysis relied performing graphical tests data see karoui geometric limitations simple consequences concentration measure phenomenon ledoux hand fixed design setting considered fixed matrix case inference takes randomness consideration perspective popular several situations first one experimental design goal study effect set factors controlled experimenter response contrast observational study experimenter design experimental condition ahead time based inference target instance oneway anova design encodes covariates binary variables see section details fixed prior experiment examples include anova designs factorial designs designs etc scheffe another situation concerned fixed design survey sampling inference carried conditioning data cochran generally order avoid unrealistic assumptions making inference conditioning design matrix necessary suppose linear model true identifiable see section details information contained conditional distribution hence information marginal distribution redundant conditional inference framework robust data generating procedure due irrelevance also results based fixed design assumptions may preferable theoretical point view sense could potentially used establish corresponding results certain classes random designs specifically given marginal distribution one prove satisfies assumptions fixed 
design high probability conclusion fixed random design assumptions play complementary roles settings focus least understood two fixed design case paper modeling identification parameters problem identifiability especially important fixed design case define population arg min xti one may ask whether regardless fixed design case provide affirmative answer following proposition assuming symmetric distribution around even proposition suppose full column rank assume even convex function regardless choice proof left appendix worth mentioning proposition requires marginals symmetric impose constraint dependence structure strongly convex consequence condition satisfied provided positive probability asymmetric may still able identify random variables contrast last case incorporate intercept term shift towards centroid precisely define arg min xti proposition suppose full column rank function unique minimizer uniquely defined proof left appendix example let minimizer median unique positive density worth pointing incorporating intercept term essential identifying instance case longer equals proposition entails intercept term guarantees although intercept term depends choice unless conditions imposed neither symmetric identified previous criteria depends nonetheless modeling perspective popular reasonable assume symmetric many situations therefore proposition proposition justify use cases derived different loss functions compared estimating parameter main results notation assumptions let xti denote row denote column throughout paper denote xij entry design matrix removing column xti vector xti removing entry associated loss function defined arg min xtk arg min xtk define first derivative write simply confusion arise original design matrix contain intercept term simply replace augment vector although special case discuss question intercept section due important role practice equivariance reduction null case invariant choice provided notice target quantity var identifiable discussed section assume without loss generality case assume particular design matrix full column rank xtk arg min similarly define version arg min xtk based notations define full residuals xtk residual xtk three diagonal matrices defined diag diag diag say random variable addition use represent indices parameters interest intuitively entries would require stringent conditions asymptotic normality finally adopt landau notation addition say similarly say simplify logarithm factors use symbol polylog denote factor upper bounded log similarly use polylog denote factor lower bounded log technical assumptions main result stating assumptions need define several quantities interest let largest resp smallest eigenvalue matrix canonical basis vector let finally let max max max cov based quantities defined state technical assumptions design matrix followed main result detailed explanation assumptions follows exists positive numbers polylog polylog smooth functions polylog moreover assume mini var polylog polylog xjt polylog polylog polylog theorem assumptions max dtv var dtv supa total variation distance provide several examples assumptions hold section also provide example asymptotic normality hold section shows assumptions artifacts proof technique developed probably many situations asymptotic normality hold even discussion assumptions discuss assumptions assumption implies boundedness derivatives upper bounds satisfied loss functions including loss smoothed loss smoothed huber loss etc lower bound implies strong convexity required technical 
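Since the assumptions are meant to be explicitly checkable for a given fixed design, the following sketch computes a few easily available diagnostics in their spirit: extreme eigenvalues of the Gram matrix, leverage, and how much a single observation can dominate the contrast defining each coordinate. The exact normalizations used in the formal assumptions may differ; the comparison with a one-way ANOVA design anticipates the counterexample discussed below.

```python
import numpy as np

def design_diagnostics(X):
    """Illustrative, checkable summaries of a fixed design matrix."""
    n, p = X.shape
    evals = np.linalg.eigvalsh(X.T @ X / n)
    XtX_inv = np.linalg.inv(X.T @ X)
    lev = np.einsum('ij,ji->i', X @ XtX_inv, X.T)          # hat-matrix diagonal
    V = X @ XtX_inv                                         # j-th column: v_j = X (X'X)^{-1} e_j
    dominance = np.max(np.abs(V), axis=0) / np.linalg.norm(V, axis=0)
    return {"lambda_min": evals[0], "lambda_max": evals[-1],
            "max_leverage": lev.max(), "min_leverage": lev.min(),
            "max_dominance": dominance.max()}

rng = np.random.default_rng(0)
n, p = 400, 200
print(design_diagnostics(rng.standard_normal((n, p))))      # well behaved

anova = np.zeros((n, p))
anova[np.arange(n), np.arange(n) % p] = 1.0                 # n/p observations per column
print(design_diagnostics(anova))                            # dominance = 1/sqrt(n/p), not small
```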
reasons removed considering first taking appropriate limits karoui addition paper consider smooth loss functions results extended case via approximation assumption proposed chatterjee deriving inequality discussed section means results apply nongaussian distributions uniform distribution taking cumulative distribution function standard normal distribution gaussian concentration ledoux see implies thus controls tail behavior boundedness required direct application chatterjee results fact look proof suggests one obtain similar result inequality involving moment bounds would way weaken assumptions permit distributions expected robustness studies since considering strongly convex completely unnatural restrict attention errors furthermore efficiency robustness questions one main reasons consider estimators context potential gains efficiency obtained considering regression bean apply context justify interest theoretical setup assumption completely checkable since depends controls singularity design matrix shown objective function strongly convex smallest eigenvalue hessian matrix everywhere lower bounded polylog assumption controlling left tail quadratic forms fundamentally connected aspects concentration measure phenomenon ledoux condition proposed emphasized random design setting karoui essentially means matrix depend quadratic form xjt order assumption proposed karoui random design settings motivated analysis note maximum linear contrasts whose coefficients depend easily checked design matrix realization random matrix entries instance remark certain applications reasonable make following additional assumption even function symmetric distributions although assumption necessary theorem simplify result assumption full rank denotes equality distribution arg min xti arg min xti arg min xti implies unbiased estimator provided mean case unbiasedness useful practice since theorem reads max dtv var inference need estimate asymptotic variance important remark concerning theorem subset coefficients jnc become nuisance parameters heuristically order identifying one needs subspaces span xjn span xjnc distinguished xjn full column rank xjn denotes columns formally let xjnc xjtnc xjnc xjtnc xjn denotes generalized inverse characterizes behavior xjn removing effect xjnc particular modify assumption polylog polylog able derive stronger result case theorem follows corollary assumptions max dtv var shown hence assumption weaker worth pointing assumption even holds xjcn full column rank case still identifiable still although see appendix details examples throughout subsection except subsubsection consider case realization random matrix denoted distinguished verify assumptions satisfied high probability different regularity conditions distribution standard way justify conditions fixed design portnoy literature regression mestimates random design independent entries first consider random matrix entries proposition suppose entries var zij polylog polylog realization assumptions satisfied high probability practice assumption identical distribution might invalid fact assumptions first part polylog still satisfied high probability assume independence entries boundedness certain moments control rely litvak assumes symmetry entry obtain following result based proposition suppose independent entries zij var zij polylog polylog realization assumptions satisfied high probability conditions proposition add intercept term design matrix adding intercept allows remove assumption zij fact suppose zij symmetric respect 
potentially according section replace zij zij proposition applied proposition suppose independent entries var arbitrary polylog polylog realization assumptions satisfied high probability dependent gaussian design show assumptions handle variety situations assume observations namely rows random vectors covariance matrix particular show gaussian design satisfies assumptions high probability proposition suppose polylog polylog realization assumptions satisfied high probability result extends design muirhead chapter zij one realization random variable multivariate gaussian distribution vec znt kronecker product turns assumptions satisfied proposition suppose vec polylog polylog realization assumptions satisfied high probability order incorporate intercept term need slightly stringent condition instead assumption prove assumption see subsubsection holds high probability proposition suppose contains intercept term satisfies conditions proposition assume maxi mini polylog realization assumptions satisfied high probability condition satisfied another example exchangeable case equal case eigenvector hence also eigenvector thus multiple condition satisfied elliptical design furthermore move structure generalized elliptical models zij independent random variables zij instance mean variance elliptical family quite flexible modeling data represents type data formed common driven factor independent individual effects widely used multivariate statistics anderson tyler various fields including finance cizek biology posekany context statistics class model used refute universality claims random matrix theory karoui robust regression karoui used elliptical models show limit depends distribution hence geometry predictors studies limited design shown limited statistical interest see also deep classical inadmissibility results baranchik klebanov however show next proposition common factors distort shape asymptotic distribution similar phenomenon happens random design case see karoui bean proposition suppose generated elliptical model zij zij independent random variables taking values zij independent random variables satisfying conditions proposition proposition assume zij independent realization assumptions satisfied high probability thanks fact bounded away proof proposition straightforward shown appendix however refined argument assuming identical distributions relax condition proposition conditions proposition except boundedness assume samples generated distribution independent fixed quantile function continuous realization assumptions satisfied high probability counterexample consider anova situation words let design matrix exactly entry per row whose value let integers let furthermore let constrain taking instance mod easy way produce matrix associated statistical model easy see arg min arg min course standard location problem setting consider remains finite function finitely many random variables general normally distributed concreteness one take case median cdf known exactly elementary order statistics computations see david nagaraja gaussian random variable general fact anova design considered violates assumption since minj show assumption also violated least case see section details comments discussions asymptotic normality high dimensions regime asymptotic distribution easily defined limit terms weak topology van der vaart however regimes dimension grows notion asymptotic distribution delicate conceptual question arises fact dimension estimator changes thus distribution serve limit denotes law one 
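For completeness, here is a short sketch generating the two designs just discussed: a generalized elliptical design, in which a common scale factor per observation multiplies an independent vector, and the one-way ANOVA design used as the counterexample. The scale-factor distribution below is one illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def elliptical_design(n, p, rng, df=2.0):
    """Generalized elliptical rows: x_i = lambda_i * z_i with lambda_i independent
    of z_i; the chi-square-based factors below give multivariate-t-like rows."""
    Z = rng.standard_normal((n, p))
    lam = np.sqrt(df / rng.chisquare(df, size=n))
    return lam[:, None] * Z

def one_way_anova_design(n, p):
    """Exactly one nonzero entry per row: each coordinate of the M-estimate is a
    location estimate based on only n/p observations, so it need not be
    asymptotically Gaussian."""
    X = np.zeros((n, p))
    X[np.arange(n), np.arange(n) % p] = 1.0
    return X

X_ell = elliptical_design(400, 200, rng)
X_anova = one_way_anova_design(400, 200)
```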
remedy proposed mallows framework triangular array ewn ewn called jointly asymptotically normal deterministic sequence kan zero mean unit variance satisfied easy modify definition normalizing random variables definition joint asymptotic normality rpn jointly asymptotically normal sequence rpn atn ewn atn cov definition asymptotic normality strong appealing shown hold moderate regime huber fact huber shows jointly asymtotically normal max provided full rank max words moderate regime asymptotic normality hold linear contrasts even case applications however usually necessary consider linear contrasts instead small subset coordinates low dimensional linear contrasts naturally modify definition adapt needs imposing constraints popular concept use section informally called asymptotic normality defined restricting canonical basis vectors one element equivalent definition stated follows definition asymptotic normal rpn asymptotically normal sequence ewn var convenient way define asymptotic normality introduce metric kolmogorov distance total variation distance induces weak convergence topology asymptotically normal ewn max var discussion inference technical assumptions variance bias estimation complete inference need compute bias variance discussed remark unbiased loss function error distribution symmetric variance easy get conservative estimate via resampling methods jackknife consequence inequality see karoui karoui purdom details moreover variance decomposition formula var var var var unconditional variance random design matrix conservative estimate unconditional variance calculated solving system see karoui donoho montanari however estimating exact variance known hard karoui purdom show existing resampling schemes including jacknife residual bootstrap either conservative large challenge mentioned karoui karoui purdom due fact residuals mimic behavior resampling methods effectively modifies geometry dataset point view statistics interest believe variance estimation moderate regime rely different methodologies ones used estimation technical assumptions hand assume strongly convex one remedy would adding ridge regularized term karoui new problem amenable analysis method used article however regularization term introduces bias hard derived variance unregularized mestimators strong convexity also assumed works karoui donoho montanari however believe assumption unnecessary removed least design matrices another possibility errors moments add small quadratic term loss function small finally recall many situations actually efficient see numerical work bean moderate dimensions instance case errors greater working strongly convex loss functions problematic regression would setting explore traditional robustness questions need weaken requirements assumption requires substantial work extension main results chatterjee technical part paper already long leave interesting statistical question future works proof sketch since proof theorem somewhat technical illustrate main idea section first notice implicit function independent random variables determined hessian matrix loss function notation introduced section assumption implies loss function strongly convex case unique seen function powerful central limit theorem type statistics inequality sopi developed chatterjee used central limit theorems linear spectral statistics large random matrices recall one main results convenience reader proposition sopi chatterjee let take let denote partial derivative gradient hessian let finite fourth moment dtv var var hard 
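The bound being invoked has the following shape, stated schematically with an unspecified absolute constant and with the regularity conditions on the coordinates of Z left implicit; see Chatterjee's paper for the precise statement and constants.

```latex
% Second-order Poincare inequality (schematic): W = f(Z) for a C^2 function f
% of a random vector Z with independent, suitably regular coordinates.
\[
d_{\mathrm{TV}}\!\Big(\mathcal{L}(W),\ \mathcal{N}\big(\mathbb{E}W,\ \operatorname{Var}W\big)\Big)
\;\le\;
\frac{C}{\operatorname{Var}W}\,
\Big(\mathbb{E}\,\lVert\nabla f(Z)\rVert_2^{4}\Big)^{1/4}
\Big(\mathbb{E}\,\lVert\nabla^{2} f(Z)\rVert_{\mathrm{op}}^{4}\Big)^{1/4}.
\]
```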
compute gradient hessian respect recalling definitions equation lemma suppose etj diag etj cononical basis vectors recalling definitions assumption bound follows lemma let defined proposition setting let eketj consequence inequality bound total variation distance normal distribution var precisely prove following lemma lemma assumptions max dtv var maxj minj var polylog lemma key prove theorem obtain asymptotic normality left establish upper bound lower bound var fact prove lemma assumptions polylog min var max polylog lemma lemma together imply polylog max dtv var appendix provides roadmap proof lemma special case design matrix one realization random matrix entries also serves outline rigorous proof appendix comment inequality notice linear function inequality esseen implies var sup hand inequality implies dtv var var slightly worse bound requires stronger conditions distributions variates provides bounds metric instead kolmogorov metric comparison shows inequality regarded generalization bound transformations independent random variables estimator estimator special case estimator written explicitly analysis properties extremely simple understood several decades see arguments huber lemma huber proposition case hat matrix captures problems associated dimensionality problem particular proving asymptotic normality simply requires application theorem however somewhat helpful compare conditions required asymptotic normality simple case ones required general setup theorem briefly section asymptotic normality lse linear model full rank thus coordinate linear contrast zero mean instead assumption requires need assume maxi bound data esseen implies kej kej kej ketj var motivates define matrix specific quantity ketj ketj bound implies determines asymptotic normality theorem maxi max max var absolute constant kolmogorov distance defined sup turns plays setting role assumption since known condition like necessary asymptotic normality estimators huber proposition shows particular assumption variant also needed general case see appendix details discussion naturally checking conditions asymptotic normality much easier leastsquares case general case consideration paper particular asymptotic normality conditions checked broader class random design matrices see appendix details orthogonal design matrices cid kxjj hence condition true entry dominates row counterexample gave section still provides counterexample reason different namely sum finitely many independent random variables evidently general fact case bounded away inferential questions also extremely simple context essentially dimensionindependent reasons highlighted theorem naturally reads estimating still simple minimal conditions provided see bickel freedman theorem standard computations concerning normalized residual using variance computations latter may require moments replace xtk construct confidence intervals based tend normalized residual sum squares evidently consistent even case gaussian errors requirement may dispensed numerical results seen previous sections related papers five important factors affect distribution design matrix error distribution sample size ratio loss function aim section assess quality agreement asymptotic theoretical results theorem empirical properties also perform simulations assumptions theorem violated get intuitive sense whether assumptions appear necessary whether simply technical artifacts associated method proof developed numerical experiments report section seen complement theorem rather simple check 
practical relevance design matrices consider one realization random design matrices following three types design xij elliptical design xij addition independent partial hadamard design matrix formed random set columns hadamard matrix matrix whose columns orthogonal entries restricted consider two candidates design elliptical design standard normal distribution two degrees freedom denoted error distribution assume entries one two distributions namely violates assumption evaluate finite sample performance consider sample sizes section consider huber loss huber default yields relative efficiency gaussian errors problems also carried numerical work see appendix details asymptotic normality single coordinate first simulate finite sample distribution first coordinate combination sample size type design elliptical hadamard entry distribution normal error distribution normal run simulations consisting following steps step generate one design matrix step generate error vectors step regress design matrix end random samples denoted step estimate standard deviation sample standard error step construct confidence interval step calculate empirical coverage proportion confidence intervals cover true finally display boxplots empirical coverages case figure worth mentioning theories cover two cases design normal entries normal errors orange bars first row first column see proposition elliptical design normal factors normal errors orange bars second row first column see proposition first discuss case case two samples per parameter nonetheless observe coverage quite close even sample size small cases covered theories cases interesting see coverage valid stable partial hadamard design case sensitive distribution multiplicative factor elliptical design case even error distribution designs coverage still valid stable entry normal contrast entry distribution coverage large variation small samples average coverage still close normal design case slightly lower design case summary finite sample distribution sensitive entry distribution error distribution indicates assumptions design matrix artifacts proof quite essential conclusion drawn case except variation becomes larger cases sample size small however worth pointing even case samples per parameter sample distribution well approximated normal distribution moderate sample size contrast classical rule thumb suggests samples needed per parameter asymptotic normality multiple marginals since theory holds general worth checking approximation multiple coordinates finite samples illustration consider coordinates namely simultaneously calculate minimum empirical coverage avoid finite sample dependence coordinates involved simulation estimate empirical coverage independently coordinate specifically run simulations consisting following steps step generate one design matrix step generate error vectors step regress design matrix end random samples using response vector step estimate standard deviation sample standard error step construct confidence interval coverage normal coverage normal ellip coverage ellip coverage iid iid hadamard hadamard sample size entry dist normal sample size hadamard entry dist normal hadamard figure empirical coverage left right using loss corresponds sample size ranging corresponds empirical coverage column represents error distribution row represents type design orange solid bar corresponds case normal blue dotted bar corresponds case red dashed bar represents hadamard design step calculate empirical coverage proportion confidence intervals 
cover true denoted step report minimum coverage assumptions satisfied also close result theorem thus measure approximation accuracy multiple marginals figure displays boxplots quantity scenarios last subsection two cases theories cover minimum coverage increasingly closer true level similar last subsection approximation accurate partial hadamard design case insensitive distribution multiplicative factors elliptical design case however approximation inaccurate design case shows evidence technical assumptions artifacts proof hand figure suggests using conservative variance estimator jackknife estimator corrections confidence level order make simultaneous inference multiple coordinates investigate validity bonferroni correction modifying step step confidence interval bonferroni correction obtained quantile standard normal distribution proportion least marginals close normal distribution modify confidence intervals step calculate proportion step figure displays boxplots coverage clear bonferroni correction gives valid coverage except error distribution min coverage normal min coverage normal ellip coverage ellip coverage iid iid hadamard hadamard sample size entry dist normal sample size hadamard entry dist normal hadamard figure mininum empirical coverage left right using loss corresponds sample size ranging corresponds minimum empirical coverage column represents error distribution row represents type design orange solid bar corresponds case normal blue dotted bar corresponds case red dashed bar represents hadamard design conclusion proved asymptotic normality regression asymptotic regime fixed design matrices appropriate technical assumptions design assumptions satisfied high probability broad class random designs main novel ingredient proof use inequality numerical experiments confirm complement theoretical results bonf coverage normal bonf coverage normal iid iid hadamard hadamard ellip ellip coverage coverage sample size entry dist normal sample size hadamard entry dist normal hadamard figure empirical coverage bonferroni correction left right using loss corresponds sample size ranging corresponds empirical uniform coverage bonferroni correction column represents error distribution row represents type design orange solid bar corresponds case normal blue dotted bar corresponds case red dashed bar represents hadamard design references anderson introduction multivariate statistical analysis wiley new york bai silverstein spectral analysis large dimensional random matrices vol springer bai yin limit smallest eigenvalue large dimensional sample covariance matrix annals probability baranchik inadmissibility maximum likelihood estimators multiple regression problems three independent variables annals statistics bean bickel karoui lim penalized robust regression technical report department statistics berkeley bean bickel karoui optimal highdimensional regression proceedings national academy sciences bickel doksum mathematical statistics basic ideas selected topics volume vol crc press bickel freedman asymptotic theory bootstrap annals statistics bickel freedman bootstrapping regression models many parameters festschrift erich lehmann chatterjee fluctuations eigenvalues second order inequalities probability theory related fields chernoff note inequality involving normal distribution annals probability cizek weron statistical tools finance insurance springer science business media cochran sampling techniques john wiley sons david nagaraja order statistics wiley online library donoho montanari high 
dimensional robust asymptotic variance via approximate message passing probability theory related fields durrett probability theory examples cambridge university press efron efron jackknife bootstrap resampling plans vol siam karoui concentration measure spectra random matrices applications correlation matrices elliptical distributions beyond annals applied probability karoui effects markowitz problem quadratic programs linear constraints risk underestimation annals statistics karoui asymptotic behavior unregularized robust regression estimators rigorous results arxiv preprint karoui impact predictor geometry performance highdimensional generalized robust regression estimators technical report department statistics berkeley karoui bean bickel lim robust regression predictors technical report department statistics berkeley karoui bean bickel lim robust regression predictors proceedings national academy sciences karoui purdom trust bootstrap technical report department statistics berkeley esseen fourier analysis distribution functions mathematical study law acta mathematica geman limit theorem norm random matrices annals probability hanson wright bound tail probabilities quadratic forms independent random variables annals mathematical statistics horn johnson matrix analysis cambridge university press huber robust estimation location parameter annals mathematical statistics huber wald lecture robust statistics review annals mathematical statistics huber robust regression asymptotics conjectures monte carlo annals statistics huber robust statistics john wiley sons new york huber robust statistics springer johnstone distribution largest eigenvalue principal components analysis annals statistics klebanov inadmissibility robust estimators respect norm lecture series latala estimates norms random matrices proceedings american mathematical society ledoux concentration measure phenomenon american mathematical soc litvak pajor rudelson smallest singular value random matrices geometry random polytopes advances mathematics mallows note asymptotic joint normality annals mathematical statistics mammen asymptotics increasing dimension robust regression applications bootstrap annals statistics pastur distribution eigenvalues sets random matrices mathematics muirhead aspects multivariate statistical theory vol john wiley sons portnoy asymptotic behavior regression parameters large consistency annals statistics portnoy asymptotic behavior estimators regression parameters large normal approximation annals statistics portnoy central limit theorem probability theory related fields portnoy central limit theorem applicable robust regression estimators journal multivariate analysis posekany felsenstein sykacek biological assessment robust noise models microarray data analysis bioinformatics relles robust regression modified tech dtic document rosenthal subspaces ofl spanned sequences independent random variables israel journal mathematics rudelson vershynin smallest singular value random rectangular matrix communications pure applied mathematics rudelson vershynin theory random matrices extreme singular values arxiv preprint rudelson vershynin inequality concentration electron commun probab scheffe analysis variance vol john wiley sons silverstein smallest eigenvalue large dimensional wishart matrix annals probability stone choice assessment statistical predictions journal royal statistical society series methodological tyler multivariate scatter annals statistics van der vaart asymptotic statistics cambridge 
university press vershynin introduction analysis random matrices arxiv preprint wachter probability plotting points principal components ninth interface symposium computer science statistics wachter strong limits random matrix spectra sample matrices independent elements annals probability wasserman roeder high dimensional variable selection annals statistics yohai robust estimates general linear model universidad nacional plata departamento matematica yohai maronna asymptotic behavior linear model annals statistics appendix proof sketch lemma appendix provide roadmap proving lemma considering special case one realization random matrix entries random matrix theory geman silverstein bai yin implies thus assumption satisfied high probability thus lemma holds high probability remains prove following lemma obtain theorem lemma let random matrix entries one realization assumptions polylog min var max polylog defined randomness comes upper bound first proposition rest proof symbol var denotes expectation variance conditional let eketj let block matrix inversion formula see proposition state proposition appendix implies since obtain bound similarly ekzjt ekzjt vector numerator linear contrast subgaussian entries fixed matrix denote column atk kak see section vershynin detailed discussion hence definition therefore simple union bound conclude kat max kak let log kat max kak log entails max kak polylog kakop polylog high probability coefficient matrix depends hence use directly however dependence removed replacing since depend since entries column highly influential words estimator change drastically removing column would suggest proved karoui polylog sup rigorously proved kzjt kzjt polylog see appendix details since independent kop kop polylog follows kzjt polylog summary polylog lower bound var approximating var var shown karoui karoui considers ridge regularized estimator different setting however argument still holds case proved appendix zij shown karoui max polylog thus var var refined calculation appendix shows polylog var var left show var polylog bounding var via var definition var polylog var polylog shown appendix var polylog result var var var previous paper karoui rewrite middle matrix idempotent hence positive thus obtain polylog var var polylog left show var polylog bounding var via recall definition see section var notice independent hence conditional distribution given remains marginal distribution since entries inequality hanson wright rudelson vershynin see proposition shown proposition implies quadratic form denoted zjt concentrated mean zjt ezj zjt consequence left show polylog lower bound definition var lower bounded variance recall random variable var independent copy suppose function implies var var words entails var lower bound var provided derivative bounded away application see var var hence var variance decomposition formula var var var var includes entry given function using var inf var inf var implies var var inf min var summing var obtain var inf min var shown appendix assumptions inf polylog proves result min var polylog proof theorem notation summarize notations subsection model considered design matrix random vector independent entries notice target quantity shift invariant assume without var loss generality provided full column rank see section details let xti denote row denote column throughout paper denote xij entry design matrix removing row design matrix removing column design matrix removing row column vector removing entry associated loss function defined xtk 
arg min similarly define version xtk arg min based notation define full residual xtk residual xtk diag diag four diagonal matrices defined diag diag define let denote indices coefficients interest say min max regarding technical assumptions need following quantities largest resp smallest eigenvalue matrix canonical basis vector let finally let max max max cov adopt landau notation addition say similarly say simplify logarithm factors use symbol polylog denote factor upper bounded denote factor log similarly use polylog lower bounded log finally restate technical assumptions exists polylog polylog smooth functions polylog moreover assume mini var polylog polylog xjt polylog polylog polylog deterministic approximation results appendix use several approximations random designs prove follow strategy karoui establishes deterministic results apply concentration inequalities obtain high probability bounds note solution xti need following key lemma bound calculated explicily lemma karoui proposition proof mean value theorem exists xti xti xti xti xti xti xti based lemma derive deterministic results informally stated appendix results shown karoui derive refined version unpenalized throughout subsection assume assumption implies following lemma lemma assumption state result define following quantities max max kxi max kxj following proposition summarizes deterministic results need proof proposition assumption norm estimator bounded define xij max iii difference bounded max difference full residual bounded max max proof lemma since zero definition implies first prove since diagonal entries lower bounded conclude note schur complement horn johnson chapter etj implies xjt xjt second term bounded definition see first term assumption implies use fact sign sign recall definition obtain since minimizer loss function xti holds putting together pieces conclude definition iii proof result almost karoui state sake completeness let subscript denotes entry subscript denotes subvector formed entry furthermore define rewrite definition hence xti xti mean value theorem exists xti xti xti xti xti xti xti xti xij xti xij let plug result obtain xti xij xti xij xti xij xti xti calculate entry note xij xti xij xij xti xij xij xij xti xij xij xij xij xij xij second last line uses definition putting results together obtain entails max derive bound maxi defined lemma xti definition xti xti xij last inequality derived definition see since column matrix norm upper bounded operator norm matrix notice middle matrix rhs displayed atom orthogonal projection matrix hence kop kop kop therefore max max kop thus max xjt xjt recall definition xjt xjt xjt result xjt kxj defined therefore putting part together obtain lemma since entry similar part iii result shown karoui state refined version sake completeness let defined xti xti xti kxi xti note kxi part iii kxi hand similar therefore summary approximation results technical assumptions derive rate approximations via proposition justifies approximations appendix theorem assumptions polylog max polylog iii max polylog max polylog max max polylog proof notice xej canonical basis vector kxj etj similarly consider instead conclude kxi recall definition conclude polylog since gaussian concentration property ledoux chapter implies hence finite lemma hence finite part proposition using convexity hence recall xti kxi kxi kxi others since zero mean consequence kxi kxi kxi kxi kxi using convexity hence max inequality max recall kxi thus max max polylog hand let polylog hence definition polylog 
summary polylog iii theorem exists assumption lemma defined lemma result recall definition convexity polylog assumption inequality polylog assumptions polylog putting pieces together obtain polylog max similarly holder inequality polylog assumptions polylog therefore max polylog follows previous part polylog assumptions multiplicative factors also polylog polylog polylog therefore max max polylog controlling gradient hessian proof lemma recall solution following equation xti taking derivative establishes establishes note rewritten fix note xti xti recall eti gek canonical basis result diag gek taking derivative diag gek defined etj diag gek etk diag etj use fact diag diag vectors implies diag etj proof lemma throughout proof using simple fact based found etj etj etj etj thus recall etj etj etj etj emphasize use naive bound etj etj etj polylog since fails guarantee convergence distance address issue deriving lemma contrast proved polylog etj thus produces slightly tighter bound etj olm polylog turns bound suffices prove convergence although implies possibility sharpen bound using refined analysis explore avoid extra conditions notation bound first derive bound definition lemma etj hand follows etj etj putting two bounds together bound obtain bound finally derive bound lemma involves operator norm symmetric matrix form diagonal matrix triangle inequality kop kop kgkop note projection matrix idempotent implies write kgkop returning obtain diag etj kgkop etj etj etj assumption implies hence therefore etj etj proof lemma theorem using inequality proposition max var var polylog var var polylog follows polylog bound simplified max polylog var var remark use naive bound repeating derivation case obtain worse bound polylog polylog max var polylog var however prove var without numerator shown polylog next subsection convergence proved upper bound mentioned appendix approximate remove functional dependence achieve introduce two terms ketj defined ketj first prove negligible derive upper bound controlling lemma max theorem polylog erj bound via fact algebra follows ketj ketj ketj etj lemma thus entails etj etj polylog bound first prove useful lemma lemma symmetric matrix kop kop proof first notice therefore since kop positive kop therefore kop back bounding let lemma max hence kbj kop theorem polylog using fact obtain ketj ketj etj etj inner matrix rewritten let knj kop kaj kop kbj kop kaj kop event knj kop lemma together entails etj etj etj kaj kop since kbj kop kop kaj kop kbj kop thus etj polylog event since together markov inequality implies htat etj polylog putting pieces together conclude etj etj polylog bound similar block matrix inversion formula see proposition etj xjt xjt recall xjt numerator recalling definition obtain max max kxjt proved max entails kxjt polylog putting pieces together conclude ekxjt polylog summary based results section section polylog note bounds obtained depend conclude polylog max lower bound var approximating var var theorem max polylog max polylog using fact bound difference polylog similarly since polylog ebj putting two results together conclude polylog var var left show var polylog controlling var var recall xij var enj enj using fact var enj enj controlling assumption implies cov var npolylog left show cov polylog since result also used later appendix state following lemma lemma assumptions cov min var polylog proof implies var var note function apply obtain lower bound var fact variance decomposition formula using independence var var var var includes entry 
apply var inf var hence var var inf var compute similar eti defined eti eti eti definition let denote matrix removing row block matrix inversion formula see proposition eti implies eti eti eti eti eti eti eti apply argument eti eti thus var eti summing obtain cov min var min var min var assumption conclude since mini var polylog cov polylog summary var polylog recall xjt conclude var polylog controlling definition enj enj enj var enj enj var cov enj var var var var proof theorem polylog last equality uses fact polylog proved hand let independent copy var since shown var var bound var propose using standard inequality chernoff stated follows proposition let twice differentiable function var case hence twice differentiable function var max applying var given using chain rule fact dbb square matrix obtain defined last subsection implies xjt entails var similar recalling definition first compute diag diag let denotes hadamard product xjt xjt xjt diag use fact vectors diag together imply var note lemma therefore obtain var kxj kxj shown kop hand notice row see definition definition kxj max max assumption kxj entails polylog var polylog combining obtain polylog summary putting together conclude polylog var var polylog polylog polylog combining var polylog proof results proofs propositions section proof proposition let first prove conditions imply unique minimizer fact since using fact even result unique minimizer xti xti xti equality holds iff xti since unique minimizer implies since full column rank conclude proof proposition let xti since minimizes holds xti note unique minimizer equality holds since full column rank must hold proofs corollary proposition suppose function unique minimizer assume xjnc contains intercept term xjn full column rank span span jnc let arg min min proof let xti minimizer might unique prove follows argument proposition xti xjn jnc since xjnc contains intercept term xjn span jnc follows xjn since xjn full column rank conclude proposition implies identifiable even full column rank similar conclusion holds estimator residuals following two propositions show certain assumptions invariant choice presense multiple minimizers proposition suppose convex twice differentiable let minimizer might unique xti independent choice proof conclusion obvious unique minimizer otherwise let two different minimizers denote difference since convex minimizer taylor expansion since minimizers letting tend conclude hessian written diag xti thus satisfies implies hence cases proposition suppose convex twice differentiable assume xjn full column rank span span jnc let minimizer might unique xti independent choice proof proof proposition conclude minimizers decompose term two parts xjn span jnc follows xjn since xjn full column rank conclude hence proof corollary assumption xjn must full column rank otherwise exists xjn case xjtn xjn violates assumption hand also guarantees span span jnc together assumption proposition implies independent choice let assume invertible let xjn xjnc xjnc rank rank model rewritten let might unique based proposition shows independent choice invariance argument shows rest proof use denote quantity obtained based first show assumption affected transformation fact definition span span hence residuals changed proposition implies recall definition condition entails particular xjtnc implies cov xjtnc xjnc thus xjt xjcn xjnc prove assumption also affected transformation argument shown hand let let denote matrix removing row column also recall definition hand definition thus 
xjcn summary putting pieces together theorem max dtv var provided satisfies assumption let singular value decomposition xjnc diag diagonal matrix formed singular values xjnc first consider case xjnc full column rank let xjtn xjn xjtn xjn xjtn xjnc xjtnc xjnc xjnc xjn implies max assumption implies polylog min polylog theorem conclude next consider case xjcn full column rank first remove redundant columns xjcn replace xjnc matrix formed maximum linear independent subset denote matrix span span span span consequence proposition neither affected thus reasoning applies case proofs results section first prove two lemmas regarding behavior lemmas needed justifying assumption examples lemma assumptions kqj kop kqj cov defined section proof lemma definition sup unit sphere given cov var shown appendix eti yields eti standard inequality see proposition since max var conclude lemma appendix kop therefore sup var hence lemma assumptions polylog mini var proof direct consequence lemma throughout following proofs use several results random matrix theory bound largest smallest singular values results shown appendix furthermore contrast sections notation var denotes probability expectation variance respect section proof proposition proposition thus assumption holds high probability inequality hanson wright rudelson vershynin see proposition given deterministic matrix azj ezjt azj exp min kakop universal constant let conditioning lemma know kqj kqj kop hence exp min note zjt zjt lemma conclude zjt exp min let take expectation sides obtain zjt exp min hence zjt min exp min entails min zjt polylog thus assumption also satisfied high probability hand since entries deterministic unit vector hence let since independent union bound gives log log fubini formula durrett lemma log log log log log log log polylog polylog together markov inequality guarantees assumption also satisfied high probability proof proposition left prove assumption holds high probability proof assumption exactly proof proposition proposition hand proposition litvak thus proof proposition since excludes intercept term proof assumption still proposition left prove assumption let rademacher random variables diag left show assumption holds high probability note borel sets last two lines uses symmetry conclude independent entries since rows independent independent entries since symmetric unit variance also symmetric variance bounded satisfies conditions propsition hence assumption satisfied high probability proof proposition proposition special case let standard gaussian entries proposition satisfies assumption high probability thus polylog polylog assumption first step calculate zjt let vec consequence thus easy see min max shown hence zjt let zjt zjt lemma kop kqj kop hence inequality hanson wright rudelson vershynin see proposition obtain similar inequality follows zjt exp min hand zjt zjt definition lemma similar obtain zjt exp min let zjt exp min union bound together yields min zjt min polylog polylog assumption let note using argument obtain max polylog polylog markov inequality polylog proof proposition proof assumptions hold high probability exactly proof proposition left prove tion see corollary let mini recall definition rewrite obvious span span consequence remains prove polylog polylog prove let left show polylog polylog definition mini maxi polylog since standard gaussian entries proposition moreover maxi polylog thus polylog hand similar proposition diag rademacher random variables argument proof proposition implies independent 
entries norm bounded variance lower bounded proposition satisfies assumption high probability therefore holds high probability proof proposition let matrix entries zij proposition proposition zij satisfies assumption high probability notice polylog polylog thus satisfies assumption high probability conditioning realization law zij change due independence repeating arguments proof proposition proposition show zjt polylog max polylog zjt zjt zjt polylog max max max max max polylog markov inequality assumption satisfied high probability proof proposition concentration inequality plus union bound imply max log log thus high probability log polylog let subset size proposition proposition conditions proposition proposition exists constants depend represents formed row union bound stirling formula exists constant exp log log sufficiently small sufficiently large log log hence zit lemma lim inf min zit hand since continuous largest let set indices corresponding largest probability lim inf lim inf lim inf lim inf lim inf lim inf min prove assumption similar proof proposition left show min polylog furthermore lemma remains prove min polylog recalling equation proof lemma eti proposition zjt hand apply min union bound indicates probability min min min max implies min min min moreover discussed log min almost surely thus follows high probability eti eti log bound holds diagonal elements uniformly high probability therefore log polylog result assumption satisfied high probability finally obtain max cauchy inequality max max similar conclude polylog markov inequality assumption satisfied high probability results section relation section give sufficient almost necessary condition coordinatewise asymptotic normality estimator see theorem subsubsection show generalization general mestimators consider matrix obtain using general loss functions block matrix inversion formula see proposition use approximation result holds ketj kej recall row max ketj ketj side equals case therefore although complicated form assumption artifact proof essential asymptotic normality additional examples benefit analytical form estimator depart subgaussinity entries following proposition shows random design matrix entries appropriate moment conditions satisfies high probability implies one realization conditions theorem satisfied high probability proposition zij independent random variables var zij full column rank ezj span jnc almost surely column max typical practically interesting example contains intercept term entries continuous distribution sufficiently many moments case first three conditions easily checked ezj multiple belongs span jnc fact condition allows proposition cover general cases one example census study fix effect might added model zit represents state subject case contains formed anova forms mentioned example latter usually incorporated adjusting group bias target inference condition satisfied zij mean group ezij proof proposition formula etj zjt zjt projection matrix generated ketj kzjt ketj zjt similar proofs examples strategy show numerator linear contrast denominator quadratic form concentrated around means specifically show exists constants max sup kazj zjt azj holds since independent assumptions zjt sup kazj zjt azj max kazj sup zjt azj thus probability max hence max prove proof although looks messy essentially proof examples instead relying exponential concentration given show concentration terms moments fact idempotent sum square row bounded since jensen inequality ezij rosenthal inequality rosenthal 
exists universal constant aij zij ezij aij ezij aij aij let given markov inequality aij zij union bound implies kazj derive bound zjt azj since exists ezjt azj aii ezij bound tail probability need following result lemma bai silverstein lemma let nonrandom matrix random vector independent entries assume ewi constant depending easy extend lemma case rescaling fact denote variance let diag cov let entails hand thus obtain following result lemma let nonrandom matrix random vector independent entries suppose constant depending apply lemma obtain azj ezjt azj aat aat constant since idempotent eigenvalues either thus aat implies aat aat hence azj ezjt azj constant depends markov inequality azj ezjt azj combining conclude zjt azj notice depend therefore proved hence proposition additional numerical experiments section repeat experiments section using loss smooth satisfy technical conditions results displayed seen performance quite similar huber loss coverage normal coverage normal ellip coverage ellip coverage iid iid hadamard hadamard sample size entry dist normal sample size hadamard entry dist normal hadamard figure empirical coverage left right using loss corresponds sample size ranging corresponds empirical coverage column represents error distribution row represents type design orange solid bar corresponds case normal blue dotted bar corresponds case red dashed bar represents hadamard design min coverage normal min coverage normal ellip coverage ellip coverage iid iid hadamard hadamard sample size entry dist normal sample size hadamard entry dist normal hadamard figure mininum empirical coverage left right using loss corresponds sample size ranging corresponds minimum empirical coverage column represents error distribution row represents type design orange solid bar corresponds case normal blue dotted bar corresponds case red dashed bar represents hadamard design bonf coverage normal bonf coverage normal iid iid hadamard hadamard ellip ellip coverage coverage sample size entry dist normal sample size hadamard entry dist normal hadamard figure empirical coverage bonferroni correction left right using loss corresponds sample size ranging corresponds empirical uniform coverage bonferroni correction column represents error distribution row represents type design orange solid bar corresponds case normal blue dotted bar corresponds case red dashed bar represents hadamard design miscellaneous appendix state several technical results sake completeness proposition horn johnson formula let invertible matrix write block matrix invertible matrices schur complement proposition rudelson vershynin improved version original form hanson wright let random vector independent components every exp min kakop proposition bai yin zij random variables zero mean unit variance finite fourth moment proposition latala suppose zij independent random variables finite fourth moment max ezij ezij ezij universal constant particular ezij uniformly bounded proposition rudelson vershynin suppose zij independent random variables exists universal constant proposition rudelson vershynin suppose zij random variables zero mean unit variance universal constants proposition litvak suppose zij independent random variables zij var zij exists constants depends | 10 |
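The row above describes a coverage experiment for a single coordinate of a Huber-loss M-estimate: fix one realization of the design matrix, draw B independent error vectors, re-fit the estimator, estimate the standard error of the first coordinate from the B replicates, form normal confidence intervals, and report the proportion that cover the true value. The sketch below only illustrates that protocol; the sample sizes (n = 200, p = 100), the Laplace errors, the Huber parameter delta = 1.345, B = 500 replications, and the IRLS solver are illustrative assumptions of mine, not the settings or solver used in the study.

```python
"""Minimal sketch of the coordinate-wise coverage protocol described above.

Illustrative assumptions (not taken from the text): n = 200, p = 100, an i.i.d.
Gaussian design, double-exponential (Laplace) errors, Huber loss with
delta = 1.345, B = 500 replications, and a plain IRLS solver.
"""
import numpy as np


def huber_irls(X, y, delta=1.345, max_iter=100, tol=1e-8):
    """Minimize sum_i rho_delta(y_i - x_i^T b) by iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # least-squares starting point
    for _ in range(max_iter):
        r = y - X @ beta
        a = np.maximum(np.abs(r), 1e-12)
        w = np.where(a <= delta, 1.0, delta / a)       # Huber weights psi(r)/r
        XtW = X.T * w                                  # X^T W with W = diag(w)
        beta_new = np.linalg.solve(XtW @ X, XtW @ y)   # weighted normal equations
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta


def coverage_experiment(n=200, p=100, B=500, seed=0):
    """Empirical coverage of a normal CI for the first coordinate, one fixed design."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, p))      # step 1: one realization of the design matrix
    beta_true = np.zeros(p)              # WLOG the true coefficient vector is zero
    est = np.empty(B)
    for b in range(B):                   # steps 2-3: fresh errors, re-fit the M-estimator
        eps = rng.laplace(scale=1.0, size=n)
        y = X @ beta_true + eps
        est[b] = huber_irls(X, y)[0]
    sd = est.std(ddof=1)                 # step 4: standard error across replications
    z = 1.96                             # ~97.5% normal quantile, for a 95% interval
    covered = np.abs(est - beta_true[0]) <= z * sd   # steps 5-6: intervals and coverage
    return covered.mean()


if __name__ == "__main__":
    print("empirical coverage, first coordinate:", coverage_experiment())
```

Because the standard error in step 4 is itself estimated from the same B replicates, the reported coverage fluctuates at roughly the 1/sqrt(B) scale; increasing B tightens the comparison with the nominal level.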
dec classification kleinian groups hausdorff dimensions one yong institute advanced study princeton university abstract paper provide complete classification kleinian group hausdorff dimensions less particular prove every purely loxodromic kleinian groups hausdorff dimension classical schottky group upper bound sharp application result implies every closed riemann surface uniformizable classical schottky group proof relies result hou space rectifiable closed curves introduction main theorem take kleinian groups finitely generated discrete subgroups psl main theorem theorem classification purely loxodromic kleinian group limit set hausdorff dimension classical schottky group bound sharp supported ambrose monell fundation note selberg lemma really restriction since finitely generated discrete subgroup psl finite index subgroup application following corollary resolution folklore problem bers classical schottky group uniformization closed riemann surface corollary follows work hou theorem hou every closed riemann surface uniformizable schottky group hausdorff dimension every point moduli space hausdorff dimension fiber schottky space corollary uniformization every closed riemann surface uniformized classical schottky group strategy proof first let recall result hou theorem hou exists kleinian group limit set hausdorff dimension classical schottky group define sup theorem maximal parameter schottky group hausdorff dimension classical schottky group hence theorem rephrased prove contradiction throughout paper assume show maximal recall hausdorff dimension function schottky space rank real analytic consequence theorem rankg schottky groups hausdorff dimension see section dimensional open connected submanifold schottky space proof done follows first note must contain schottky group otherwise maximal definition see proposition second show every element boundary either classical schottky group schottky group lemma contradicts first fact hence must bulk paper devoted proof second fact summarize idea following result bowen schottky group hausdorff dimension exist rectifiable closed curve let space bounded length closed curves intersects compact set equipped metric complete space see section show every bounded length limit sequence also show schottky group every open neighborhood relative topology see section every element open neighborhood also define linearity transversality invariant show classical schottky groups preserve invariants nonclassical schottky groups transverse linear given schottky group show exists open neighborhood relative topology space rectifiable curves respect frechet metric every point open neighborhood see lemma next assume sequence classical schottky groups schottky group hasudorff dimensions less one study singularity formations classical fundamental domains singularities three types tangent degenerate collapsing show singularities imply exists every open neighborhood quasicircle contain points essentially existence singularity obstruction existence open neighborhood see lemma hence follows results classical hausdorff dimensions less one must classical schottky group acknowledgement work made possible unwavering supports insightful conversations peter sarnak greatly indebted groundbreaking works peter sarnak guided author study problem first place wish express deepest gratitude sincere appreciation dave gabai continuous amazing supports encouragements allowed complete work want express sincere appreciation referee detailed reading helpful comments suggestions also want 
express sincere appreciation ian agol matthew reading previous draft paper dedicated father shuying hou generating jordan curves schottky group rank defined discrete faithful representation free group psl follows freely generated purely loxodromic elements implies find collection open topological disks disjoint closure riemann sphere boundary curves definition closed jordan curves riemann sphere whenever exists set generators circles called classical schottky group classical generators schottky space defined space rank schottky groups conjugacy psl normalization chart complex parameters hence dimensional complex manifold bihomolomorphic auto group isomorphic quotient group denote set elements classical schottky groups note open hand nontrivial result due marden subset however follows theorem dimensional open connected submanifold denotes space schottky groups hausdorff dimension notations given kleinian group denote limit set region discontinuity hausdorff dimension respectively throughout paper given fundamental domain denote orbit actions also say classical fundamental domain classical schottky group disjoint circles definition given geometrically finite kleinian group closed jordan curve contains limit set called remark make global assumption throughout paper schottky groups hausdorff dimension stated otherwise next give construction generalization construction bowen let fundamental domain collection disjoint jordan curves comprising let denote collection arcs connecting points arcs connects set disjoint curves connecting disjoint points collection jordan curves figure figure defines closed curve containing defines obviously infinitely many different gives different note simply connected regions gives bers simultaneous uniformization riemann surface definition generating curve given say collection disjoint curves generating curve generated note constructed requires imagine element subset collection defined fact generalization also used construction schottky groups proposition every generated generating curves proof let let fundamental domain set consists collection disjoint curves intersects along hence since hence generating curve definition linear call linear consists points circular arcs lines note linear exists circular arcs lines say arc orthogonal tangents intersections orthogonal arc parallel definition given linear linear arcs intersect say definition transverse given say transverse intersects orthogonally parallel arc otherwise say definition parallel given say parallel exists arc proposition transverse always exists given schottky group proof let bounded distinct jordan closed curves take curve connecting intersects orthogonally noted general necessarily rectifiable instance take generating curves recall curve said rectifiable hausdorff measure curve finite obstruction rectifiability fact following result bowen theorem given schottky group hausdorff dimension limit set exists rectifiable proof theorem relies fact poincare series converges proposition let schottky group suppose given generating curve rectifiable curve rectifiable proof let hausdorff measure since let let denotes derivative also since poincare series satisfies implies rectifiable rectifiable let compact set denote space closed curves bounded length intersect continuous rectifiable map let respective arclength distance defined inf sup homeo given compact space closed curves bounded length metric space respect two curves exists parametrization topology defined respect metric see let generating curve fix indexing 
set let parametrization define parametrization inf implies continuity respect generating curve proposition given compact space complete metric space respect proof let cauchy sequence rectifiable curves bounded length exists lipschitz parameterizations bounded lipschitz constants uniformly lipschitz completeness follows fact curves contained within large compact subset proposition exists closure classical schottky group proof suppose false every element hausdorff dimension classical schottky group since classical schottky space open schottky space open neighborhood definition maximal hence nonclassical schottky groups hausdorff dimension arbitrarily close exists sequence schottky groups hausdorff dimensions let limn since assumption schottky groups hausdorff dimension classical must either schottky group classical schottky group implies large must contradiction sequence schottky groups hence maximal proposition exists schottky group sequence classical schottky groups sup proof follows hausdorff dimension map real analytic map proposition exists theorem implies open submanifold hence exists sequence classical schottkys schottky space rectifiable curves take given proposition particular sequence classical schottky groups denote space quasiconformal maps follows quasiconformal deformation theory schottky space classical schottky group hausdorff dimension write psl remark throughout rest paper fix classical schottky group hausdorff dimension notations set collection schottky groups hausdorff dimension note exists sequence quasiconformal maps write write given kleinian group quasiconformal map schottky space also considered subspace provides analytic structure complex analytic manifold proposition let sequence schottky groups schottky group let fundamental domain exists sequence fundamental domain proof let jordan curves boundary set boundary fundamental domain hence fundamental domain defined lemma suppose every bounded length compact proof let note since limit set limit set compact rectifiable find compact set given let denote collection bounded length define closure set bounded length proposition subspace compact proof curves bounded length parametrization bounded lipschitz constants follows theorem uniform convergence topology since curves contained large compact set closed bounded hence compact define open sets relative topology given open set let schottky group let fundamental domain suppose every element let denote generating curve respect collection generating curves open set gives open set generating curves set collection generating curves elements define topology sometime denote curve limit rectifiable proposition let every bounded length addition linear linear proof let note since define sequence jordan closed curves follows proposition generating curve since hence generating curve denote generating curve since rectifiable curve modify necessary assume rectifiable curves let since hausdorff measure derivative hence rectifiable follows exists large hence bounded proposition finally linear linear since mobius maps preserves linearity linear definition given sequence quasi circles say convergent sequence also call non transverse also non transverse lemma existence let sequence schottky groups exists addition also non transverse non transverse proof let maps let non transverse property obviously preserved corollary let linear linear proof linearity obviously preserved corollary let quasicircles non transverse linear non transverse proof lemma open let let schottky group let exists open 
neighboredood every elements proof let fundamental domain let generating curve respect let fundamental domain proposition let open set generating curve set open sets generating curves figure let open set generating curves since large choose sufficiently small neighborhood open neighborhood generating curves let generated assuming sufficiently small neighborhood length small large generated since bounded length defines open set elements corollary let schottky group let exists open neighborhood every element figure open set generating curves proof follows lemma lemma next analyze formations singularities given sequence classical schottky groups converging schottky group types singularities studied lemma singularity let schottky group assume schottky group exists every contains say closed curve contains singularity point proof let classical fundamental domain given sequence classical fundamental domains convergence consider follows collection circles riemann sphere pass subsequence necessary limn either point circle say convergents region boundary consists limn necessarily either point circle note necessarily fundamental domain necessarily connected let lim assumption classical schottky group classical fundamental domain consists circles points however circles may disjoint precisely following possible degeneration circles gives least one following singularities types tangency contains tangent circles degeneration contains circles degenerates point collapsing contains two circles collapses one circle two concentric circles centered origin rest circles squeezed two two collapse single circle consider contains tangency let tangency point let pass point assume lemma false sufficiently small open neighborhood contains follows proposition sequence let generating curves respect define follows note consists points linear arcs define generating curve consists linear arcs one end point point converges tangency addition also require arcs end points converges tangency point connect arc end points clear every contains generating curve property let generated set lim contains loop singularity point tangency figure hence since every contains must lemma true type singularity figure tangency singularity consider contains degeneration case must pass degeneration point note circles degenerates single point two possibilities degenerate points case two circles merge single degenerate point case circle degenerates point circle consider case two possible properties either point circle every must pass two separate arcs meet second possibility implies exists loop singularity hence jordan curve figure therefore first possibility follows proposition rectifiable limit sequence hence must pass however limit point hence passing contradiction figure degenerate singularities consider assume exists least two circles degenerates points otherwise third type collapsing singularity consider next circles degenerates point circle sequence quasicircles passes given generating curves curves connecting degenerating circle point converging linear arc intersecting orthogonally boundary neighboring converges loop singularity jordan curve figure finally note degeneration point limit point exists neighboring misses point hence contains curve limit sequence pass point gives consider contain collapsing let denote collapsed circle first suppose exists fixed point infinitely many elements fixed points let since figure degenerate singularities must either identical disjoint suppose fixed points elements contained infinitely many fixed points 
take three points fixed points elements sufficiently large must contained three distinct disk complement follows fact orbit images disks converging fixed points since distinct fixed points must disks contain one points since converges circle contained distinct disks bounded circles large implies must lies hence identical circles contradiction hence fixed points elements implies since schottky group must fuchsian group second kind classical schottky group contradiction proof theorem proof theorem proof contradiction suppose first note selberg lemma assume kleinian group note kleinian group must free show assume otherwise since purely loxodromic exists imbedded surface incompressible subgroup contradiction compressible cut along compression disks either end incompressible surface finitely many steps cutting obtain topological ball implies hence free let schottky group follows lemma exists open set every element schottky group lemma must every open set contains must contain gives contradiction hence must classical schottky group finally sharpness comes fact exists kleinian groups free hausdorff dimension equal one hence result yonghou references general relativity einstein equations oxford university press bowen hausdorff dimension button fuchsian schottky groups classical schottky groups geometry topology mono vol hou smooth moduli space riemann surfaces hou kleinian groups small hausdorff dimension classical schottky groups geometry topology hou finitely generated kleinian groups small hausdorff dimension classical schottky groups http phillips sarnak laplacian domains hyperbolic space limit sets kleinian groups acta | 4 |
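Since the plain-text row above drops all mathematical notation, the LaTeX fragment below restates the central definitions and the main classification statement as they read from that text. It is a reconstruction for readability, not a verbatim quotation: the precise hypotheses should be checked against the original paper, and the fragment assumes standard amsmath/amssymb macros and is meant to be pasted into a document body rather than compiled on its own.

```latex
% Reconstruction of the statements sketched in the preceding row (not verbatim).
\textbf{Definition.} A \emph{Kleinian group} is a finitely generated discrete subgroup of
$\mathrm{PSL}(2,\mathbb{C})$. A rank-$n$ \emph{Schottky group} is a free, purely loxodromic
Kleinian group $\Gamma=\langle\gamma_1,\dots,\gamma_n\rangle$ admitting $2n$ pairwise disjoint
Jordan curves that bound a common fundamental domain, each generator mapping the exterior of
one curve of its pair onto the interior of the other; $\Gamma$ is \emph{classical} if, for
some choice of generators, all $2n$ curves can be taken to be round circles.

\textbf{Theorem (classification).} If $\Gamma$ is a purely loxodromic Kleinian group whose
limit set $\Lambda(\Gamma)$ satisfies $\dim_{\mathcal{H}}\Lambda(\Gamma)<1$, then $\Gamma$ is
a classical Schottky group, and the bound $1$ is sharp.

\textbf{Proof strategy} (as described in the text). Setting
\[
  D_{\sup}=\sup\bigl\{\lambda \;:\; \dim_{\mathcal{H}}\Lambda(\Gamma)<\lambda
  \text{ implies } \Gamma \text{ is a classical Schottky group}\bigr\},
\]
one assumes $D_{\sup}<1$ and contradicts its maximality, using that the Hausdorff dimension is
real analytic on Schottky space and that, by Bowen's result, a Schottky group with
$\dim_{\mathcal{H}}\Lambda(\Gamma)<1$ admits a rectifiable invariant closed curve through its
limit set.
```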
aug technische keyword indexes string searching aleksander master thesis informatics department informatics technische keyword indexes string searching indizierung von volltexten und keywords textsuche author aleksander supervisor burkhard rost advisors szymon grabowski tatyana goldberg msc master thesis informatics department informatics august declaration authorship aleksander confirm master thesis work documented sources material used signed date abstract string searching consists locating substring longer text two strings approximately equal various similarity measures hamming distance exist strings defined broadly usually contain natural language biological data dna proteins also represent kinds data music images one solution string searching use online algorithms preprocess input text however often infeasible due massive sizes modern data sets alternatively one build index data structure aims speed string matching queries indexes divided ones operate whole input text answer arbitrary queries keyword indexes store dictionary individual words work present literature review index categories well contributions mostly first contribution index modification compressed index trades space speed approach count table occurrence lists store information selected addition individual characters two variants described namely one using bits space log log log average query time one linear space log log average query time input text length pattern length experimentally show significant speedup achieved operating albeit cost high space requirements hence name bloated category keyword indexes present split index efficiently solve problem especially error implementation language focused mostly data compaction beneficial search speed cache friendly compare solution algorithms show faster hamming distance used query times order microsecond reported one mismatch natural language dictionary minor contribution includes string sketches aim speed approximate string comparison cost additional space per string used context keyword indexes order deduce two strings differ least mismatches use fast bitwise operations rather explicit verification acknowledgements would like thank szymon grabowski constant support advice mentorship introduced academia would probably pursue scientific path vast knowledge ability explain things simply unmatched would like thank burkhard rost tatyana goldberg helpful remarks guidance field bioinformatics would like thank whole gank incoming dota team whole radogoszcz football airsoft pack ester keeps ego check duda whose decorum still spoiled edwin fung letting corporate giant consume fuchs still promises movie florentyna gust always supported difficult times cat korea jacek krasiukianis introducing realm street fighter games madaj biking tours jakub przybylski recently switched focus whisky beer remaining faithful interdisciplinary field malt engineering solving world problems together wojciech terepeta put face first sight morning indebted ozzy osbourne frank sinatra roger waters making world slightly interesting would like thank developers free yuml software http making life somewhat easier many thanks also goes family well giants lent shoulders contents declaration authorship abstract iii acknowledgements contents introduction applications natural language bioinformatics preliminaries sorting trees binary search tree trie hashing data structure comparison compression entropy pigeonhole principle overview string searching problem classification error metrics online searching exact 
approximate offline searching indexes contents exact suffix tree suffix array modifications structures transform operation efficiency flavors binary rank superlinear space linear space approximate blast keyword indexes exact bloom filter inverted index approximate problem permuterm index split index complexity compression parallelization inverted split index keyword selection minimizers string sketches experimental results split index string sketches conclusions future work data sets exact matching complexity split index compression contents vii string sketches english letter frequency hash functions bibliography list symbols list abbreviations list figures list tables scarecrow win nobel prize standing unknown everybody devoted precious time read work entirety chapter introduction bible consists old new testament composed roughly thousand words english language version bib literary works stature often regarded good candidates creating concordances listings words originated specific work collections usually included positions words allowed reader learn frequency context assembly task required lot effort rather favorable assumption friar today also referred research assistant would able achieve throughput one word per minute compilation confuse code generation bible would require thirteen thousand roughly one half years constant work naturally ignores additional efforts instance printing dissemination listing one earliest examples data structure constructed purpose faster searches cost space preprocessing luckily today capable building using various structures much shorter time aid silicone electrons capable human minds managed decrease times years seconds indexing seconds microseconds searching applications string searching always ubiquitous everyday life probably since creation written word modern world encounter text regular basis paper glass rubber human skin metal cement since century also electronic displays perform various operations almost time often subconsciously happens trivial situations looking interesting introduction news website slow sunday afternoon trying locate information bus timetable cold monday morning many familiar tasks finished faster thanks computers powerful machines also crucial scientific research specific areas discussed following subsections natural language years main application computers textual data natural language processing goes back work alan turing goal understand meaning well context language used one first programs could actually comprehend act upon english sentences bobrow student solved simple mathematical problems first application text processing string searching algorithms could really shine spell checking determining whether word written correct form consists testing whether word present dictionary set words functionality required since spelling errors appear relatively often due variety reasons ranging writer ignorance typing transmission errors research area started around first spell checker available application believed appeared today spell checking universal performed programs accept user input includes dedicated text editors programming tools email clients interfaces web browsers sophisticated approaches try take context account also described due fact checking dictionary membership prone errors mistyping peterson reported errors might undetected another familiar scenario searching words textual document book article allows locating relevant fragments much shorter time skimming text determining positions certain keywords order learn 
context neighboring words may also useful plagiarism detection including plagiarism computer programs use approximate methods similar words obtained dictionary correct spelling suggested spelling correction usually coupled spell checking may also include proper nouns example case shopping catalogs relevant products geographic information systems specific locations cities techniques also useful optical character recognition ocr serve verification mechanism applications security desirable check whether password close word dictionary data cleaning consists detecting errors duplication introduction data stored database string matching also employed preventing registration fraudulent websites similar addresses phenomenon known typosquatting may happen pattern searched explicitly specified case use web search engine would like find entire website specify keywords example information retrieval instance methods form important component architecture google engine bioinformatics biological data commonly represented textual form reason searched like text popular representations include dna alphabet four symbols corresponding nucleobases extended additional character indicating might nucleobase specified position used instance sequencing method could determine nucleobase desired certainty sometimes additional information quality read probability specific base determined correctly also stored rna four nucleobases similarly dna additional information may present proteins symbols corresponding different amino acids uppercase letters english alphabet additional symbols amino acids occurring species placeholders situations amino acid ambiguous letters english alphabet used computational information integral part field bioinformatics beginning end substantial activity development string sequence alignment algorithms rna structure prediction alignment methods allow finding evolutionary relationships genes proteins thus construct phylogenetic trees sequence similarity proteins important may imply structural well functional similarity researchers use tools blast try match string question similar ones database proteins genomes approximate methods play introduction important role related sequences often differ one another due mutations genetic material include point mutations changes single position well insertions deletions usually called indels another research area would thrive without computers genome sequencing caused fact sequencing methods read whole genome rather produce hundreds gigabytes strings dna reads whose typical length tens thousand base pairs whose exact positions genome known moreover reads often contain mistakes due imperfection sequencing goal computers calculate correct order using complicated statistical tools without reference genome latter called novo sequencing process well illustrated name shotgun sequencing likened shredding piece paper reconstructing pieces string searching crucial allows finding repeated occurrences certain patterns data also represented manipulated textual form includes music would like locate specific melody especially using approximate methods account slight variations imperfections singing pitch another field approximate methods play crucial role signal processing especially case audio signals processed speech recognition algorithms functionality becoming popular nowadays due evolution multimedia databases containing audiovisual data string algorithms also used intrusion detection systems goal identify malicious activities matching data system state graphs 
instruction sequences packets database string searching also applied detection arbitrary shapes images yet another application compression algorithms desirable find repetitive patterns similar way sequence searching biological data due fact almost data represented textual form many application areas exist see navarro information diversity data causes string algorithms used different scenarios pattern size vary letters words hundred dna reads input text almost arbitrary size instance google reported web search index reached thousand terabytes bytes goo massive data also present bioinformatics size genome single introduction organism often measured gigabytes one largest animal genomes belong lungfish salamander occupying approximately gbp roughly assuming base coded bits regards proteins uniprot protein database stores approximately million sequences composed roughly hundred symbols continues grow exponentially uni remarked recently biological textual databases grow quickly ability understand comes data magnitude feasible perform search meaning data preprocessed main focus thesis seems likely data sizes continue grow reason clear need development algorithms efficient practice preliminaries section presents overview data structures algorithms act building blocks ones presented later introduces necessary terminology string searching main topic thesis described following chapter throughout thesis data structures usually approached two angles theoretical concentrates space query time practical one latter focuses performance scenarios often heuristically oriented focused cache utilization reducing slow ram access worth noting theoretical algorithms sometimes perform poor practice certain constant factors ignored analysis moreover might even tested implemented hand practical evaluation depends heavily hardware peculiarities cpu cache instruction prefetching etc properties data sets used input importantly implementation moffat gog provided extensive analysis experimentation field string searching pointed various caveats include instance bias towards certain repetitive patterns patterns randomly sampled input text advantage smaller data sets increase probability least data would fit cache theoretical analysis algorithms based big family asymptotic notations including relevant lower case counterparts assume reader familiar tools complexity classes unless stated otherwise complexity analysis refers scenario logarithms assumed introduction base might also stated explicitly state complexity average worst case equal value mean running time algorithm hand time space explicitly mentioned word complexity might omitted array string vector indexes always assumed contiguous collection indexes collection strings consider standard hierarchical memory model ram faster cpu cache take granted data always fits main memory disk ignored moreover assume size data exceed bytes means sufficient pointer counter occupy bits bytes sizes given kilobytes megabytes indicated abbreviations refer standard computer science quantities rather sorting sorting consists ordering elements given set way following holds smallest element always front reverse sorting highest element front inequality sign reversed popular sorting methods include heapsort mergesort log worstcase time guarantees another algorithm quicksort average time log although worst case equal known times faster practice heapsort mergesort also exist algorithms linear used certain scenarios instance radix sort integers time complexity log radix machine word size comes 
sorting strings average length comparison sorting method would take log time assuming comparing two strings linear time alternatively could obtain time bound sorting letter column sorting method linear fixed alphabet essentially performing radix sort using counting sort moreover even achieve complexity building trie lexicographically ordered children level performing preorder search dfs see following subsections details comes suffix sorting sorting suffixes input text dedicated methods linear time guarantee often used due reduced space requirements good practical performance recently linear methods efficient practice also described introduction trees tree contains multiple nodes connected one node designated root every node contains zero children tree undirected graph two vertexes connected exactly one path cycles terminology relevant trees follows sec parent neighbor child located closer root vice versa sibling node shares parent leaves nodes without children graphical representation always shown bottom diagram leaves also called external nodes internal nodes nodes leaves descendants nodes located anywhere subtree rooted current node ancestors nodes anywhere path root inclusive current node proper descendants ancestors exclude current node ancestor descendant vice versa depth node length path node root height tree longest path root leaf depth deepest node maximum number children limited node many structures binary trees means every node two children generic term tree multiary full complete tree structure every node exactly case leaves case internal nodes children perfect tree tree leaves depth historical note apparently binary tree used called bifurcating arborescence early years computer science balanced tree tree whose height maintained respect total size irrespective possible updates deletions height balanced binary tree logarithmic log logk tree often desirable maintain balance otherwise tree may lose properties search complexity caused fact time complexity various algorithms proportional height tree exist many kinds trees characterized additional properties make useful certain purpose introduction binary search tree binary search tree bst used determining membership set storing pairs every node stores one value value right child always bigger value left child always smaller lookup operation consists traversing tree towards leaves either value found nodes process indicates value present bst often used maintain collection numbers however values also strings ordered alphabetically see figure crucial bst balanced otherwise scenario every node exactly one child height would linear basically forming linked list thus complexity traversal would degrade log occupied space clearly linear one node per value preprocessing takes log time insertion costs log karen alan bob tom erin sasha zelda figure binary search tree bst storing strings english alphabet value right child always bigger value parent value left child always smaller value parent trie trie digital tree tree position node specifically path root node describes associated value see figure nodes often store ids flags indicate whether given node word required nodes may intermediary associated value values often strings paths may correspond prefixes input text trie supports basic operations searching insertion deletion lookup check whether consecutive character query present trie moving towards leaves hence search complexity directly proportional length pattern order build trie perform full lookup introduction word thus preprocessing 
complexity equal words total length space linear one node per input character ten tea ted inn figure trie one basic structures used string searching constructed strings set inn tea ted ten edge corresponds one character strings stored implicitly shown clarity additional information ids shown inside parentheses sometimes kept nodes various modifications regular trie exist example could patricia trie whose aim reduce occupied space idea merge every node siblings parent thus reducing total number nodes resulting edge labels include characters edges merged complexities unchanged hashing hash function transforms data arbitrary size data fixed size typical output sizes include bits input principle type although hash functions usually designed work well particular kind data strings integers hash functions often certain desirable properties limited number collisions two chunks data probability relatively low called universal hash table exists group cryptographic hash functions offer certain guarantees regarding number collisions also provide means hard introduction mathematical sense example problem may deduce value input string hash value properties provided price reduced speed reason cryptographic hash functions usually used string matching perfect hash function guarantees collisions fks scheme space keys usually known beforehand although dynamic perfect hashing also considered minimal perfect hash function mphf uses every bucket hash table one value per bucket lower space bound describing mphf equal roughly bits elements complexity hash function usually linear input length although sometimes assumed takes constant time hash function integral part hash table data structure associates values buckets based key hash value represented following relation value hash tables often used string searching allow quick membership queries see figure size hash table usually much smaller number possible hash values often case collision occurs key produced two different values exist various methods resolving collisions popular ones follows chaining bucket holds list values hashed bucket probing collision occurs value inserted next unoccupied bucket may linear probing consecutive buckets scanned linearly empty bucket found quadratic probing gaps consecutive buckets formed results quadratic polynomial double hashing gaps consecutive buckets determined another hash function simple approach could instance locate next bucket index using formula mod two hash functions order resolve collisions keys usually stored well techniques try locate empty bucket opposed chaining referred open addressing key characteristic hash table load factor defined number entries divided number buckets let note open addressing performance degrades rapidly however case chaining holds entries introduction keys hash function buckets john smith lisa smith sandra dee figure hash table strings reproduced wikimedia data structure comparison previous subsections introduced data structures used sophisticated algorithms described following chapters still also used exact string searching figure present comparison complexities together linear direct access array noted even though worst case hash table lookup linear iterating one bucket stores elements extremely unlikely popular hash function offers reasonable guarantees building degenerate hash table data structure array balanced bst hash table trie lookup log avg preprocessing log space table comparison complexities basic data structures used exact string searching assume string comparison takes 
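To make one of the structures compared above concrete, the following minimal Python sketch implements the trie from the preceding figure (the strings inn, tea, ted, ten). It is an illustration only, not the implementation used later in the thesis; child maps are plain dictionaries so that a child is located in constant expected time.

```python
class TrieNode:
    __slots__ = ("children", "is_word")
    def __init__(self):
        self.children = {}    # character -> child node
        self.is_word = False  # True if a dictionary word ends at this node

class Trie:
    def __init__(self, words=()):
        self.root = TrieNode()
        for w in words:
            self.insert(w)

    def insert(self, word):
        node = self.root
        for ch in word:                                # one step per character
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def contains(self, word):
        node = self.root
        for ch in word:
            node = node.children.get(ch)
            if node is None:                           # no such path in the trie
                return False
        return node.is_word

t = Trie(["inn", "tea", "ted", "ten"])                 # the strings from the figure
assert t.contains("ted") and not t.contains("te")
```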
constant time compression compression consists representing data alternative encoded form purpose reducing size compression data decompressed decoded order obtain original representation typical applications include reducing storage sizes saving bandwidth transmission compression jorge stolfi available http introduction either lossless lossy depending whether result decompression matches original data former useful especially comes multimedia frequently used methods based human perception images lower quality may acceptable even indiscernible storing data uncompressed form often infeasible instance original size full movie bits per pixel frames per second would amount one terabyte data compressed sometimes called redundant one popular compression methods character substitution selected symbols bit replaced ones take less space classic algorithm called huffman coding offers optimal substitution method based frequencies produces codebook maps frequent characters shorter codes way every code uniquely decodable uniquely decodable data huffman coding offers compression rates close entropy see following subsection often used component complex algorithms refer reader salomon monograph information data compression entropy easily determine compression ratio taking size number occupied bits original data dividing size compressed data may seem following hold however certain algorithms might actually increase data size compressing operating inconvenient data set course highly undesirable related problem determine compressibility data optimal compression ratio highest brings notion entropy sometimes also called shannon entropy name author describes amount information contained message case strings determines average number bits required order encode input symbol specified alphabet frequency distribution means entropy describes theoretical bound data compression one exceeded algorithm higher entropy means difficult compress data multiple symbols appear equal frequency formula presented figure log figure formula shannon entropy entropy function probability symbol occurs constant introduction variation entropy used context strings called order entropy takes context preceding symbols account allows use different codes based context ignoring symbol always appears symbol shannon entropy corresponds case denoted general increase value also increase theoretical bound compressibility although size data required storing context information may point dominate space pigeonhole principle let consider situation buckets items positioned inside buckets pigeonhole principle often also called dirichlet principle states least one buckets must store one item name comes intuitive representation buckets boxes items pigeons despite simplicity principle successfully applied various mathematical problems also often used computer science example describe number collisions hash table later see pigeonhole principle also useful context string searching especially comes string partitioning approximate matching overview thesis organized follows chapter provides overview field string searching deals underlying theory introduces relevant notations discusses related work context online search algorithms chapter includes related work discusses current algorithms indexing well contribution area chapter keyword indexes chapter describes experimental setup presents practical results chapter contains conclusions pointers possible future work appendix offers information regarding data sets used experimental evaluation introduction appendix 
discusses complexity exact string comparison appendix discusses compression split index section detail appendix contains experimental results string sketches section used alphabet uniform letter frequencies appendix presents frequencies english alphabet letters appendix contains internet addresses reader obtain code hash functions used obtain experimental results split index section chapter string searching thesis deals strings sequences symbols specified alphabet string usually denoted alphabet length size strings alphabet arbitrary string sometimes called word confused machine word basic data unit processor defined given alphabet belongs set words specified said alphabet strings alphabets assumed finite alphabets totally ordered string specified value written teletype font abcd brackets usually used indicate character specified position index instance string text substring sometimes referred factor written inclusive range previous example single character substring length usually denoted last character indicated indicates string substring conversely indicates substring subscripts usually used distinguish multiple strings two strings may concatenated merged one recorded case removing one substring another indicated subtraction sign provided result occ occ occurrences equality sign indicates strings match exactly means following relation always holds string searching string matching refers locating substring pattern query length longer text textual data searched called input input string input text text database length denoted indicates complexity linear respect size original data pattern usually much smaller input often multiple orders string searching magnitude based position pattern text write occurs shift mentioned applications may vary see section data come many different domains still string searching algorithms operate text oblivious actual meaning data field concerning algorithms string processing sometimes called stringology two important notions prefixes suffixes former substring latter substring let observe point every substring prefix one suffixes original string well suffix one prefixes simple statement basis many algorithms described following chapters proper prefix suffix equal string strings lexicographically ordered means sorted according ordering characters given alphabet english alphabet letter comes comes etc formally two strings respective lengths min comes strings often mention lists contiguous characters strings substrings former usually used general terms latter used biological data especially dna reads length problem classification match pattern substring input text determined according specified similarity measure allows divide algorithms two categories exact approximate former refers direct matching length well characters corresponding positions must equal another relation represented formally two strings length equal simply case approximate matching similarity measured specified distance also called error metric two strings noted word approximation used strictly mathematical sense since approximate search actually harder exact one comes strings general given two strings distance minimum cost edit operations would transform vice versa edits usually defined finite set rules rule associated different cost error metrics used results string matching limited string searching substrings close pattern defined threshold report substrings metrics fixed penalties errors called maximum allowed number errors value may depend data set pattern length instance spell checking 
reasonable number errors higher longer words hold since otherwise pattern could match string corresponds exact matching scenario see subsection detailed descriptions popular error metrics problem searching also called lookup vary depending kind answer provided includes following operations match determining membership deciding whether decision problem consider search complexity usually implicitly mean match query count stating many times occurs refers cardinality set containing indexes equal specific values ignored scenario time complexity count operation often depends number occurrences denoted occ locate reporting occurrences returning indexes equal display showing characters located match aforementioned indexes display substrings case approximate matching might refer showing text substrings keywords string searching algorithms also categorized based whether data preprocessed one classification adapted melichar presented table offline searching also called searching preprocess text build data structure called index opposed online searching preprocessing input text takes place detailed descriptions examples classes consult chapters offline section online error metrics motivation behind error metrics minimize score strings somehow related character differences likely occur string searching text prepr yes yes pattern prepr yes yes algorithm type online online offline offline examples naive dynamic programming pattern automata rolling hash methods signature methods table algorithm classification based whether data preprocessed carry lower penalty depending application area instance case dna certain mutations appear real world much often others popular metrics include hamming distance relevant two strings equal length calculates number differing characters corresponding positions hence sometimes called problem throughout thesis denote hamming distance ham given ham ham without preprocessing calculating hamming distance takes time applications hamming distance include bioinformatics biometrics cheminformatics circuit design web crawling levenshtein distance measures minimum number edits defined insertions deletions substitutions first described context error correction data transmission must hold lev max calculation using dynamic programming algorithm takes time using min space see subsection ukkonen recognized certain properties matrix presented algorithm min time errors approximation algorithm time also described levenshtein distance sometimes called simply edit distance distance approximate matching explicitly specified assume levenshtein distance edit distances may allow subset edit actions longest common subsequence lcs restricted indels episode distance deletions another approach introduce additional actions examples include distance counts transposition one edit operation distance allows matching one character two vice versa specifically designed ocr distance weights substitutions based probability user may mistype one character another string searching sequence alignment may exist gaps characters substrings moreover certain characters may match even though strictly equal gaps lengths positions well inequality individual characters quantified score calculated using similarity matrix constructed based statistical properties elements domain question matrix sequence alignment proteins problem also formulated width gaps set positions corresponding characters hold means absolute values numerical differences certain characters exceed specified threshold sequence alignment generalization edit 
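The two metrics used most often in this thesis can be sketched in a few lines of Python. The Levenshtein routine below uses the two-row dynamic-programming formulation mentioned above, i.e. O(|a||b|) time and O(min(|a|, |b|)) additional space; function names are illustrative.

```python
def hamming(a, b):
    """Number of mismatching positions; defined only for strings of equal length."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def levenshtein(a, b):
    """Edit distance (insertions, deletions, substitutions) using two DP rows."""
    if len(a) < len(b):
        a, b = b, a                                     # keep the shorter string along the row
    prev = list(range(len(b) + 1))                      # distances from the empty prefix of a
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution or match
        prev = curr
    return prev[-1]

assert hamming("karolin", "kathrin") == 3
assert levenshtein("kitten", "sitting") == 3
```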
distance also performed multiple sequences although known regular expression matching patterns may contain certain metacharacters various meanings specify ranges characters match certain positions use additional constructs wildcard symbol matches consecutive characters type online searching section present selected algorithms online string searching divide exact approximate ones online algorithms preprocess input text however pattern may preprocessed assume preprocessing time complexity equal time required pattern preprocessing subsumed search complexity means consider scenario patterns known beforehand search time refers match query exact faro lecroq provided survey online algorithms exact matching remarked algorithms proposed since categorized algorithms following three groups character comparisons automata string searching bit parallelism naive algorithm attempts match every possible substring length pattern means iterates left right checks whether right left iteration would also possible algorithm would report results time complexity equal worst case although average see appendix information preprocessing space overhead even without text preprocessing performance naive algorithm improved significantly taking advantage information provided mismatches text pattern classical solutions matching include kmp algorithm kmp uses information regarding characters appear pattern order avoid repeated comparisons known naive approach reduces time complexity worst case cost space mismatch occurs position pattern algorithm shifts length longest proper prefix also suffix instead position starts matching position instead information regarding precomputed stored table size let observe algorithm skip characters input string interestingly navarro reported practice kmp algorithm roughly two times slower search although depends alphabet size algorithm hand omits certain characters input begins matching end pattern allows forward jumps based mismatches thanks preprocessing size shift determined constant time one two rules jumping called bad character rule given aligns rightmost occurrence shifts pattern rule complex good suffix rule whose description omit also part bmh algorithm uses bad character rule good suffix rule requires extra cost compute often practical time complexity algorithm equal min average holds bmh average number comparisons equal roughly improved achieve linear time worst case introducing additional rules string searching one algorithms developed later algorithm starts calculating hash value pattern preprocessing stage compares hash every substring text sliding similar way naive algorithm verification takes place two hashes equal trick use hash function computed constant time next substring given output previous substring next character socalled rolling hash viz simple example would simply add values characters exist functions rabin fingerprint treats characters polynomial variables indeterminate fixed base algorithm suitable matching since quickly compare hash current substring hashes patterns using efficient set data structure way obtain average time complexity assuming hashing takes linear time however still equal worst case hashes match verification required another approach taken algorithm builds finite state machine fsm automaton finite number states structure automaton resembles trie contains edges certain nodes represent transitions constructed queries attempts match queries sliding text transitions indicate next possible pattern still fully matched mismatch specified position 
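The rolling-hash approach described above can be sketched as follows. The polynomial base and modulus are arbitrary illustrative choices, and every hash hit is verified explicitly, so hash collisions never cause false matches.

```python
def karp_rabin_search(text, pattern, base=256, mod=(1 << 61) - 1):
    """Return all shifts where pattern occurs in text, using a polynomial rolling hash."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    high = pow(base, m - 1, mod)                  # weight of the window's leading character
    ph = th = 0
    for i in range(m):                            # hashes of the pattern and of the first window
        ph = (ph * base + ord(pattern[i])) % mod
        th = (th * base + ord(text[i])) % mod
    hits = []
    for s in range(n - m + 1):
        if th == ph and text[s:s + m] == pattern: # verify on a hash match
            hits.append(s)
        if s + m < n:                             # roll the window one character to the right
            th = ((th - ord(text[s]) * high) * base + ord(text[s + m])) % mod
    return hits

print(karp_rabin_search("abracadabra", "abra"))   # [0, 7]
```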
occurs search complexity equal log means linear respect input length length patterns building automaton number occurrences example algorithm algorithm gonnet aims speed comparisons pattern length smaller machine word size usually equal bits preprocessing mismatch mask computed character alphabet otherwise moreover maintain state mask initially set holds information matches far proceed similar manner naive algorithm trying match pattern every substring instead comparisons use bit operations step shift state mask left current character match reported significant bit equal provided time complexity equal masks occupy space based practical evaluation faro lecroq reported superior algorithm effectiveness depends heavily size pattern string searching size alphabet differences performance substantial algorithms fastest short patterns often among slowest long patterns vice versa approximate following paragraphs use denote complexity calculating distance function two strings consult subsection description popular metrics navarro presented extensive survey regarding approximate online matching categorizes algorithms four categories resemble ones presented exact scenario dynamic programming automata bit parallelism filtering naive algorithm works similar manner one exact searching compares pattern every possible substring input text forms generic idea adapted depending edit distance used reason time complexity equal oldest algorithms based principle dynamic programming means divide problem subproblems solved answers stored order avoid recomputing answers applicable subproblems overlap one examples algorithm originally designed compare biological sequences starting first character strings successively considers possible actions insertion mis match deletion constructs matrix holds alignment scores calculates global alignment use substitution matrix specifies alignment scores penalties situation scores triplet equal gaps matches mismatches respectively corresponds directly levenshtein distance consult figure method invoked input text one string pattern closely related variation algorithm algorithm also identify local global alignments allowing negative scores means alignment cover entire length string searching text therefore suitable locating pattern substring algorithms adapted distance metrics manipulating scoring matrix example assigning infinite costs order prohibit certain operations time complexity approaches equal possible calculate using min space despite simplicity methods still popular sequence alignment might relatively fast practice report true answer problem crucial quality alignment matters figure calculating alignment levenshtein distance using wunsch algorithm follow path corner selecting highest possible score underlined optimal global alignment follows text taxi gaps multiple dynamic programming algorithms proposed years gradually tightened theoretical bounds difference lies mostly flexibility possibility adapted distance metrics well practical performance notable results edit distance include algorithm average time using space algorithm time periodic patterns otherwise taking space refers occurrences certain substrings pattern text analysis rather lengthy lcs metric grabowski provided algorithm log log time bound linear space significant achievement automata category algorithm uses four russians technique consists partitioning matrix blocks precomputing values possible block using lookup table implicitly constructs automaton state corresponds values matrix obtain log expected time 
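The bit-parallel technique from the beginning of this section can be sketched as follows, shown here in the Shift-And formulation (the complementary Shift-Or variant, which keeps mismatch masks and an inverted state, works analogously). Python integers are unbounded, so the restriction that the pattern fit into one machine word is not enforced here.

```python
def shift_and(text, pattern):
    """Report the starting shifts of exact occurrences of pattern in text."""
    m = len(pattern)
    if m == 0:
        return []
    masks = {}
    for i, ch in enumerate(pattern):              # bit i of masks[ch] set iff pattern[i] == ch
        masks[ch] = masks.get(ch, 0) | (1 << i)
    accept = 1 << (m - 1)
    state = 0                                     # bit i set: pattern[0..i] matches the text read so far
    hits = []
    for pos, ch in enumerate(text):
        state = ((state << 1) | 1) & masks.get(ch, 0)
        if state & accept:                        # the most significant pattern bit is set
            hits.append(pos - m + 1)
    return hits

print(shift_and("abracadabra", "abra"))           # [0, 7]
```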
bound using space regards bit parallelism myers presented calculation matrix average time important category formed filtering algorithms try identify parts input text possible match substrings pattern string searching parts text rejected algorithm used remaining parts numerous filtering algorithms proposed one significant algorithm time bound error level holds large regards problem notable example porat algorithm answer locate query log time refined log word ram model log recently clifford described algorithm search time complexity log polylog offline searching online search often infeasible data since time required one lookup might measured order seconds caused fact online method access least characters input text normally holds thesis focused offline methods data structure index indexes indices opt former term built based input text order speed searches classic example data preprocessing justified even preprocessing time long since text often queried multiple patterns indexes divided two following categories indexes keyword dictionary indexes former means search substring input text string matching text matching whereas latter operates individual words word matching keyword matching dictionary matching matching dictionaries keyword indexes usually appropriate exist boundaries keywords often simply called words instance case natural language dictionary individual dna reads worth noting number distinct words almost always smaller total number words dictionary taken document set documents heaps law states text size empirical constant usually interval keyword indexes actually related often based similar concepts pigeonhole principle may even use kind underlying data structure string searching indexes divided static dynamic ones depending whether updates allowed initial construction another category formed external indexes optimized respect disk aim efficient data fit main memory also distinguish compressed indexes see subsection information compression store data encoded form one goal reduce storage requirements still allowing fast searches especially compared scenario naive decompression whole index performed hand also possible achieve space saving speedup respect uncompressed index achieved mostly due reduced rather surprisingly fewer comparisons required compressed data navarro note successful indexes nowadays obtain almost optimal space query time compressed data structure usually also falls category succinct data structure rather loose term commonly applied algorithms employ efficient data representations respect space often close theoretic bound thanks reduced storage requirements succinct data structures process texts order magnitude bigger ones suitable classical data structures term succinct may also suggest required decompress entire structure order perform lookup operation moreover certain indexes classified means implicitly store input string words possible transform decompress index back thus index essentially replace text main advantage indexes compared online scenario fast queries however naturally comes price indexes might occupy substantial amount space sometimes even orders magnitude input expensive construct often problematic support functionality approximate matching updates still navarro point spite existence fast practical theoretical point view online algorithms data size often renders online algorithms infeasible even relevant year methods explored detail following chapters indexes chapter keyword indexes chapter experimental evaluation contributions found chapter 
chapter indexes indexes allow searching arbitrary substring input text formally string length set substrings given alphabet index supporting matching specified distance query pattern returns substrings exact matching following sections describe data structures category divided exact section approximate ones section contribution field presented subsection describes variant called exact suffix tree suffix tree introduced weiner trie see subsection stores suffixes input string suffixes total string length moreover suffix tree compressed context means node one child merged child shown figure searching pattern takes time since proceed way similar search regular trie suffix trees offer lot additional functionality beyond string searching calculating compression searching string repeats takes linear space respect total input size uncompressed however occupies significantly space original string implementation around bytes average practice even worst case might bottleneck dealing massive data moreover space complexity given bits actually equal indexes log also case suffix array rather log required store original text comes preprocessing exist algorithms construct linear time regards implementation important consideration represent children node straightforward approach storing linked list would degrade search time since order achieve overall time able locate child constant time accomplished example hash table offers average time lookup banana banana ana ana nana anana figure suffix tree stores suffixes text banana appended terminating character prevents situation suffix could prefix another suffix common variation called generalized suffix tree refers stores multiple strings suffixes string additional information identifies string stored nodes complexities regular compressed suffix trees reduce space requirements also described usually based compressed suffix array suffix array suffix array comes manber myers stores indexes sorted suffixes input text see figure example according suffix arrays perform comparably suffix trees comes string indexes matching however slower kinds searches regular expression matching even though takes space original string bytes basic form original string stored well significantly smaller suffix tree better locality properties search takes log time since perform binary search suffixes comparison takes time although comparison constant average see appendix space complexity equal suffixes store one index per suffix possible construct linear time see puglisi extensive survey multiple construction algorithms practical evaluation concludes algorithm maniscalco puglisi fastest one parallel construction algorithms use gpu also considered let point similarity since sorted suffixes correspond traversal main disadvantage respect lack additional functionality mentioned previous subsection suffix ana anana banana nana index figure suffix array stores indexes sorted suffixes text banana suffixes stored explicitly although entire input text stored modifications multiple modifications original proposed years aim either speed searches storing additional information reduce space requirements compressing data omitting subset data notable examples presented enhanced suffix array esa variant additional information form longest common prefix lcp table stored suffix array string length lcp table holds integers range following properties holds length longest common prefix suffixes esa essentially replace suffix tree since offers functionality deal problems indexes time complexity although 
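As an illustration of the lookup just described, the following sketch builds a suffix array naively by sorting suffixes (O(n^2 log n) in the worst case, unlike the linear-time constructions cited above) and answers a locate query with two binary searches over the sorted suffixes. It is meant for small examples such as the banana text from the figures, not for large inputs.

```python
def build_suffix_array(text):
    """Indexes of the lexicographically sorted suffixes of text (naive construction)."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def sa_locate(text, sa, pattern):
    """Starting positions of pattern in text, via binary search on the suffix array."""
    m = len(pattern)

    def bound(upper):
        lo, hi = 0, len(sa)
        while lo < hi:
            mid = (lo + hi) // 2
            prefix = text[sa[mid]:sa[mid] + m]        # suffix truncated to pattern length
            if prefix < pattern or (upper and prefix == pattern):
                lo = mid + 1
            else:
                hi = mid
        return lo

    left, right = bound(False), bound(True)
    return sorted(sa[left:right])

text = "banana"
sa = build_suffix_array(text)        # [5, 3, 1, 0, 4, 2]: "a", "ana", "anana", "banana", "na", "nana"
print(sa_locate(text, sa, "ana"))    # [1, 3]
```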
constant alphabet size assumed analysis certain applications also required store transform see subsection input string inverse suffix array size index always reduced using compression method however naive approach would certainly negative impact search performance overhead associated decompression much better approach use dedicated solution presented compact suffix array cosa average search time length cosa practical space reduction replacing repetitive suffixes links suffixes grossi vitter introduced compressed suffix array csa uses log bits instead log bits based transformation array points position next suffix text instance text banana suffix ana next suffix see figure transformed values compressible certain properties fact number increasing sequences search takes time relation search time space using certain parameters information compressed indexes including modifications refer reader survey navarro presented subsection also regarded compressed variant suffix ana anana banana nana index csa index figure compressed suffix array csa text banana stores indexes pointing next suffix text shown clarity stored along csa sparse suffix array stores suffixes located positions form fixed value order answer query searches explicit verifications required must hold another notable example modified suffix array stores subset data sampled suffix array idea select subset alphabet denoted extract corresponding substrings text array constructed indexes suffixes start symbol chosen subalphabet although sorting performed full suffixes part pattern contains character searched one search total matches verified comparing rest pattern text disadvantage following must hold practical reduction space order reported recently grabowski raniszewski proposed alternative sampling technique based minimizers see section allows matching patterns minimizer window length requires one search structures suffix tray combines name suggests suffix tree suffix array structure whose nodes divided heavy light depending whether subtrees fewer leaves predefined threshold light children heavy nodes store corresponding interval query time equals log preprocessing space complexities equal authors also described dynamic variant called suffix trist allows updates yet another modification classical called suffix cactus reworks compaction procedure part construction instead collapsing nodes one child every internal node combined one children various methods selecting child exist alphabetical ordering thus take multiple forms input string original article reports best search times dna whereas performed worse english language random data space complexity equal compressed succinct index introduced ferragina manzini year applied variety situations instance sequence assembly ranked document retrieval multiple modifications described throughout years introduced following subsections strength original lies fact occupies less space input text still allowing fast queries search time unmodified version linear respect pattern length although alphabet assumed space complexity indexes equal log log log bits per input symbol taking alphabet size account grabowski provide accurate total size bound log log log logn bits transform based transform bwt ingenious method transforming string order reduce entropy bwt permutes characters way duplicated characters often appear next allows easier processing using methods encoding case compressor sew importantly transformation reversible opposed straightforward sorting means extract original string permuted order 
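Ahead of the detailed description below, the transform can be sketched naively: append a terminating character, sort all rotations, and take the last column; the inverse rebuilds the rotation matrix column by column and reads off the row ending in the terminator. This uses quadratic space and is an illustration only.

```python
def bwt(text, terminator="$"):
    """Last column of the sorted rotation matrix of text + terminator."""
    s = text + terminator
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def inverse_bwt(last, terminator="$"):
    """Rebuild the sorted rotation matrix one column at a time, then read off the original text."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    row = next(r for r in table if r.endswith(terminator))
    return row[:-1]

encoded = bwt("banana")              # "annb$aa"
assert inverse_bwt(encoded) == "banana"
```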
bwt could also used compression based order entropy described subsection since basic context information extracted bwt however loss speed renders approach impractical order calculate bwt first append special character describe practice character order indicate end character lexicographically smaller next step take rotations rotations total sort lexicographic order thus forming bwt matrix denote first column sorted characters last column result bwt bwt order finish transform take last character rotation demonstrated figure let note similarities bwt suffix array described subsection since sorted rotations correspond directly sorted suffixes see figure calculation takes time assuming prefixes sorted linear time space complexity naive approach equal linear optimized order reverse bwt first sort characters thus obtain first column matrix point two columns namely first last one means also character original string sorting gives first second column proceed manner later sort etc reach thus reconstruct whole transformation matrix point found row last character equal indexes figure calculating transform bwt string pattern appended terminating character required reversing transform rotations already sorted result last column bwt pattern nptr eta operation important aspects follows count table describes number occurrences lexicographically smaller characters see figure rank operation counts number set bits bit vector certain position assume included well rank select operation used variants rlfm reports position set bit bit vector select note rank select operations generalized finite alphabet perform search using iterate pattern characterwise reverse order maintaining current range initially start last character pattern range covers whole input string input string corresponds bwt text bwt step update using formulae presented figure size range last iteration gives number occurrences turns point mechanism also known efficiency see performance lookup rank crucial complexity search procedure particular operations constant search takes indexes bwt suffix attern ern pattern tern ttern figure relation bwt string pattern appended terminating character let note corresponds last character character position bwt character preceding suffix located position figure count table part text mississippi entries describe number occurrences lexicographically smaller characters instance letter occurrences occurrence hence worth noting actually compact representation column rank rank figure formulae updating range search procedure fmindex current character count table rank invoked bwt counts occurrences current character time count table simply precompute values store array size lookup regards rank naive implementation would iterate whole array would clearly take time hand precompute values would store table size one popular solutions efficient rank uses two structures introduced following paragraphs rrr authors names raman raman rao data structure answer rank query time bit vectors providing compression time divides bit vector size blocks size groups consecutive blocks one superblock see figure block store weight describes number set bits offset describes position table maximum value depends store value rank index indexes see figure means keep entries size consecutive weights scheme provides compression respect storing bits explicitly achieve query time storing rank value superblock thus search iterate blocks constant space complexity equal log log log bits figure example rrr blocks first superblock equal second superblock 
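Putting the count table and rank together, the backward-search count query can be sketched as below, following the update formulae above and reusing the mississippi example from the count-table figure. Rank is computed here by scanning the BWT string, whereas a real FM-index answers it in constant time with the structures discussed next; the helper functions are illustrative, not the thesis implementation.

```python
def _bwt(text, terminator="$"):
    s = text + terminator
    return "".join(r[-1] for r in sorted(s[i:] + s[:i] for i in range(len(s))))

def fm_count(text, pattern, terminator="$"):
    """Number of occurrences of pattern in text, via backward search on the BWT."""
    last = _bwt(text, terminator)
    counts = {}                                    # per-character frequencies, terminator included
    for ch in last:
        counts[ch] = counts.get(ch, 0) + 1
    c_table, running = {}, 0                       # C[c]: number of characters smaller than c
    for ch in sorted(counts):
        c_table[ch] = running
        running += counts[ch]

    def rank(ch, i):                               # occurrences of ch in last[0:i], naive O(i) scan
        return last[:i].count(ch)

    sp, ep = 0, len(last)                          # current half-open range of matching rows
    for ch in reversed(pattern):                   # process the pattern right to left
        if ch not in c_table:
            return 0
        sp = c_table[ch] + rank(ch, sp)
        ep = c_table[ch] + rank(ch, ep)
        if sp >= ep:
            return 0
    return ep - sp

print(fm_count("mississippi", "ssi"))              # 2
print(fm_count("mississippi", "i"))                # 4
```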
equal offset block value rank figure example rrr table number block values length weight equal rank presented successive indexes block values stored explicitly wavelet tree grossi balanced tree data structure stores hierarchy bit vectors instead original string allows use rrr bit vector efficient rank operation starting root recursively partition alphabet two subsets equal length number distinct characters even reach single symbols stored leaves characters belonging first subset indicated characters belonging second subset indicated consult figure example thanks implement rank query fixed size alphabet log time assuming binary rank calculated constant time since height tree equal log given character query node proceed left right depending subset belongs subsequent rank called form rank result rank previous level ferragina described generalized wts instance multiary log log log traversal time consult bowe thesis information practical evaluation flavors multiple flavors proposed years goal decreasing query time time without dependence reducing occupied space structures provide asymptotically optimal bounds often indexes abracadabra abaaaba rcdr rdr figure wavelet tree string abracadabra alphabet divided two subsets level corresponding one subset practical due large constants involved reason many authors focus practical performance structures usually based fast rank operation take advantage compressed representations bit vectors following paragraphs present selected examples consult navarro extensive survey compressed indexes discusses whole family one notable examples query time depend alphabetindependent grabowski idea first compress text using huffman coding apply bwt transform obtaining bit vector vector used searching manner corresponding fmindex array stores number zeros certain position relation rank bwt replaced rank rank rank length text compressed huffman space complexity equal bits average search time equal reasonable assumptions practical front grabowski recently described rank cache miss moreover proposed index several indexes variants instance one stores separate bit vector alphabet symbol vectors total variants include using certain dense codes well using multiary wavelet trees different arity values wavelet tree unbalanced paths frequent characters shorter translates smaller number rank queries bit vectors moreover operations performed manner regular wavelet tree faster average reported search times times faster methods cost using times space data structures concentrate reducing space requirements rather query time include compressed bit vectors different compression methods used blocks depending type block instance encoding blocks small number runs another notable example huo encodes bit vectors resulting using gamma coding kind coding thus obtain one best compression ratios practice binary rank described previous subsection order achieve good overall performance sufficient design data structure supports efficient rank query bit vectors thanks use wavelet tree rrr notable example jacobson originally showed possible obtain rank operation using extra bits holds select vigna proposed interleave store next one another blocks superblocks concepts introduced rrr structure uncompressed bit vectors order reduce number cache translation lookaside buffer tlb misses extended gog petri showed better practical performance using slightly different layout counters gonzalez navarro provided discussion dynamic scenario insertions deletions vector allowed obtain space bound log bits log log 
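A minimal sketch of the binary wavelet tree from the figure above, with the alphabet halved at every level. Bit vectors are plain Python lists and binary rank is computed by scanning, whereas a practical implementation would use RRR-style compressed vectors with constant-time rank.

```python
class WaveletTree:
    """Binary wavelet tree over a string; rank(symbol, i) counts occurrences of symbol
    among the first i characters."""
    def __init__(self, text, alphabet=None):
        self.alphabet = sorted(set(text)) if alphabet is None else alphabet
        if len(self.alphabet) <= 1:
            self.bits = None                           # leaf: a single distinct symbol
            return
        half = len(self.alphabet) // 2
        left_set = set(self.alphabet[:half])
        self.bits = [0 if ch in left_set else 1 for ch in text]    # 0: left half, 1: right half
        self.left = WaveletTree([c for c in text if c in left_set], self.alphabet[:half])
        self.right = WaveletTree([c for c in text if c not in left_set], self.alphabet[half:])

    def rank(self, symbol, i):
        if symbol not in self.alphabet:
            return 0
        if self.bits is None:                          # leaf: every position holds `symbol`
            return i
        half = len(self.alphabet) // 2
        ones = sum(self.bits[:i])                      # naive binary rank; RRR would give O(1)
        if symbol in self.alphabet[:half]:
            return self.left.rank(symbol, i - ones)
        return self.right.rank(symbol, ones)

wt = WaveletTree("abracadabra")      # splits into "abaaaba" and "rcdr", as in the figure
print(wt.rank("a", 8))               # 4
print(wt.rank("r", 11))              # 2
```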
log log time operations queries updates one crucial issues comes performance number cpu cache misses occur search comes fact order calculate access bwt sequence often required order misses search pattern length indexes even small alphabet problem cache misses backward search identified main performance limiter proposed perform several symbols time practice dna alphabet scheme described solution allowed example improve search speed factor price occupying roughly times size address problem cache misses pattern search count query way related solution also work yet algorithmic details different two following subsections describe two variants approach experimental results found section superlinear space variation aims speed queries cost additional space start calculating bwt input string way regular however difference operate rather individual characters count table stores results sampled bwt matrix case power predefined value qmax instance namely suffix take following form etc extracted reach qmax one contains terminating character terminating character discarded consult figure example let denote collection distinct item create list occurrences sorted suffix order simply called order resembles inverted index yet main difference elements lists arranged rather text order figure list occurrences corresponding rows would follows row bwt pattern attern ern patt patter pattern patte tern pat ttern tern patt tter atte figure extraction structure superlinear space text pattern extracted qmax indexes given pattern start longest suffix qmax following backward steps deal remaining prefix similar way note number steps equal number binary representation order log power two result match count queries reported constant time simply return qmax bigger overall index size bigger search faster patterns sufficient length allows farther jumps towards beginning pattern representation step translates performing two predecessor queries list naive solution binary search log time even linear search may faster list short yet predecessor query also handled log log time using trie hence overall average search complexity equal log log log log log log cache misses cache line size bytes provided symbol pattern occupies one byte regards space complexity total log occurrences log positions rows bwt matrix hence total length occurrence lists equal log total complexity equal bits since need log bits store one position bwt matrix rows regards implementation language focus data compaction acts key hash table collisions resolved chaining stored implicitly pointer length pair pointer refers original string values hash table include count list occurrences stored one contiguous array use binary search calculating rank lists whose length greater equal empirically determined value linear search otherwise linear space variant instead extracting etc row bwt matrix extract selected help minimizers consult subsection description minimizers first step calculate minimizers input text fixed parameters lexicographic ordering ties resolved favor leftmost smallest substrings next store count table occurrence lists single characters way regular using wavelet tree moreover store information counts occurrences located minimizers set referred phrases set minimizer indexes consecutive phrases constructed following indexes manner consult figure worth noting approach resembles recently proposed samsami index sampled suffix array minimizers phrases phrase ranges appearance appe figure constructing phrases text appearance use search proceeds follows 
calculate minimizers pattern search pattern suffix starting position rightmost minimizer using regular mechanism processing character time operate phrases minimizers rather individual characters search performed way superlinear variant turns phrase faster mechanism single characters used search pattern prefix starting position leftmost minimizer using regular mechanism processing character time use minimizers ensures phrases selected way selected index construction overall average search complexity equal log log assuming trie used space complexity linear approximate navarro provided extensive survey indexes approximate string matching categorized algorithms three categories based search procedure indexes neighborhood generation strings given pattern searched directly partitioning exact searching pies substrings pattern searched exact manner matches extended approximate matches intermediate partitioning substrings pattern searched approximately fewer number errors method lies two ones neighborhood generation approach generate pattern contains strings could possible matches specified alphabet alphabet finite amount strings finite well strings searched using exact index suffix tree suffix array main issue fact size grows exponentially means basically factors especially small suffix tree used index input text cobbs proposed solution reduces amount nodes processed runs time occupies space depends problem instance size output pattern partitioned searched exactly pies store index answer exact queries let note approach based pigeonhole principle context approximate string searching means given least one parts average length must match text exactly generally parts match parts created value large otherwise could case substantial part input text verified especially pattern small alternatively pattern divided overlapping searched using locate query index extracted text see figure example extraction stored index situated fixed positions interval must hold occurrences contain samples sutinen tarhio suggested optimal value order turns positions subsequent may correspond match explicit verification performed similarly scenario index used order answer exact queries let note approach pattern substring lookup verification also used exact searching indexes case intermediate partitioning split pattern pieces search pieces approximate manner using neighborhood generation case corresponds pure neighborhood generation whereas case almost like pies general method requires searching less verification compared pies thus lies two approaches previously described consult nowak order see detailed comparison complexities modern text indexing methods approximate matching notable structures theoretical point view include trie cole based suffix tree lcp structure see subsection description lcp used various contexts including keyword indexing well wildcard matching indexing problem uses logk space offers logk occ query time extended tsur described structure similar one cole time complexity log log occ constant space constant regards solution dedicated hamming distance gabriele provided index average search time occ logl space let note indexes usually easily adapted keyword matching scenario described following chapter interesting category data structures indexes used approximate matching especially sequence alignment employ heuristic approaches order speed searching reason guaranteed find optimal match means also approximate mathematical sense return true answer problem popularity especially widespread context bioinformatics 
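The minimizers used in the linear-space variant above (and revisited later for keyword selection) can be computed with a straightforward sliding window, as in the sketch below. The parameters, the plain lexicographic ordering and the leftmost tie-breaking are illustrative and need not reproduce the phrase construction of the figure exactly.

```python
def minimizers(text, w, p):
    """(w, p)-minimizers: in every window of w consecutive p-grams, keep the
    lexicographically smallest one (leftmost on ties). Returns (position, p-gram) pairs."""
    grams = [text[i:i + p] for i in range(len(text) - p + 1)]
    selected = set()
    for start in range(len(grams) - w + 1):
        window = grams[start:start + w]
        best = min(range(w), key=lambda j: (window[j], j))   # smallest p-gram, leftmost on ties
        selected.add((start + best, window[best]))
    return sorted(selected)

print(minimizers("appearance", w=3, p=3))
# [(0, 'app'), (3, 'ear'), (4, 'ara'), (6, 'anc')]
```

Because every position of the text is covered by some window, consecutive selected positions are never farther apart than the window length, which is exactly the property the phrase-based search relies on.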
massive sizes databases often force programmers use efficient filtering techniques notable examples include blast fasta tools blast blast stands basic local alignment search tool published altschul purpose comparing biological sequences see subsection information biological data name may refer indexes algorithm whole suite string searching tools bioinformatics based said algorithm blast relies heavily various heuristics reason highly domain specific fact exist various flavors blast different data sets instance one protein data blastp one dna blastn another notable modification combined dynamic programming order identify distant protein relationships basic algorithm proceeds follows certain regions removed pattern include repeated substrings regions low complexity measured statistically using dust dna create set containing overlaps available see figure extracted pattern scored possible precomputed ones highest scores retained creating candidate set word searched exact manner database using instance inverted index see subsection exact matches create seeds later used extending matches seeds extended left right long alignment score increasing alignment significance assessed using statistical tools size tex ext xti tin ing text exti xtin ting texti extin xting figure selecting overlapping shift text texting must always hold general blast faster alignment algorithms algorithm see subsection due heuristic approach however comes price reduced accuracy shpaer state substantial chance blast miss distant sequence similarity moreover implementations created certain cases match performance blast still blast currently common tool sequence alignment using massive biological data openly available via website bla means conveniently run without consuming local resources chapter keyword indexes keywords indexes operate individual words rather whole input string formally collection strings words total length given alphabet keyword index supporting matching specified distance query pattern returns words exact matching approximate dictionary matching introduced minsky papert following sections describe algorithms category divided exact section approximate ones section contribution field presented subsection describes index approximate matching mismatches especially mismatch exact goal support match query finite number keywords could use efficient set data structure hash table trie see subsections order store keywords boytsov reported depending data set either one two may faster order reduce space requirements could use minimal perfect hashing see subsection could also compress entries buckets bloom filter alternatively could provide approximate answers mathematical sense order occupy even less space relevant data structure bloom filter keyword indexes probabilistic data structure possible false positive matches false negatives adjustable error rate uses bit vector size bits initially set element hashed different hash functions form lookup performed queried element hashed functions checked whether case possible match reported consult figure broder mitzenmacher provided following formula expected false positive rate size filter bits number elements note example false positive probability slightly recently fan described structure based cuckoo hashing takes even less space supports deletions unlike figure bloom filter approximate membership queries holding elements set element set since reproduced wikimedia inverted index inverted index keyword index contains mapping words lists store positions occurrences text 
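A minimal sketch of the Bloom filter just described, with the k hash functions simulated by salting a single cryptographic hash; there are never false negatives, and the false-positive rate follows the estimate quoted above. The class and parameter names are illustrative.

```python
import hashlib

class BloomFilter:
    def __init__(self, m_bits, k_hashes):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray((m_bits + 7) // 8)       # bit vector of m bits, initially all zero

    def _positions(self, item):
        for salt in range(self.k):                     # k "independent" hashes via salting
            digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

bf = BloomFilter(m_bits=1 << 16, k_hashes=4)
for word in ("tea", "ted", "ten", "inn"):
    bf.add(word)
print("ted" in bf)     # True (an inserted element is always reported)
print("text" in bf)    # almost certainly False at this low load
```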
positions instance indexes string characters approach sufficient could identify individual documents databases see figure example single input string positions allow search whole phrase multiple words searching word separately checking whether positions describe consecutive words text looking list intersections shift could also used searching query may cross boundaries words searching substrings pattern comparing respective positions consult section information means goal inverted index support various kinds queries locate see section efficiently david eppstein available http file public domain keyword indexes word banana occurrence list figure inverted index stores mapping words positions text banana banana main advantage inverted index fast queries answered constant average time using example hash table inverted index rather generic idea means could also implemented data structures binary trees hand substantial space overhead order original string stored well reason one key challenges inverted indexes succinctly represent lists positions still allowing fast access multiple methods proposed often combined one popular one store gaps differences subsequent positions index figure list banana would equal instead values gaps usually smaller original positions reason stored using fewer amount bits another popular approach use coding byte contains flag set number bigger equal fit bits seven bits used data number fit bits stored original byte algorithm tries store remaining bits next byte proceeding whole number exhausted order reduce average length bits occurrence list one could also divide original text multiple blocks fixed size instead storing exact positions block indexes stored index retrieved word searched explicitly within block size data massive infeasible construct single index often case web search engines sometimes relevant data selected stored index thus forming pruned index approximate boytsov presented extensive survey keyword indexes approximate searching including practical evaluation divided algorithms two following categories keyword indexes direct methods like neighborhood generation see section certain candidates searched exactly filtering methods dictionary divided many disjoint overlapping clusters search query assigned one several clusters containing candidate strings thus explicit verification performed fraction original dictionary notable results theoretical point view include trie cole already mentioned previous chapter hamk ming distance dictionary matching uses logk space offers log log log occ query time also holds edit distance larger constants another theoretical work describing algorithm similar split index describe subsection given shi widmayer obtained preprocessing time space complexity expected time bounded log introduced notion home strings given set strings contain exact form value set search phase partition disjoint use candidate inspection order speed finding matches edit distance errors practical front bocek provided generalization fraenkel algorithm called fastss check two strings match errors first delete possible ordered subsets symbols conclude may edit distance intersection resulting lists strings explicit verification still required instance abbac neighborhood follows abbac bbac abac abac abbc abba abb aba abc aba abc aac bba bbc bac bac course resulting strings repeated may removed baxcy respective neighborhood contain string bac following verification show edit distance greater however lev impossible neighborhood least one string neighborhood 
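The two list-compaction techniques just described, gap encoding and variable-byte coding, can be combined as in the following sketch. The exact flag convention (here the top bit marks the last byte of a number) varies between implementations, so this is one possible variant rather than the specific layout of any particular inverted index.

// Posting-list compaction sketch: positions are turned into gaps between
// consecutive occurrences, and each gap is stored with variable-byte coding
// (7 payload bits per byte; the top bit is set on the final byte of a number).
#include <cstdint>
#include <iostream>
#include <vector>

std::vector<uint8_t> encodeGaps(const std::vector<uint32_t>& positions) {
    std::vector<uint8_t> out;
    uint32_t prev = 0;
    for (uint32_t pos : positions) {
        uint32_t gap = pos - prev;          // gaps are smaller than the raw positions
        prev = pos;
        while (gap >= 128) {                // emit the low 7 bits, continuation flag clear
            out.push_back(static_cast<uint8_t>(gap & 0x7F));
            gap >>= 7;
        }
        out.push_back(static_cast<uint8_t>(gap | 0x80));   // last byte of this gap: flag set
    }
    return out;
}

std::vector<uint32_t> decodeGaps(const std::vector<uint8_t>& bytes) {
    std::vector<uint32_t> positions;
    uint32_t value = 0, shift = 0, prev = 0;
    for (uint8_t b : bytes) {
        value |= static_cast<uint32_t>(b & 0x7F) << shift;
        if (b & 0x80) {                     // final byte: restore the absolute position
            prev += value;
            positions.push_back(prev);
            value = 0; shift = 0;
        } else {
            shift += 7;
        }
    }
    return positions;
}

int main() {
    std::vector<uint32_t> occ = {1, 12, 17, 250, 1000};   // an occurrence list
    for (uint32_t p : decodeGaps(encodeGaps(occ))) std::cout << p << ' ';
    std::cout << '\n';
    return 0;
}

Because most gaps in a dense occurrence list fit into a single byte, the encoded list is usually much shorter than storing each position in a full machine word, at the cost of sequential decoding.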
hence never miss match lookup requires kmk log time average word length dictionary index occupies space another practical filter presented karch improved fastss method reduced space requirements query time splitting long words similarly fastblockss variant original method storing neighborhood implicitly indexes pointers original dictionary entries claimed faster approaches aforementioned fastss keyword indexes recently chegrane belazzougui described another practical index reported better results compared karch structure based dictionary belazzougui edit distance see following subsection approximate mathematical sense data structure approximate matching based bloom filter see subsection also described problem important consider methods detecting single error since errors even roughly within edit distance transpositions belazzougui venturini presented compressed index whose space bounded terms order empirical entropy indexed dictionary based either perfect hashing occ query time compressed permuterm index min log log occ time logc constant improved space requirements former compressed variant dictionary presented belazzougui based neighborhood generation occupies log space answer queries time chung showed theoretical work external memory used focus operations limited number operations size machine word number words within block basic unit structure occupies blocks category filters mor fraenkel described method based problem yao yao described data structure binary strings fixed length log log query time log space requirements later improved brodal data structure query time occupies space improved structure query time log space cell probe model memory accesses counted another notable example recent theoretical work chan lewenstein introduced index optimal query time uses additional bits space beyond dictionary assuming alphabet permuterm index permuterm index keyword index supports queries one wildcard symbol idea store rotations given word appended terminating keyword indexes character instance word text index would consist following permuterm vocabulary text ext tex text comes searching query first rotated wildcard appears end subsequently prefix searched using index could example trie data structure supports prefix lookup main problem standard permuterm index space usage number strings inserted data structure number words multiplied average string length ferragina venturini proposed compressed permuterm index order overcome limitations original structure respect space explored relation permuterm index bwt see subsection applied concatenation strings input dictionary provided modification known order support functionality permuterm index split index one practical approximate indexes described thesis author grabowski experimental results structure found section indexes supporting approximate matching tend grow exponentially maximum number allowed errors also worthwhile goal design efficient indexes supporting small reason focus problem dictionary matching mismatches especially one mismatch ham collection words pattern hamming distance ham algorithm going present uncomplicated based dirichlet principle ubiquitous approximate string matching techniques partition word disjoint pieces average length hence name split index piece acts key hash table size piece word determined using following formula depending practical evaluation piece size rounded nearest integer last piece covers characters pieces means pieces might fact unequal length values lists words one pieces corresponding key way every 
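The deletion-neighbourhood filter attributed above to Bocek et al. can be illustrated for k = 1 as follows. Explicit verification of the surviving candidates with an edit-distance computation is still required and is only indicated by a comment here; the container choices are purely illustrative.

// FastSS-style filter for one error: index every string obtainable from a
// dictionary word by deleting at most one character; at query time generate
// the same neighbourhood for the pattern and intersect the two sets.
#include <iostream>
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// The word itself plus every single-character deletion.
std::vector<std::string> neighbourhood1(const std::string& w) {
    std::vector<std::string> out{w};
    for (size_t i = 0; i < w.size(); ++i)
        out.push_back(w.substr(0, i) + w.substr(i + 1));
    return out;
}

int main() {
    std::vector<std::string> dict = {"abbac", "table", "text"};
    // Build: map every neighbourhood string to the dictionary words producing it.
    std::unordered_map<std::string, std::vector<size_t>> index;
    for (size_t id = 0; id < dict.size(); ++id)
        for (const std::string& s : neighbourhood1(dict[id]))
            index[s].push_back(id);

    std::string query = "abbc";
    std::unordered_set<size_t> candidates;
    for (const std::string& s : neighbourhood1(query)) {
        auto it = index.find(s);
        if (it != index.end())
            candidates.insert(it->second.begin(), it->second.end());
    }
    for (size_t id : candidates)
        std::cout << "candidate: " << dict[id] << '\n';   // verify with an edit-distance check
    return 0;
}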
word occurs exactly lists seemingly bloats space usage still case small occupied space acceptable moreover instead storing full words respective lists store missing prefix suffix instance word table would keyword indexes relation tab one list tab would key would value tab case first populate list pieces without prefix pieces without suffix additionally store position list index latter part begins way traverse half list average search also support larger case ignore piece order list store bits piece indicate part word list key let note approach would also work however turned less efficient regards implementation language focus data compactness hash table store buckets contain word pieces keys pointers lists store missing pieces word tab pointers always located right next keys means unless unlucky specific pointer already present cpu cache traversal memory layouts substructures fully contiguous successive strings represented multiple characters prepended counter specifies length counter value indicates end list traversal length compared length piece pattern mentioned words partitioned pieces fixed length means average calculate hamming distance half pieces list since rest ignored based length hash function strings used two important considerations speed number collisions since high number collisions results longer buckets may turn negative effect query time subject explored detail along results chapter figure illustrates layout split index preprocessing stage proceeds follows duplicate words removed dictionary following steps refer word word split pieces piece create new list containing missing pieces later simply referred missing piece case always one contiguous piece add hash table append pointer bucket otherwise append missing pieces already existing list keyword indexes figure split index keyword indexing shows insertion word table index also stores words left tablet selected lists containing pieces two words shown indicate pointers respective lists first cell list indicates word position word count left missing prefixes begin hence deal two parts namely prefixes suffixes means list missing suffixes adapted wikimedia regards search pattern split pieces search piece prefix suffix list retrieved hash table continue traverse missing piece verification performed result returned ham pieces combined one word order form answer jorge stolfi available http keyword indexes complexity let consider average word length average time complexity preprocessing stage equal allowed number errors total input dictionary size length nation words word piece either add missing pieces new list append already existing one time let note assume adding new element bucket takes constant time average calculation hashes takes time total true irrespective list layout used two layouts see preceding paragraphs occupied space equal piece appears exactly lists exactly bucket average search complexity equal average length list search pieces pattern length list corresponding piece found traversed verifications performed verification takes min time dmax longest word dictionary time theory using old technique landau vishkin log preprocessing time average assume determining location specific list iterating bucket takes time average regards list average length higher higher probability two words two parts length match exactly since words sampled alphabet depends alphabet size still dependence rather indirect dictionaries store words given language rather dependent order entropy language compression order reduce storage 
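A compact sketch of the split-index lookup for one mismatch is given below: every dictionary word is split into two pieces, each piece serves as a hash-table key whose entry stores the missing piece, and at query time the pattern is split the same way and the complementary piece is verified with a Hamming-distance check. The production layout described above (contiguous buckets, length counters, prefix/suffix markers, q-gram coding) is replaced here by std::unordered_map and plain strings for readability only.

// Split-index sketch for dictionary matching with one mismatch.
// Each word w = prefix + suffix is indexed twice: once under its prefix with
// the missing suffix as payload, and once under its suffix with the missing prefix.
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

struct Entry { std::string missing; bool missingIsSuffix; };
using SplitIndex = std::unordered_map<std::string, std::vector<Entry>>;

static bool withinOneMismatch(const std::string& a, const std::string& b) {
    if (a.size() != b.size()) return false;
    int d = 0;
    for (size_t i = 0; i < a.size(); ++i)
        if (a[i] != b[i] && ++d > 1) return false;
    return true;
}

SplitIndex build(const std::vector<std::string>& dict) {
    SplitIndex idx;
    for (const std::string& w : dict) {
        size_t half = w.size() / 2;                        // piece size rounded down
        idx[w.substr(0, half)].push_back({w.substr(half), true});
        idx[w.substr(half)].push_back({w.substr(0, half), false});
    }
    return idx;
}

// Return dictionary words within Hamming distance 1 of the pattern.
std::vector<std::string> query(const SplitIndex& idx, const std::string& p) {
    std::vector<std::string> hits;
    size_t half = p.size() / 2;
    const std::string parts[2] = {p.substr(0, half), p.substr(half)};
    for (int exactPart = 0; exactPart < 2; ++exactPart) {  // pigeonhole: one part matches exactly
        auto it = idx.find(parts[exactPart]);
        if (it == idx.end()) continue;
        for (const Entry& e : it->second) {
            if (e.missingIsSuffix != (exactPart == 0)) continue;  // key must play the same role
            if (withinOneMismatch(e.missing, parts[1 - exactPart]))
                hits.push_back(exactPart == 0 ? parts[0] + e.missing
                                              : e.missing + parts[1]);
        }
    }
    return hits;
}

int main() {
    SplitIndex idx = build({"table", "tablet", "text"});
    for (const std::string& w : query(idx, "tible"))
        std::cout << w << '\n';                            // reports "table"
    return 0;
}

Only the piece acting as the exact key is compared for free; the complementary piece is the one traversed and verified, which is exactly the halving of work the list layout above is designed to preserve.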
requirements apply basic compression technique find frequent word collection replace occurrences lists unused symbols byte values values specified preprocessing stage instance reasonable english alphabet dna respectively different values also combined depending distribution input text may try possible combinations certain value select ones provide best compression case longer encoded shorter ones example word compression could encoded using following substitution keyword indexes list com sion note substitution list used possibly even recursive approach could applied although would certainly substantial impact query time see section experimental results discussion space usage could reduced use different character encoding dna assuming symbols would sufficient use bits per character basic english alphabet bits latter case letters simplified text augmented space character punctuation marks capital letter flag approach would also beneficial space compaction could positive impact cache usage compression naturally reduces space increasing search time sort middle ground achieved deciding additional information store index instance length encoded compressed piece decoding could eliminate pieces based size without performing decompression explicit verification parallelization algorithm could sped means parallelization since index access search procedure straightforward approach could simply distribute individual queries multiple threads variation would concurrently operate word pieces word split number pieces dependent parameter could even access parallel lists contain missing pieces prefixes suffixes although gain would probably limited since lists usually store words sufficient amount threads disposal approaches could combined still noted use multiple threads negative effect cache utilization inverted split index split index could extended order include functionality inverted index approximate matching mentioned subsection inverted index could practice data structure supports efficient word lookup let consider compact list layout split index presented figure piece located right next pieces instead storing counter specifies length piece could also store right next piece position text approach would increase average length list keyword indexes constant factor would break contiguity lists also keeping space complexity moreover position already present cpu cache list traversal keyword selection keyword indexes also used scenario explicit boundaries words case would like select keywords according set rules form dictionary input text index stores sampled input text may referred index useful answering keyword rather queries might required example due time requirements would like trade space speed examples input easily divided words include natural languages chinese possible clearly distinguish words boundaries depend context kinds data complete genome let consider input text divided issue lies amount space occupied tuples identifies positions order nqmax possible qmax value general compression techniques usually sufficient thus dedicated solution required especially case context bioinformatics data sets substantial applications could instance retrieving seeds algorithm described section one approaches proposed kim aims eliminate redundancy position information consecutive grouped subsequences identified position subsequence within documents position within subsequence forms index structure concept also extended original authors include functionality approximate matching minimizers idea minimizers 
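The q-gram substitution coding outlined above can be sketched as follows: the most frequent q-grams of the dictionary are collected at preprocessing time and mapped to byte values that do not occur in the data. Here q = 2 and the byte range 128-255 are illustrative choices that hold for plain ASCII input; a real encoder may mix several q values, as discussed above.

// q-gram substitution coding sketch: frequent q-grams are assigned unused
// byte values and each occurrence is replaced by a single byte.
#include <algorithm>
#include <iostream>
#include <map>
#include <string>
#include <vector>

std::vector<std::string> topQgrams(const std::vector<std::string>& words,
                                   size_t q, size_t limit) {
    std::map<std::string, size_t> freq;
    for (const std::string& w : words)
        for (size_t i = 0; i + q <= w.size(); ++i)
            ++freq[w.substr(i, q)];
    std::vector<std::pair<size_t, std::string>> ranked;
    for (const auto& kv : freq) ranked.push_back({kv.second, kv.first});
    std::sort(ranked.rbegin(), ranked.rend());             // most frequent first
    std::vector<std::string> out;
    for (size_t i = 0; i < ranked.size() && i < limit; ++i)
        out.push_back(ranked[i].second);
    return out;
}

std::string encode(const std::string& w, const std::vector<std::string>& grams) {
    std::string out;
    for (size_t i = 0; i < w.size(); ) {
        bool replaced = false;
        for (size_t g = 0; g < grams.size(); ++g) {
            if (w.compare(i, grams[g].size(), grams[g]) == 0) {
                out.push_back(static_cast<char>(128 + g)); // one reserved byte per q-gram
                i += grams[g].size();
                replaced = true;
                break;
            }
        }
        if (!replaced) out.push_back(w[i++]);
    }
    return out;
}

int main() {
    std::vector<std::string> dict = {"compression", "comparison", "compact"};
    std::vector<std::string> grams = topQgrams(dict, 2, 100);
    for (const std::string& w : dict)
        std::cout << w << " -> " << encode(w, grams).size() << " bytes\n";
    return 0;
}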
introduced roberts applications genome sequencing bruijn graphs counting consists storing selected rather input text goal choose given string set two strings holds pattern keyword indexes threshold also hold order find slide window length consecutive shifting character time window position select smallest one lexicographically ties may resolved instance favor leftmost smallest substrings figure demonstrates process figure selecting underlined choosing sliding window length text texting results belong following set let repeat important property minimizers makes useful practice two strings holds guaranteed share share one full window means certain applications still ensure exact matches overlooked storing minimizers rather string sketches introduce concept string sketches whose goal speed string comparisons cost additional space given string sketch constructed using function returns block data particular two strings would like determine certainty ham comparing sketches exists similarity sketches hash functions however hash comparison would work context exact matching sketch comparison decisive still perform explicit verification sketches allow reducing number verifications since sketches refer individual words relevant context keyword indexes assuming word stored along sketches could especially useful queries known advance relatively high since sketch calculation might sketches use individual bits order store information frequencies string various approaches exist main properties said include keyword indexes size instance individual letters sensible english alphabet pairs might better dna frequency store binary information bit indicates whether certain appears string total sketch call approach occurrence sketch store count using per total sketch call approach count sketch selection included sketches could instance occur commonly sample text instance let consider occurrence sketch built common letters english alphabet namely consult appendix see frequencies word instance sketch bit corresponds one letters aforementioned set would follows quickly compare two sketches taking binary xor operation counting number bits set result calculating hamming weight note determined constant time using lookup table size bytes sketch size bytes denote sketch difference let note determine number mismatches instance run ran might equal occurrence differences still one mismatch extreme two strings length string consists repeated occurrence one different letter might equal number mismatches general used provide lower bound true number errors sketches record information single characters following holds ham dhs side calculated quickly using lookup table since true number mismatches underestimated especially count sketches since calculate hamming weight instead comparing counts instance count bits count bits difference instead still even though true error higher sketches used order speed comparisons certain strings compared rejected constant time using fast bitwise operations array lookups regards space overhead incurred sketches equal since store one sketch per word together lookup tables used speed processing consult section order see experimental results chapter experimental results results obtained machine following specifications processor intel running ghz ram memory operating system ubuntu kernel version programs written programming language certain prototypes python language using features standard use standard library boost libraries version linux system libraries correctness analyzed using valgrind tool 
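Minimizer selection itself is straightforward to sketch: slide a window of w consecutive q-grams over the string and keep the lexicographically smallest q-gram of each window, breaking ties in favour of the leftmost one. The parameter values in the example below are arbitrary.

// (q, w)-minimizer selection: for every window of w consecutive q-grams,
// keep the lexicographically smallest q-gram (leftmost on ties).
#include <iostream>
#include <set>
#include <string>

std::set<std::string> minimizers(const std::string& text, size_t q, size_t w) {
    std::set<std::string> out;
    if (text.size() < q + w - 1) return out;               // no full window fits
    for (size_t start = 0; start + q + w - 1 <= text.size(); ++start) {
        std::string best = text.substr(start, q);          // leftmost q-gram wins ties
        for (size_t j = 1; j < w; ++j) {
            std::string cand = text.substr(start + j, q);
            if (cand < best) best = cand;
        }
        out.insert(best);
    }
    return out;
}

int main() {
    for (const std::string& m : minimizers("texting", 3, 4))
        std::cout << m << '\n';
    return 0;
}

The property used above follows directly from this construction: whenever two strings share a substring long enough to span a full window (q + w - 1 characters), they necessarily share at least one minimizer, so an index storing only minimizers cannot overlook such a match.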
error checking profiling errors memory leaks reported source code compiled version clang compiler turned produce slightly faster executable gcc checked optimization flag description structure consult subsection present experimental results superlinear index version regards hash function xxhash used available internet consult appendix load factor equal length pattern crucial impact search time since number steps equal number binary representation means search fastest form constant time experimental results certain maximum value slowest form see figure query time also generally decreases pattern length increases mostly due fact times given per character results average times calculated one million queries extracted input text query time per char pattern length figure query time per character pattern length english text size let point notable differences pattern lengths also compare approach structures consult figure used implementations sdsl library available internet sds implementations structures grabowski available internet ran regards space structure name suggests roughly two order magnitude bigger indexes index size methods ranged approximately input text size hand occupied amount space equal almost qmax split index section present results appeared preprint thesis author grabowski description split index consult subsection experimental results superlinear huffman csa compressed bit vector query time per char pattern length figure query time per character pattern length different methods english text size note logarithmic one crucial components split index hash function ideally would like minimize average length bucket let recall use chaining collision resolution however hash function also relatively fast calculated parts pattern total length investigated various hash functions turned differences query times negligible although average length bucket almost cases relative differences smaller see table fastest function xxhash available internet consult appendix reason used calculation results hash function xxhash sdbm superfast city farsh farm query time table evaluated hash functions search times per query english dictionary size list common english misspellings used queries max experimental results decreasing value load factor strictly provide speedup terms query time demonstrated figure explained fact even though relative reduction number collisions substantial absolute difference equal collisions per list moreover higher pointers lists could possibly closer might positive effect cache utilization best query time reported maximum value hence value used calculation results index size query time load factor load factor figure query time index size load factor english dictionary size list common english misspellings used queries value higher use chaining collision resolution table see linear increase index size exponential increase query time growing even though concentrate promising results reported case index might remain competitive also higher values query time index size table query time index size error value english language dictionary size list common english misspellings used queries substitution coding provided reduction index size cost increased query time generated separately dictionary list experimental results provided best compression minimized size encoded words english language dictionaries also considered using dna maximum since mixing various sizes negative impact query time dna queries generated randomly introducing noise words sampled dictionary length equal length 
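An occurrence sketch over eight selected letters fits into a single byte and can be compared with one XOR and one popcount, as in the sketch below. The letter selection and the helper names are illustrative; for equal-length words, half of the number of differing sketch bits (rounded up) is a valid lower bound on the Hamming distance, so a pair can be rejected without explicit verification whenever that bound already exceeds the allowed number of mismatches.

// Occurrence sketch: one bit per selected letter records whether the letter
// occurs in the word. XOR plus popcount of two sketches yields a lower bound
// on the Hamming distance of the underlying (equal-length) words.
#include <bitset>
#include <cstdint>
#include <iostream>
#include <string>

const std::string kLetters = "etaoinsh";                   // common English letters (example)

uint8_t occurrenceSketch(const std::string& w) {
    uint8_t s = 0;
    for (size_t i = 0; i < kLetters.size(); ++i)
        if (w.find(kLetters[i]) != std::string::npos) s |= uint8_t(1) << i;
    return s;
}

// Lower bound on the Hamming distance derived from the sketches alone:
// a single mismatch can change the occurrence status of at most two letters.
int sketchLowerBound(uint8_t a, uint8_t b) {
    return (static_cast<int>(std::bitset<8>(a ^ b).count()) + 1) / 2;
}

int main() {
    std::string p = "texting", w = "textile";
    uint8_t sp = occurrenceSketch(p), sw = occurrenceSketch(w);
    if (sketchLowerBound(sp, sw) > 1)
        std::cout << "rejected by sketch\n";                // explicit verification skipped
    else
        std::cout << "verify explicitly\n";                 // sketch comparison not decisive
    return 0;
}

In a real index the sketches would be precomputed and stored next to the words, and the popcount would come from a lookup table or a hardware instruction, so the filter costs only a few constant-time operations per candidate.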
particular word errors inserted probability english dictionaries opted list common misspellings results similar case randomly generated queries evaluation run times results averaged see relation english dictionaries figure dna figure case english using optimal compression point view minimizing index size combination mixed provided almost index size using substitution coding methods performed better dna sequences repetitive let note compression provided higher relative decrease index size respect original text size dictionary increased instance dictionary size compression ratio equal query time still index size query time around consult appendix information compression dictionary size mixed dictionary size figure query time index size dictionary size without coding mixed refer combination provided best compression three dictionaries equal grams respectively english language dictionaries list common english misspellings used index size query time experimental results dictionary size mixed dictionary size figure query time index size dictionary size without coding mixed refer combination provided best compression equal grams due computational constraints calculated first dictionary used four dictionaries dna dictionaries randomly generated queries used tested english language dictionaries promising results reported compared methods proposed authors others consider levenshtein distance edit distance whereas use hamming distance puts advantageous position still provided speedup significant believe restrictive hamming distance also important measure practical use see subsection information implementations authors available internet boy che regards results reported boytsov reduced alphabet neighborhood generation possible accurately calculate size index implementations boytsov reason used rough ratios based index sizes reported boytsov similar dictionary sizes let note compare algorithm chegrane belazzougui published better results compared karch turned claimed faster methods managed identify indexes matching dictionaries fixed alphabet dedicated hamming distance could directly compared split index times algorithm listed since roughly orders magnitude higher ones presented consult figure details also evaluated different word splitting schemes instance one could experimental results method method compression chegrane belazzougui boytsov query time index size figure query time index size different methods method compression encoded mixed used hamming distance authors used levenshtein distance english language dictionaries size used input list common misspellings used queries split word two parts different sizes instead however unequal splitting methods caused slower queries compared regular one regards hamming distance calculation turned naive implementation simply iterating comparing character fastest one compiler automatic optimization simply efficient implementations ones based directly sse instructions investigated string sketches string sketches introduced section allow faster string comparison since certain cases deduce two strings without performing explicit verification implementation sketch comparison requires performing one bitwise operation one array lookup constant operations total analyze comparison time two strings using various sketch types versus explicit verification sketch calculated per query reused comparison consecutive words examine situation single query compared dictionary words dictionary size speedup reported around words since case fewer words sketch construction slow 
relation comparisons experimental results sketch comparison decisive verification performed contributed elapsed time words generated english alphabet consult appendix order see letter frequencies sketch occupied bytes sketches effective figures contain results occurrence count sketches respectively consult appendix information regarding letter distribution alphabet comparison time sketches occurrence sketch common occurrence sketch mixed occurrence sketch rare word size figure comparison time word size mismatch using occurrence sketches words generated english alphabet sketch occupies bytes time refers average comparison time pair words common sketches use common letters rare sketches use least common letters mixed sketches use common least common letters note logarithmic experimental results comparison time sketches count sketch common count sketch mixed count sketch rare word size figure comparison time word size mismatch using count sketches words generated english alphabet sketch occupies bytes time refers average comparison time pair words common sketches use common letters rare sketches use least common letters mixed sketches use common least common letters note logarithmic chapter conclusions string searching algorithms ubiquitous computer science used common tasks performed home pcs searching inside text documents spell checking well industrial projects genome sequencing strings defined broadly usually contain natural language biological data dna proteins also represent various kinds data music images interesting aspect string matching diversity complexity solutions presented years theoretical practical despite simplicity problem formulation one common ones check pattern exists text investigated string searching methods preprocess input text construct data structure called index allows reduce time required searching often indispensable comes massive sizes modern data sets indexes divided ones operate whole input text answer arbitrary queries keyword indexes store dictionary individual corresponds words natural language dictionary dna reads key contributions include structure called modification compressed index trades space speed two variants described one using bits space log log log average query time one linear space log log average query time input text length pattern length experimentally show operating addition individual characters significant speedup achieved albeit cost high space requirements hence name bloated conclusions split index keyword index problem focus case performed better solutions hamming distance times order microsecond reported one mismatch natural language dictionary minor contribution includes string sketches aim speed approximate string comparison cost additional space per string future work presented results superlinear variant index order demonstrate potential capabilities multiple modifications implementations data structure introduced let recall store count table occurrence lists selected addition individual characters regular selection process store faster search index size grows well instance linear space version could augmented additional etc start position minimizer maximum gap size two minimizers would eliminate two phases search prefixes suffixes subsection individual characters used mechanism moreover comparison methods could augmented inverted index whose properties similar variants especially comes space requirements regards split index describe possible extensions subsections include using multiple threads introducing functionality inverted 
index moreover algorithm could possibly extended handle levenshtein distance well although would certainly substantial impact space usage another desired functionality could include dedicated support binary alphabet case individual characters could stored bits positive effect cache usage thanks data compaction possibly alignment cache line size appendix data sets following tables present information regarding data sets used work table describes data sets popular pizza chili corpus piz used indexes extracted table describes data sets used keyword indexes english dictionaries come linux packages webpage foster fos list common misspellings used queries obtained wikipedia typ dna dictionaries contain extracted genome drosophila melanogaster collected flybase database fly provided sizes refer size dictionary preprocessing keyword indexes duplicates well delimiters usually newline characters removed abbreviation refers natural language name source type english english english size table summary data sets used experimental evaluation indexes data sets name iamerican foster misspellings source linux package foster linux package wikipedia flybase flybase flybase flybase type english english english english dna dna dna dna size words table summary data sets used experimental evaluation keyword indexes appendix exact matching complexity theoretical analysis often mention exact string comparison determining whether must hold complexity operation equal characters compared two strings match hand average complexity depends alphabet instance probability characters match characters match well etc case uniform letter frequencies generally probability match characters position equal average number required comparisons equal derive following relation lim hence treating average time required exact comparison two random strings alphabet justified figure present relation average number comparisons value case english language alphabet context information form order entropy taken account simplified analysis let consider frequencies appendix probability two characters sampled random match equal etc proceeding manner probability match first pair characters equal first second pair etc regards empirical evaluation text average number comparisons random pair strings equal approximately exact matching complexity avg number comparisons alphabet size figure average number character comparisons comparing two random strings exact matching alphabet uniform letter frequency alphabet size appendix split index compression appendix presents additional information regarding compression split index consult subsection description data structure section experimental results figures see relation index size selection english alphabet clearly provided better compression dna index size count figure index size number used compression english dictionary used remaining split index compression index size count figure index size number used compression dna dictionary used remaining appendix string sketches section discussed use string sketches english alphabet could take advantage varying letter frequency present results alphabet uniform distribution instead selecting least common letters sketches contain information regarding occurrence count randomly selected letters see figure case sketches provide desired speedup sketches occurrence sketch count sketch comparison time word size figure comparison time word size mismatch using various string sketches generated alphabet uniform letter frequency sketch occupies bytes time refers average 
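The relation referred to above can be reconstructed as follows, as a sketch under the stated assumption of uniform letter frequencies. Two random characters over an alphabet of size \sigma match with probability p = 1/\sigma, and the comparison stops at the first mismatching position, so the expected number of inspected positions for two long random strings is

E = \sum_{i \ge 1} i \, (1 - p) \, p^{\,i-1} = \frac{1}{1 - p} = \frac{\sigma}{\sigma - 1},

which equals 26/25 \approx 1.04 for the plain English alphabet and tends to 1 as \sigma \to \infty. This is why treating the average cost of an exact comparison of two random strings as constant is justified; for skewed letter distributions the match probability per position is higher, and the expectation grows accordingly.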
comparison time pair words note logarithmic appendix english letter frequency frequencies presented table used generation random queries letter distribution corresponded english use letter frequency letter frequency table frequencies english alphabet letters appendix hash functions table contains internet addresses hash functions used obtain experimental results split index section hash function listed means implementation used name city farm farsh superfast xxhash address https https https https http http https table summary internet addresses hash functions bibliography alfred aho margaret corasick efficient string matching aid bibliographic search communications acm stephen altschul bruce erickson optimal sequence alignment using affine gap costs bulletin mathematical biology stephen altschul warren gish webb miller eugene myers david lipman basic local alignment search tool journal molecular biology mohamed ibrahim abouelhoda stefan kurtz enno ohlebusch enhanced suffix array applications genome analysis algorithms bioinformatics pages springer mohamed ibrahim abouelhoda stefan kurtz enno ohlebusch replacing suffix trees enhanced suffix arrays journal discrete algorithms alexandr andoni robert krauthgamer krzysztof onak polylogarithmic approximation edit distance asymmetric query complexity foundations computer science annual ieee symposium pages ieee amihood amir moshe lewenstein ely porat faster algorithms string matching mismatches journal algorithms ngoc anh alistair moffat inverted index compression using wordaligned binary codes information retrieval mohamed ibrahim abouelhoda enno ohlebusch stefan kurtz optimal exact string matching based suffix arrays string processing information retrieval pages springer bibliography gregory bard tolerant via distance metric proceedings fifth australasian symposium acsw pages australian computer society horst bunke urs applications approximate string matching shape recognition pattern recognition djamal belazzougui fabiano botelho martin dietzfelbinger hash displace compress algorithms esa annual european symposium copenhagen denmark september proceedings pages djamal belazzougui fabio cunial detection unusual words arxiv preprint djamal belazzougui faster edit distance dictionary combinatorial pattern matching pages springer gerth brodal leszek approximate dictionary queries combinatorial pattern matching pages springer thomas bocek ela hunt burkhard stiller fabio hecht fast similarity search large dictionaries technical report department informatics university zurich switzerland bib holy bible king james version walter burkhard robert keller approaches file searching communications acm bla blast basic local alignment search tool http online accessed burton bloom hash coding allowable errors communications acm robert boyer strother moore fast string searching algorithm communications acm eric brill robert moore improved error model noisy channel spelling correction proceedings annual meeting association computational linguistics pages association computational linguistics bibliography andrei broder michael mitzenmacher network applications bloom filters survey internet mathematics alexander bowe multiary wavelet trees practice honours thesis rmit university australia boy leonid boytsov software http software online accessed leonid boytsov indexing methods approximate dictionary searching comparative analysis journal experimental algorithmics sergey brin lawrence page anatomy hypertextual web search engine computer networks isdn systems gerth 
brodal srinivasan venkatesh improved bounds dictionary one error information processing letters djamal belazzougui rossano venturini compressed string dictionary edit distance one combinatorial pattern matching pages springer michael burrows david wheeler lossless data compression algorithm technical report systems research center ricardo gaston gonnet new approach text searching communications acm ricardo gonzalo navarro text searching theory practice formal languages applications pages springer manolis christodoulakis gerhard brey edit distance singlesymbol combinations splits prague stringology conference pages ibrahim chegrane djamal belazzougui simple compact robust approximate string dictionary journal discrete algorithms clifford allyx fontaine ely porat benjamin sach tatiana starikovskaya problem revisited arxiv preprint bibliography aleksander szymon grabowski practical index proximate dictionary matching mismatches arxiv preprint surajit chaudhuri kris ganjam venkatesh ganti rajeev motwani robust efficient fuzzy match online data cleaning proceedings acm sigmod international conference management data pages acm richard cole gottlieb moshe lewenstein dictionary matching indexing errors cares proceedings thirtysixth annual acm symposium theory computing pages acm richard cole ramesh hariharan approximate string matching simpler faster algorithm siam journal computing che simple compact robust approximate string dictionary https online accessed maxime crochemore costas iliopoulos christos makris wojciech rytter athanasios tsakalidis tsichlas approximate string matching gaps nordic journal computing richard cole tsvi kopelowitz moshe lewenstein suffix trays suffix trists structures faster text indexing automata languages programming pages springer william chang jordan lampe theoretical empirical comparisons approximate string matching algorithms combinatorial pattern matching pages springer timothy chan moshe lewenstein fast string dictionary lookup one error combinatorial pattern matching pages springer david clark compact pat trees phd thesis university waterloo canada rayan chikhi antoine limasset shaun jackman jared simpson paul medvedev representation bruijn graphs research computational molecular biology pages springer bibliography thomas cormen charles leiserson ronald rivest clifford stein introduction algorithms mit press edition william chang thomas marr approximate string matching local similarity combinatorial pattern matching pages springer alejandro juan carlos moure antonio espinosa porfidio faster pattern matching procedia computer science francisco claude gonzalo navarro alberto efficient compressed wavelet trees large alphabets arxiv preprint francisco claude gonzalo navarro hannu peltola leena salmela jorma tarhio string matching alphabet sampling journal discrete algorithms archie cobbs fast approximate matching using suffix trees combinatorial pattern matching pages springer richard cole tight bounds complexity string matching algorithm siam journal computing standard programming language technical report shane culpepper matthias petri falk scholer efficient document retrieval proceedings international acm sigir conference research development information retrieval pages acm chung yufei tao wei wang dictionary search one edit error string processing information retrieval pages springer lorinda cherry william vesterman writing tools style diction programs technical report rutgers university usa lawrence carter mark wegman universal classes hash functions 
proceedings ninth annual acm symposium theory computing pages acm bibliography fred damerau technique computer detection correction spelling errors communications acm sebastian deorowicz context exhumation transform information processing letters gautam das rudolf fleischer leszek dimitris gunopulos juha episode matching combinatorial pattern matching pages springer george davida yair frankel brian matt enabling secure applications biometric identification security privacy proceedings ieee symposium pages ieee mrinal deo sean keely parallel suffix array least common prefix gpu acm sigplan notices volume pages acm sebastian deorowicz marek kokot szymon grabowski agnieszka kmc fast counting bioinformatics martin dietzfelbinger anna karlin kurt mehlhorn friedhelm meyer auf der heide hans rohnert robert tarjan dynamic perfect hashing upper lower bounds siam journal computing sean eddy alignment score matrix come nature biotechnology elliman lancaster review segmentation contextual analysis techniques text recognition pattern recognition bin fan dave andersen michael kaminsky michael mitzenmacher cuckoo filter practically better bloom proceedings acm international conference emerging networking experiments technologies pages acm martin farach optimal suffix tree construction large alphabets annual symposium foundations computer science focs miami beach florida usa october pages bibliography kimmo fredriksson szymon grabowski fast convolutions applications approximate string matching combinatorial algorithms pages springer paolo ferragina rodrigo gonzalo navarro rossano venturini compressed text indexes theory practice journal experimental algorithmics michael fredman endre storing sparse table worst case access time journal acm simone faro thierry lecroq exact online string matching problem review recent results acm computing surveys darren flower properties bit measures chemical similarity journal chemical information computer sciences fly flybase homepage http online accessed paolo ferragina giovanni manzini opportunistic data structures applications foundations computer science proceedings annual symposium pages ieee paolo ferragina giovanni manzini indexing compressed text journal acm paolo ferragina giovanni manzini veli gonzalo navarro compressed representations sequences indexes acm transactions algorithms fos http txt online accessed edward fredkin trie memory communications acm paolo ferragina rossano venturini compressed permuterm index acm transactions algorithms zvi galil improving worst case running time string matching algorithm communications acm bibliography eugene garfield permuterm subject index autobiographical review journal american society information science simon gog timo beller alistair moffat matthias petri theory practice plug play succinct data structures international symposium experimental algorithms sea pages roberto grossi ankur gupta jeffrey scott vitter text indexes proceedings fourteenth annual symposium discrete algorithms pages society industrial applied mathematics patrick girard christian landrault serge pravossoudovitch daniel severac reduction power consumption test application test vector ordering electronics letters szymon grabowski veli gonzalo navarro first man simple string processing information retrieval pages springer alessandra gabriele filippo mignosi antonio restivo marinella sciortino indexing structures approximate string matching algorithms complexity pages springer rodrigo gonzalo navarro dynamic compressed sequences applications 
theoretical computer science szymon grabowski gonzalo navarro przywarski alejandro salinger veli simple international journal foundations computer science travis gagie gonzalo navarro simon puglisi jouni relative compressed suffix trees arxiv preprint simon gog compressed suffix trees design construction applications phd thesis university ulm germany goo search works story http online accessed bibliography simon gog matthias petri optimized succinct data structures massive data software practice experience szymon grabowski marcin raniszewski sampling suffix array minimizers arxiv preprint szymon grabowski new algorithms exact approximate text matching zeszyty naukowe politechnika available http szymon grabowski new tabulation sparse dynamic programming based techniques sequence similarity problems proceedings prague stringology conference prague czech republic september pages szymon grabowski marcin raniszewski sebastian deorowicz fmindex dummies arxiv preprint ryan gregory genome size developmental complexity genetica dan gusfield algorithms strings trees sequences computer science computational biology cambridge university press roberto grossi jeffrey scott vitter compressed suffix arrays suffix trees applications text indexing string matching siam journal computing hongwei huo longgang chen heng zhao jeffrey scott vitter yakov nekrich qiang proceedings seventeenth workshop algorithm engineering experiments alenex san diego usa january pages harold stanley heaps information retrieval computational theoretical aspects academic press daniel hirschberg linear space algorithm computing maximal common subsequences communications acm nigel horspool practical fast searching strings software practice experience david huffman method construction minimum redundancy codes proceedings ire bibliography guy jacobson static trees graphs foundations computer science annual symposium pages ieee juha suffix cactus cross suffix tree suffix array combinatorial pattern matching pages springer karlsson beyond standard library introduction boost stefan kurtz jomuna choudhuri enno ohlebusch chris schleiermacher jens stoye robert giegerich reputer manifold applications repeat analysis genomic scale nucleic acids research juha dominik kempa simon puglisi hybrid compression bitvectors data compression conference pages ieee daniel karch dennis luxen peter sanders improved fast similarity search dictionaries string processing information retrieval pages springer donald knuth james morris vaughan pratt fast pattern matching strings siam journal computing donald knuth art computer programming volume addisonwesley jesse kornblum identifying almost identical files using context triggered piecewise hashing digital investigation richard karp michael rabin efficient randomized patternmatching algorithms ibm journal research development sandeep kumar eugene spafford pattern matching model misuse intrusion detection technical report department computer science purdue university usa juha esko ukkonen sparse suffix trees computing combinatorics pages springer stefan kurtz reducing space requirement suffix trees software practice experience bibliography kim whang lee ngram inverted index structure approximate string matching computer systems science engineering kim whang lee lee space time efficient inverted index structure proceedings international conference large data bases pages vldb endowment vladimir levenshtein binary codes capable correcting deletions insertions reversals soviet physics doklady volume pages robert 
lewand cryptological mathematics maa lin liu yinhu siliang yimin ray pong danni lin lihua maggie law comparison sequencing systems biomed research international david lipman william pearson rapid sensitive protein similarity searches science gad landau jeanette schmidt dina sokol algorithm approximate tandem repeats journal computational biology ben langmead cole trapnell mihai pop steven salzberg ultrafast alignment short dna sequences human genome genome biology gad landau uzi vishkin fast parallel serial approximate string matching journal algorithms veli compact suffix array combinatorial pattern matching pages springer tyler moore benjamin edelman measuring perpetrators funders typosquatting financial cryptography data security pages springer moshe mor aviezri fraenkel hash code method detecting correcting spelling errors communications acm bibliography giovanni manzini paolo ferragina engineering lightweight suffix array construction algorithm algorithmica alistair moffat simon gog string search experimentation using massive data philosophical transactions royal society london mathematical physical engineering sciences aleksandr morgulis michael gertz alejandro richa agarwala fast symmetric dust implementation mask lowcomplexity dna sequences journal computational biology melichar jan holub polcar text searching algorithms department computer science engineering czech technical university prague czech republic roger mitton spelling checkers spelling correctors misspellings poor spellers information processing management gurmeet singh manku arvind jain anish das sarma detecting web crawling proceedings international conference world wide web pages acm udi manber gene myers suffix arrays new method string searches siam journal computing moritz johannes nowak text indexing errors combinatorial pattern matching pages springer veli gonzalo navarro succinct suffix arrays based encoding combinatorial pattern matching pages springer donald morrison patricia practical algorithm retrieve information coded alphanumeric journal acm marvin minsky seymour papert perceptrons mit press cambridge massachusetts michael maniscalco simon puglisi efficient versatile approach suffix sorting journal experimental algorithmics svetlin manavski giorgio valle cuda compatible gpu cards efficient hardware accelerators sequence alignment bmc bioinformatics suppl bibliography udi manber sun algorithm approximate membership checking application password security information processing letters udi manber sun glimpse tool search entire file systems usenix winter pages gene myers fast algorithm approximate string matching based dynamic programming journal acm gonzalo navarro guided tour approximate string matching acm computing surveys gonzalo navarro ricardo erkki sutinen jorma tarhio indexing methods approximate string matching ieee data engineering bulletin alexandros ntoulas junghoo cho pruning policies inverted index correctness guarantee proceedings annual international acm sigir conference research development information retrieval pages acm gonzalo navarro veli compressed indexes acm computing surveys nong practical suffix sorting constant alphabets acm transactions information systems gonzalo navarro mathieu raffinot fast simple character classes bounded gaps pattern matching applications protein searching journal computational biology nicholas nethercote julian seward valgrind program supervision framework electronic notes theoretical computer science saul needleman christian wunsch general method applicable 
search similarities amino acid sequence two proteins journal molecular biology christos ouzounis alfonso valencia early bioinformatics birth discipline personal view bioinformatics james peterson computer programs detecting correcting spelling errors communications acm bibliography james peterson note undetected typing errors communications acm alan parker hamblen james computer algorithms plagiarism detection ieee transactions education piz pizza chili corpus compressed indexes testbeds http online accessed victor pankratius ali jannesari walter tichy parallelizing case study multicore software engineering software ieee simon puglisi william smyth andrew turpin taxonomy suffix array construction algorithms acm computing surveys joseph pollock antonio zamora automatic spelling correction scientific scholarly text communications acm michael rabin fingerprinting random polynomials technical report department mathematics hebrew university jerusalem israel ran http online accessed michael roberts wayne hayes brian hunt stephen mount james yorke reducing storage requirements biological sequence comparison bioinformatics stuart russell peter norvig artificial intelligence modern approach prentice hall edition rajeev raman venkatesh raman srinivasa rao succinct indexable dictionaries applications encoding trees multisets proceedings thirteenth annual symposium discrete algorithms pages society industrial applied mathematics david salomon data compression complete reference springer science business media bibliography jared simpson richard durbin efficient construction assembly string graph using bioinformatics sds https online accessed sew julian seward http online accessed claude elwood shannon mathematical theory communication bell systems technical journal steven skiena algorithm design manual volume springer science business media eugene shpaer max robinson david yee james candlin robert mines tim hunkapiller sensitivity selectivity protein similarity searches comparison hardware blast fasta genomics erkki sutinen jorma tarhio using locations approximate string matching algorithms esa third annual european symposium corfu greece september proceedings pages temple smith michael waterman identification common molecular subsequences journal molecular biology fei shi peter widmayer approximate multiple string searching clustering genome informatics nathan tuck timothy sherwood brad calder george varghese deterministic string matching algorithms intrusion detection infocom annual joint conference ieee computer communications societies volume pages ieee dekel tsur fast index approximate string matching journal discrete algorithms alan turing computing machinery intelligence mind bibliography typ lists common misspellings http wikipedia online accessed esko ukkonen algorithms approximate string matching information control esko ukkonen finding approximate patterns strings journal algorithms esko ukkonen construction suffix trees algorithmica uni uniprot http online accessed sebastiano vigna broadword implementation queries experimental algorithms pages springer peter weiner linear pattern matching algorithms switching automata theory swat ieee conference record annual symposium pages ieee dan willard range queries possible space information processing letters lusheng wang tao jiang complexity multiple sequence alignment journal computational biology sun udi manber gene myers subquadratic algorithm approximate limited expression matching algorithmica andrew yao frances yao dictionary small errors 
combinatorial pattern matching pages list symbols block either unit piece bit vector character string length count table cache line size distance metric time required calculating two strings alphabet dictionary keywords keyword indexes word string dictionary enc encoded compressed word first column bwt matrix hash function order entropy string hash table hamming weight number bit vector ham hamming distance bie number rounded nearest integer index string matching number errors approximate matching last column bwt matrix load factor list symbols lev levenshtein distance pattern size input size occ number occurrences pattern set minimizers string pattern piece word case word partitioning probability event collection string string sketch substring set substrings indexes suffix array alphabet set strings alphabet alphabet size input string text rrr table bwt input text applying bwt size machine word typically bits bit vector list abbreviations bloom filter load factor algorithm algorithm bmh algorithm mphf minimal perfect hash function bst binary search tree natural language bwt transform algorithm csa compressed suffix array ocr optical character recognition cosa compact suffix array pizza chili corpus dfs search pies partitioning exact searching dynamic programming algorithm esa enhanced suffix array suffix array fsm finite state machine suffix cactus tlb translation lookaside buffer kmp algorithm suffix tree lcp longest common prefix algorithm lcs longest common subsequence wavelet tree list figures binary search tree bst storing strings english alphabet trie one basic structures used string searching hash table strings formula shannon entropy calculating alignment levenshtein distance using wunsch algorithm suffix tree stores suffixes given text suffix array stores indexes sorted suffixes given text compressed suffix array csa stores indexes pointing next suffix text calculating transform bwt relation bwt count table part formulae updating range search procedure fmindex example rrr blocks example rrr table wavelet tree extraction structure superlinear space constructing phrases use minimizers selecting overlapping given text bloom filter approximate membership queries inverted index stores mapping words split index keyword indexing selecting minimizers given text query time per character pattern length english text size query time per character pattern length different methods english text size query time index size load factor split index query time index size dictionary size without coding english dictionaries split index query time index size dictionary size without coding dna dictionaries split index query time index size different methods split index positions list figures comparison time word size mismatch using occurrence sketches words generated english alphabet comparison time word size mismatch using count sketches words generated english alphabet average number character comparisons comparing two random strings alphabet uniform letter frequency alphabet size index size number used compression english dictionary split index index size number used compression dna dictionary split index comparison time word size mismatch using various string sketches generated alphabet uniform letter frequency list tables comparison complexities basic data structures used exact string searching algorithm classification based whether data preprocessed evaluated hash functions search times per query split index query time index size error value split index summary data indexes summary data keyword 
indexes sets used sets used experimental experimental evaluation evaluation frequencies english alphabet letters summary internet addresses hash functions | 8 |
nov artin approximation property general neron desingularization dorin popescu abstract exposition general neron desingularization applications end recent constructive form desingularization dimension one key words artin approximation neron desingularization conjecture quillen question smooth morphisms regular morphisms smoothing ring morphisms mathematics subject classification primary secondary introduction let field khxi ring algebraic power series algebraic closure polynomial ring formal power series ring let solution completion theorem artin exists solution mod general say local ring artin approximation property every system polynomials solution completion exists solution mod fact artin approximation property every finite system polynomial equations solution solution completion mention artin proved already ring convergent power series coefficients artin approximation property later called ring morphism noetherian rings regular fibers prime ideals spec ring regular ring localizations regular local rings geometrically regular fibers prime ideals spec finite field extensions fraction field ring regular flat morphism noetherian rings regular fibers geometrically regular regular finite type called smooth localization smooth algebra called essentially smooth gratefully acknowledge support project granted romanian national authority scientific research cncs uefiscdi henselian noetherian local ring excellent completion map regular example henselian discrete valuation ring excellent completion map induces separable fraction field extension theorem artin let excellent henselian discrete valuation ring hxi ring algebraic power series algebraic closure polynomial ring formal power series ring hxi artin approximation property proof used called desingularization says unramified extension valuation rings inducing separable field extensions fraction residue fields filtered inductive union essentially finite type subextensions regular local rings even essentially smooth desingularization extended following theorem theorem general neron desingularization popescu teissier swan spivakovski let regular morphism noetherian rings finite type factors smooth composite smooth given theorem called general neron desingularization note uniquely associated better speak general neron desingularization theorem gives positive answer conjecture artin theorem excellent henselian local ring artin approximation property paper survey artin approximation property general neron desingularization applications relies mainly lectures given within special semester artin approximation chaire jean morlet cirm luminy spring see http artin approximation properties first show one recovers theorem theorem indeed let finite system polynomial equations solution set let morphism given theorem factors smooth composite thus changing may reduce problem case smooth since local may assume jacobian matrix thus invertible implicit function theorem exists modulo following consequence theorem noticed hinted radu origin interest read theorem write later corollary let regular morphism noetherian rings differential module flat proof note theorem follows filtered inductive limit smooth filtered inductive limit last modules free modules definition noetherian local ring strong artin approximation property every finite system polynomial equations exists map following property satisfies modulo exists solution modulo greenberg proved excellent henselian discrete valuation rings strong artin approximation property linear case theorem artin algebraic power 
series ring field strong artin approximation property note general linear showed following theorem conjectured artin theorem let excellent henselian discrete valuation ring ahxi ring algebraic power series ahxi strong artin approximation property theorem see also noetherian complete local rings strong artin approximation property particular strong artin approximation property artin approximation property thus theorem follows theorem theorem gives excellent henselian local rings strong artin approximation property easy direct proof fact given using theorem ultrapower methods converse implication theorem clear henselian artin approximation property hand reduced artin approximation property reduced indeed nonzero satisfies choosing get modulo follows contradicts hypothesis easy see local ring finite module artin approximation property follows artin approximation property called reduced formal fibers particular must called universally japanese ring using also strong artin approximation property possible prove given system polynomial equations another one sentence exists holds holds provided artin approximation property way proved artin approximation property normal domain normal domain actually starting point quoted paper later cipu used fact show formal fibers called geometrically normal domains artin approximation property finally rotthaus proved excellent artin approximation property next let excellent henselian local ring completion mcm resp mcm set isomorphism classes maximal cohen macaulay modules resp assume isolated singularity maximal module free punctured spectrum since also isolated singularity see map mcm given surjective theorem elkik theorem theorem bijective proof let two finite may suppose ukj vrj ukj vrj canonical basis let map defined invertible xij respect induces bijection maps onto exist ykr zrk ykr zrk note pthat pequivalent uki xij ykr vrj zrk uki xij vrj therefore exist det uki vrj uki vrj artin approximation property exists solution let say xij ykr zrk xij ykr zrk modulo follows det xij det modulo corollary hypothesis theorem mcm indecomposable indecomposable proof assume mcm surjectivity get mcm injectivity gives remark henselian corollary false example let indecomposable mcm decomposable indeed remark let called induces also inclusion see remark known mcm finite simple singularity complex unimodal singularity certainly case mcm infinite maybe exists special property characterizes unimodal singularities purpose would necessary describe somehow mcm least special cases small attempts done andreas steenpass cases need artin approximation property enough apply artin theorem sometimes might need special kind artin approximation called artin approximation nested subring condition namely following result also considered possible artin theorem theorem let field khxi khx integers suppose solution xsi exists solution xsi mod corollary weierstrass preparation theorem holds ring algebraic power series field proof let khxi algebraic power series weierstrass preparation theorem associated dip visibility monic polynomial xpm thus system xpm solution theorem exists solution khxi congruent modulo previous one thus invertible xpm unicity formal weierstrass preparation theorem follows see theorem useful get algebraic versal deformations see let khzi kht deformation kht flat denotes henselization local ring condition says form kht modulo says tora local flatness criterion since ideal separated local noetherian let part free resolution map given says tensorizing sequence get exact 
sequence flat therefore equivalent exists kht zid modulo modulo would like construct versal deformation see pages deformation exists morphism structural map given replace algebraic power series formal power series problem solved schlessinger infinitesimal case followed theorems elkik artin set assume already versal frame complete local rings get versal property frame algebraic power series let since versal frame complete local rings exists structure given assume given modulo hand may suppose induces isomorphism given modulo ideals coincide thus exists invertible theorem may find khuin khu zis cij khu satisfying cij modulo note det cij det modulo cij invertible follows given wanted one structure given next give idea proof theorem particular essential case proposition let field khxi khx integers suppose solution exists solution mod proof note excellent henselian artin approximation property thus system polynomials solution modulo enough apply following lemma lemma let excellent henselian local ring completion henselizations respectively system polynomials positive integers suppose solution exists solution mod proof union etale neighborhoods take etale neighborhood monic polynomial let say high enough note changing necessary may suppose mod actually take fraction easier expression skip denominator substitute yijk divide monic polynomial zjk yij yijk zjk new variables get fpjk yijk zjk mod solution solution fpjk artin approximation property may choose solution yijk zjk fpjk coincides modulo former one yijk together form solution etale neighborhood zjk contained clearly wanted solution applications conjecture let polynomial algebra regular local ring extension serre problem proved quillen suslin following conjecture every finitely generated projective module free theorem lindel conjecture holds essentially finite type field swan unpublished notes lindel paper see proposition contain two interesting remarks lindel proof works also essentially finite type dvr local parameter conjecture holds regular local ring containing field providing following question positive answer question swan regular local ring filtered inductive limit regular local rings essentially finite type indeed suppose example contains field filtered inductive limit regular local rings essentially finite type prime field finitely generated projective extension finitely generated projective theorem get free free theorem swan question holds regular local rings one following cases contains field characteristic excellent henselian proof suppose contains field may assume prime field perfect field inclusion regular theorem filtered inductive limit smooth morphisms thus regular ring finite type therefore filtered inductive limit regular local rings essentially finite type similarly may treat first assume complete cohen structure theorem may also assume factor complete local ring type prime integer see filtered inductive limit regular local rings essentially finite type since regular local rings see part regular system parameters exists system elements certain mapped limit map follows part regular system parameters regular local rings enough see filtered inductive limit next assume excellent henselian let completion using enough show given finite type inclusion factors regular local ring essentially finite type exists composite map filtered inductive limit regular local rings composite map factors regular local ring essentially finite type may choose finite type spec map factors excellent regular locus reg open exists regular ring 
changing may assume regular let let polynomials since may write modulo polynomials note exists factors artin approximation property theorem exists let map given clearly factors may take precisely following diagram commutative except right square roo corollary conjecture holds regular local ring one cases theorem remark theorem complete answer question says positive answer expected general since exists result similar lindel saying conjecture holds regular local rings essentially finite type decided wait research waited already years another problem replace conjecture polynomial algebra tool given following theorem theorem vorst let ring polynomial algebra monomial ideal every finitely generated projective extended finitely generated projective every finitely generated projective extended finitely generated projective corollary let regular local ring one cases theorem monomial ideal finitely generated projective free proof apply theorem using corollary conjecture could also hold regular following corollary shows corollary let regular local ring one cases theorem monomial ideal every finitely generated projective free result holds factor monomial ideal remark monomial conjecture may fail replacing indeed exist finitely generated projective rank one free see let regular local ring question quillen free finitely generated projective module theorem quillen question positive answer essentially finite type field theorem quillen question positive answer contains field goes similarly corollary using theorem instead theorem remark paper accepted publication many journals since referees said relies theorem theorem still recognized mathematical community since paper quoted unpublished preprint published later romanian bulletin noticed quoted many people see instance general neron desingularization using artin methods ploski gave following theorem first form possible extension neron desingularization dim theorem let convergent power series solution map given factors type variables composite map using theorem one get extension theorem theorem let excellent henselian local ring completion finite type factors type variables henselization suppose system polynomials ideal generated denote jacobian matrix elkik let radical sum taken systems polynomials ideal spec essentially smooth jacobian criterion smoothness thus measures non smooth locus linear case may easily get cases theorem dim lemma let ring weak regular sequence divisor divisor let flat set radical factors polynomial one variable proof note solutions multiples flatness solution linear combinations solutions multiple let map given factors composite map first map given second one proposition lemma let aij system linear homogeneous polynomials complete system solutions let solution let flat factors polynomial variables proof let map given since flat see linear combinations exists therefore factors first map given second one another form theorem following theorem positive answer conjecture artin theorem let regular morphism noetherian rings finite type spec open smooth locus exist smooth two smooth spec spec induced exists also form theorem recalling strong artin approximation property theorem let noetherian local ring completion map regular finite type artin function associated system polynomials defining exists function every positive integer every morphism exists smooth two morphisms composite map sometimes may find information let discrete valuation ring local parameter completion finite type system polynomials consider jacobian matrix let suppose 
exists simplicity write instead theorem theorem exists smooth every modulo modulo factors corollary theorem assumptions notation corollary exists canonical bijection homa modulo let field finite type let say arc spec spec given assume happens example reduced perfect set let system polynomials jacobian matrix let assume exists note induces bijection homk homa adjunction corollary corollary set homk modulo bijection affine space next give possible extension greenberg result strong artin approximation property let local ring example reduced ring dimension one completion finite type suppose exists jacobian matrix may construct general neron desingularization idea theorem could used get following theorem theorem popescu exists modulo modulo moreover also excellent henselian exists modulo remark theorem could extended noetherian local rings dimension one see case statement depends also reduced primary decomposition using end section algorithmic attempt explain proof theorem frame noetherian local domains dimension one let flat morphism noetherian local domains dimension suppose maximal ideal generates maximal ideal regular morphism moreover suppose exist canonical inclusions essentially finite type ideal computed singular following definition easier describe ideal defined case considered algorithmic part let say variables completion defined polynomials problem easy let field obtained adjoining coefficients subring containing essentially smooth may take standard smooth localization consequently suppose usually may suppose indeed induces amorphism may replace applying trick several times reduce case however fraction field essentially smooth separability worst case trick change several steps choose system polynomials moreover may choose set let map given follows modulo replace jacobian matrix new given thus reduce case get computer polynomial defined algorithm complicated able tell computer get may choose element find minimal possible dim set follows certainly find precisely later enough know kind truncation modulo thus may suppose exist system polynomials jacobian matrix modulo may assume det set clearly regular morphism artinian local rings easy find general neron desingularization frame thus exists smooth factors moreover may suppose polynomials note smooth factors usually factor though factors let composite map given thus modulo modulo modulo thus certain modulo let obtained adding border block let adjoint matrix nmidn idn dsidn idn set new variables since modulo modulo higher order terms taylor formula see maxi deg modulo idr set may take localization remark algorithmic proof frame noetherian local rings dimension one given references cinq exposes sur desingularisation handwritten manuscript ecole polytechnique federale lausanne artin solutions analytic equations invent artin algebraic approximation structures complete local rings publ math ihes artin constructions techniques algebraic spaces actes congres intern artin versal deformations algebraic stacks invent artin algebraic structure power series rings contemp math ams artin denef smoothing ring homomorphism along section arithmetic geometry vol boston basarab nica popescu approximation properties existential completeness ring morphisms manuscripta math bhatwadeckar rao question quillen trans amer math cipu popescu extensions neron approximation rev roum math pures cipu popescu desingularization theorem neron type ann univ ferrara decker greuel pfister singular computer algebra system polynomial elkik solutions equations coefficients dans 
anneaux henselien ann sci ecole normale greenberg rational points henselian discrete valuation rings publ math ihes grothendieck dieudonne elements geometrie algebrique part publ math ihes kashiwara vilonen microdifferential systems conjecture ann kurke mostowski pfister popescu roczen die approximationseigenschaft lokaler ringe springer lect notes york lam serre conjecture springer lect notes berlin lindel conjecture concerning projective modules polynomial rings invent modeles minimaux des varietes abeliennes sur les corps locaux globaux publ math ihes pfister popescu die strenge approximationseigenschaft lokaler ringe inventiones math pfister popescu constructive general neron desingularization one dimensional local rings preparation ploski note theorem artin bull acad polon des xxii popescu popescu method compute general neron desingularization frame one dimensional local domains arxiv popescu strong approximation theorem discrete valuation rings rev roum math pures popescu algebraically pure morphisms popescu general neron desingularization nagoya math popescu general neron desingularization approximation nagoya math popescu polynomial rings projective modules nagoya math popescu letter editor general neron desingularization approximation nagoya math popescu artin approximation handbook algebra vol hazewinkel elsevier popescu variations desingularization sitzungsberichte der berliner mathematischen gesselschaft berlin popescu question quillen bull math soc sci math roumanie popescu around general neron desingularization arxiv popescu roczen indecomposable modules irreducible maps compositio math quillen projective modules polynomial rings invent rond sur fonction artin ann sci ecole norm rotthaus rings property approximation math spivakovski new proof popescu theorem smoothing ring homomorphisms amer math steenpass algorithms singular parallelization syzygies singularities phd thesis kaiserslautern swan desingularization algebra geometry kang international press cambridge teissier resultats recents sur approximation des morphismes algebre commutative apres artin popescu spivakovski sem bourbaki vorst serre problem discrete hodge algebras math dorin popescu simion stoilow institute mathematics romanian academy research unit university bucharest bucharest romania address | 0 |
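A compact restatement of the two central notions of the paper above, in standard notation (the symbols f, Y, r, c here are generic placeholders rather than a particular system taken from the text):

```latex
% Artin approximation property of a Noetherian local ring (A, m):
% every formal solution of a finite polynomial system is approximated,
% to any prescribed order c, by a solution in A itself.
\[
  f \in A[Y]^{r},\ \ Y=(Y_1,\dots,Y_N),\qquad
  \hat y \in \widehat{A}^{\,N},\ f(\hat y)=0
  \;\Longrightarrow\;
  \forall\, c \in \mathbb{N}\ \ \exists\, y \in A^{N}:\;
  f(y)=0,\ \ y \equiv \hat y \ (\mathrm{mod}\ \mathfrak m^{c}).
\]
% General Neron Desingularization: for a regular morphism A -> A' of
% Noetherian rings, any finite type A-algebra B with a map B -> A'
% factors as B -> C -> A' through a smooth A-algebra C.
```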
rank three geometry positive curvature jul fuquan fang karsten grove gudlaugur thorbergsson abstract axiomatic characterization buildings type due tits used prove cohomogeneity two polar action type positively curved simply connected manifold equivariantly diffeomorphic polar action rank one symmetric space includes two actions cayley plane whose associated type geometry covered building rank size coxeter matrix coincides number generators associated coxeter system basic objects tits local approach buildings chamber systems type see also indeed spherical residue subchamber system rank covered building recall polar action riemannian manifold isometric action socalled section immersed submanifold meets orbits orthogonally since action identity component polar well assume throughout without stating connected key observation fgt study polar actions positively curved manifolds essence study certain class connected chamber systems moreover universal tits cover building structure compact spherical building sense burns spatzier bsp utilized fgt show theorem polar action cohomogeneity least two simply connected closed positively curved manifold equivariantly diffeomorphic polar action rank one symmetric space associated chamber system type note action fixed points rank dim one cohomogeneity action theorem cayley plane emerges cohomogeneity two fixed points moreover indeed chamber systems type whose universal cover building see fgt case polar action type orbit space geodesic angles aim take care exceptional case prove theorem polar action simply connected positively curved manifold type equivariantly diffeomorphic polar action rank one symmetric space first author supported part nsfc grant grateful university notre dame hospitality second author supported part nsf grant research chair hausdorff center university bonn humboldt research award third author grateful university notre dame capital normal university beijing hospitality fuquan fang karsten grove gudlaugur thorbergsson includes two actions cayley plane universal covers associated chamber systems buildings combining results course establishes corollary polar action cohomogeneity least two simply connected closed positively curved manifold equivariantly diffeomorphic polar action rank one symmetric space stark contrast case cohomogeneity one dimensions seven thirteen infinitely many manifolds even homotopy classification work gwz also lead discovery construction new example positively curved manifold see gvz necessity indicated proof theorem entirely different proof theorem general geometric realization chamber systems utilized proof theorem simplicial however fgt proved fact theorem geometric realization chamber system type associated simply connected polar simplicial geometric realization chamber system type simplicial called tits geometry type allows use axiomatic characterization geometries buildings see proposition rather considering universal cover directly construct two cases suitable cover possibly prove satisfies building axiom tits two cases methods fails recognized equivalent two type polar actions cayley plane pth note since chamber systems homogeneous type tits geometries independent alternate proof theorem follows preliminaries purpose section threefold explaining overall approaches strategies needed proof theorem recall basic concepts establish notation throughout denotes compact connected lie group acting closed connected positively curved manifold polar fashion type fix chamber section action isometric orbit spaces reflection 
group acts simply transitively chambers since action type convex positively curved geodesic sides faces opposite vertices angles respectively reconstruction theorem recall polar manifold completely determined polar data case data consist isotropy groups together inclusions along chamber also lemma denote principal isotropy group isotropy groups vertices opposite faces respectively remains removing data referred local data action rank three geometry positive curvature two exceptions turns partial data needed show action indeed equivalent polar action rank one symmetric space since data two exceptional cases coincide exceptional actions cayley plane complete proof theorem addition worth noting since groups derived data maximal connected subgroups identity component isometry group cayley plane actions uniquely determined turn polar proof theorem two exceptional cases based showing universal cover chamber system associated polar action spherical tits building fgt homogeneous chamber system union chambers three adjacency relations one face specifically adjacent respective faces chamber system thin topology induced path metric simplicial complex theorem hence geometry indicated fundamental theorem tits used fgt show building yields nothing rank three chamber systems well rank three geometries instead show cover construct building hence simply connected verifying axiomatic incidence characterization see section buildings due also tits construction chamber system covers utilize equivalent context principal bundle construction theorem coxter polar actions manifolds specifically case given data data consists graphs compatible homomorphisms particular local data isomorphic local data clearly acts freely group automorphisms chamber system covering case one case basic tools obstructions aim section establish number properties restrictions data used throughout unless otherwise stated compact connected lie group closed simply connected positively curved manifold without curvature assumptions possibly well known lemma orbit equivalence let simply connected polar manifold slice representation isotropy group orbit equivalent identity component proof recall slice representation isotropy group restricted orthogonal complement fixed point set inside normal space orbit polar representation clearly finite group acts isometrically orbit space isometric chamber polar action sphere since convex boundary soul point unique point maximal distance fuquan fang karsten grove gudlaugur thorbergsson boundary fixed soul point however corresponds principal orbit hence exceptional orbit unless acts trivially however theorem exceptional orbits polar action simply connected manifold subsequently talking casually slice representation refer slice representation identity component unless otherwise stated using positive curvature following basic fact derived fgt theorem lemma primitivity group generated identity components face isotropy groups fixed chamber naturally slice representations play fundamental role denote respective kernels representations quotients since particular slice representation type follows multiplicity triple polar manifold dimensions unit spheres normal slices along edges kernels usually large groups lemma slice kernel let simply connected polar type acts effectively kernel respectively acts effectively slices respectively proof note fixes sections since acts trivially slice must prove consider since arguments remaining cases similar note since assumed act effectively contained principal isotropy group 
suffices prove normal primitivity see quotient homomorphism identity component similarly thus suffices show normal case assuming effective vertex isotropy group connected alter proof simplifies notation accordingly proceed assume connected show normal subgroup note normal subgroup acting trivially slices assumption quotient map surjective restricted identity component finite central cover isomorphic product locally isomorphic locally isomorphic identity component covering particular contains connected closed subgroup commutes cover map moreover every element subgroup conjugation gives rise elements hand every element automorphism group aut since normal hence defines homomorphism aut since trivial image aut forgetful homomorphism aut aut group finite hence trivial connected implies elements commute elements rank three geometry positive curvature normal follows normal since hkt subgroup mentioned arguments show normal case connected arguments also show normal remark turns cases connected fact automatic whenever since acts transitively projective plane local isomorphism identity component one groups corresponding respectively slice representation standard polar representation type see also table view transversality lemma connected whenever case connectedness lemma implies also case connected see proposition following simple topological consequence transversality combined fact canonical deformation retraction orbit space triangle minus side opposite vertex lifts alternatively work wie also used frequently lemma transversality given multiplicity triple inclusion maps min connected recall continuos map said connected induced map ith homotopy groups isomorphism surjection another connectivity theorem theorem using positive curvature synge powerful lemma wilking let positively curved totally geodesic closed codimension submanifold inclusion map connected addition fixed isometric action compact lie group principal orbit dimension inclusion map connected conclude section two severe restrictions stemming positive curvature first follow well known synge type fact isometric action orbits dim odd dimensions even dimensions positive curvature particular since maximal rank among isotropy groups euler characteristic page conclude lemma rank lemma dimension even otherwise rank adapting wilking isotropy representation lemma positively curved manifolds polar manifolds type obtain lemma sphere transitive subrepresentations let simple normal subgroup irreducible isotropy subrepresentation isomorphic standard defining representation particular acts transitively sphere fuquan fang karsten grove gudlaugur thorbergsson proof let irreducible isotropy subrepresentation isomorphic summand slice representation isomorphic summand isotropy representation vertex isotropy group hand almost effective factor well understood tables standard defining representation desired result follows building axiom recall tits provided axiomatic characterization buildings irreducible type geometric realization thin topology associated chamber system simplicial complex characterization given terms incidence geometry associated purpose section describe characterization translate context definition vertices incident denoted contained closed chamber clearly incidence relation equivalence relation preserved action case describe needed characterization use following standard terminology shadow vertex set vertices type denoted shi union vertices type incident following tits call vertices type points lines planes respectively denote set points 
lines planes notice acts transitively terminology axiomatic characterization proposition proof case alluded states theorem axiom connected tits geometry type building following axiom holds two lines incident two different points coincide equivalently shq shq cardinality least two shr shr cardinality one case incident clearly equivalent proceed interpret terms isotropy groups data used either directly suitably constructed cover described end section notational simplicity describe general case see remark rank three geometry positive curvature proposition building type following holds pair different points incident grq grq denotes isotropy group unique edge theorem proof note every line orbit incident axiom implies orbit contains one line hence since building grq desired result follows see condition together assumption suitable reduction action implies building type describe reduction let line let normal sphere summand slice shadow exp moreover isotropy group acts transitively let denote identity component kernel transitive action clear fixed point connected component containing cohomogeneity one submanifold identity component normalizer corresponding chamber system denoted subcomplex inherits incidence structure gives rise tits geometry rank lemma reduction connected chamber system type building reduction holds proof axiom two points incident two different lines know grq therefore configuration contained fixed point set since definition clearly subgroup implies length circuit building contradiction following technical criterion useful lemma building criterion connected chamber system building reduction following property holds lie group grq normalizer contained grq either proof previous lemma suffices verify suppose true pair points incident subgroup grq let assumption grq however grq since building particular length circuit building contradiction remark cover constructed note property inherited likewise group graph fuquan fang karsten grove gudlaugur thorbergsson homomorphism restricted satisfies property note construction local data reduction isomorphic local data follows proofs component reduction corresponding component building covering main result theorem fgt applies remark subgroup assumption building criterion may replaced fixed point component building rank building latter notice theorem rank spherical building cat space hence two points distance less joined unique geodesic clearly excludes length circuit proof since perimeter remark note clearly similarly kernels vertex edge isotropy groups particular identity component identity component kernel acting consequently reduction cohomogeneity two manifold type either containing cohomogeneity one manifold classification outline organization subsequent sections devoted proof following main result paper theorem let compact simply connected positively curved polar associated chamber system type universal cover building equivariantly diffeomorphic one exceptional polar actions combined main result fgt proves theorem introduction purpose section describe proof organized according four types scenarios driven possible compatible types slice representations vertices chamber common feature scenario cases determination local data basic input indeed knowledge slice representations vertices chamber lemma local data identifies desired reduction cohomogeneity one action referred building criteria lemma property essentially automatic main difficulty establish corresponding reduction cover construction local data building first step frequently uses 
following consequence classification work positively curved cohomogeneity one manifolds gwz lemma simply connected positively curved cohomogeneity one manifold multiplicity pair different equivariantly diffeomorphic rank one symmetric space already pointed used four possible effective slice representations particular forcing codimensions orbit strata corresponding rank three geometry positive curvature table respectively singular respectively principal isotropy groups effective slice representation restricted unit sphere codimensions singular orbits psu spin spin spin table effective representations similarly see table identity component possible effective type slice representations compatible multiplicity restrictions table known well see table gwz corrected error exceptional spin representation see also gkk main theorem even odd even odd spin table effective representation aside exceptional representations isotropy representations grassmannians pairs multiplicities fuquan fang karsten grove gudlaugur thorbergsson occur exceptional representations corresponding spin note effectively four exceptional slice representations corresponding last four rows table however special situations occur also slice representation isotropy representation real grassmann manifold multiplicity happen refer flips may expected low multiplicity cases play important special roles latter two exceptional cayley plane emerges cases complete information polar data required accordingly organized proof four sections depending type slice representations along three grassmann flips three grassmann series two non minimal two minimal grassmann representations four exceptional representations grassmann flip slice representation section deal multiplicity cases leaving minimal odd section following common features lemma isotropy groups connected reducible slice representation standard action kernels slice representations proof transversality lemma implies orbits simply connected since particular connected since second claim follows since even appendix fgt description reducible polar representations since table well act effectively respective normal spheres see also since kernel lemma hence recall identity component kernel action restricted lemma clearly acts transitively corresponding normal sphere kernel identity component moreover hence injective reduction positively curved irreducible cohomogeneity one manifolds multiplicity pair proof note acts trivially second claim follows since since injective see table hence cohomogeneity one multiplicity pair complete proof assume contradiction action reducible action equivalent sum action isotropy cases easy see center intersects center nontrivial subgroup together primitivity implies rank three geometry positive curvature center notice subgroup factor acts freely unit sphere slice thus fixed point set coincides orbit classification positively curved homogeneous spaces get immediately product one orthogonal groups unitary groups big enough contain simple group desired result follows although remains spirit flip cases cary arguments case individually beginning proposition flip case covering building isotropy representation linear model proof lemma tables obtain following information local data spin spin spin spin spin spin also lemma lemma see corresponding reduction tensor product representation type induced easily seen assumption lemma satisfied well particular associated chamber system building type lemma conclude building latter two cases use bundle construction polar actions 
obtain free covering guided knowledge cohomogeneity one diagrams data cohomogeneity one manifolds proceed follows note since simple groups trivial homomorphism exists let graphs projection homomorphisms denote total space corresponding principal bundle polar manifold covers let graph choice data follows hopf bundle bundle former case building done lemma via latter case action reduction primitive connected however connected component building hence corresponding component building covering combined previous section turn shows lens space simply connected proposition flip case covering building isotropy representation linear model proof lemma tables obtain following information local data modulo common kernel spin spin spin case lemma lemma see corresponding reduction linear tensor product fuquan fang karsten grove gudlaugur thorbergsson representation type induced easily seen assumption lemma satisfied well particular associated chamber system building type lemma conclude building lens space proceed bundle construction trivial homomorphism exists choose graphs projection homomorphisms denote total space corresponding principal bundle polar manifold covers choice data follows hopf bundle bundle proof completed proposition flip case covering building isotropy representation linear model proof begin verifying earlier claim see connected also case already know hence connected slice representation product action singular isotropy group along away origin hence isotropy group hand suppose connected psu particular slice representation along psu acting acts complex conjugation contradicting tables yield following information local data modulo kernel moreover factor face isotropy group lemma lemma see corresponding reduction linear tensor product representation type induced assumption lemma easily checked hold particular conclude building latter two cases guided reduction bundle construction choice let graph homomorphism defined sending det det graph projection homomorphism yields compatible choice data polar action principal bundle whose corresponding chamber system free cover choice data follows hopf bundle bundle proof completed remark tensor representation polar polar projective space hand necessary construction rank three geometry positive curvature covering factors since face isotropy groups subgroups hence compatible homomorphism trivial face isotropy groups non minimal grassmann series slice representation recall three infinite families cases corresponding real complex quaternion grassmann series slice representation point special two ways two scenarios one corresponding flip case covered previous subsection standard yet standard appear reduction general cases case two scenarios well local data one belonging family moreover cases admit reduction flip case whereas reasons provided subsection deal multiplicity cases uniform treatment although case significantly different general cases treated begin pointing common features cases including case describe information local data uniform fashion use denote according exceptional convention depending whether center finite also use symbol mean isomorphic finite connected covering lemma cases connected moreover additional possibility vertex isotropy groups moreover normal subgroup block subgroup denotes nontrivial extension particular proof connectedness claim direct consequence transversality proof follows strategy cases simpler vertex isotropy groups conneceted two possibilities correspond different rank possibilities table reasons provide proof subtle 
case first notice effective slice representation type principal isotropy group hence extension kernel hand table possible quotient diagonal center even therefore also extension together lemma implies hence particular conclude similarly acting normal sphere principal isotropy group thus since fuquan fang karsten grove gudlaugur thorbergsson get easily since hand subgroup hence contains exactly two connected components whose identity component follows rest proof straightforward note contains normal subgroup fact reduction action identity component normalizer give geometry type play important role cases follows consider reduction rather one lemma cohomogeneity one manifold multiplicity pair action equivalent reducible cohomogeneity one action proof simplicity give proof cases first note orbit space cohomogeneity one two singular isotropy groups mod kernel respectively principal isotropy group hence multiplicity pair prove reducible argue contradiction indeed equivariantly diffeomorphic product action follows normal subgroup also normal primitivity hgr hence normal hand face isotropy group contains subgroup sits therefore projection homomorphism epimorphism however since sits must trivial homomorphism trivial contradiction immediately much help since several positively curved irreducible cohomogeneity one manifolds multiplicity pair tables gwz whose associated chamber system type however respectively corresponding multiplicity pairs respectively read classification gwz corollary universal covering equivariantly diffeomorphic linear action type ready deal family individually beginning standard case almost effective slice representation defining tensor product representation proposition standard case associated chamber system building isotropy representation linear model proof lemma normal subgroup principal isotropy group consider reduction action normalizer polar action section lemma clear identity component hence subaction identity component type rank three geometry positive curvature right angle therefore classification geometries section fgt immediate universal cover equivariantly difffeomorphic linear action particular section chamber complex subaction building type done remark since property clearly satisfied remains prove simply connected consider normal subgroup fixed point component homogeneous manifold positive curvature dimension least two since dimension since identity component isotropy group see according equivalently according argue contradiction identity connected component normalizer acts transitively principal isotropy group hence contradiction since proposition standard case chamber system covered building isotropy representation linear model proof first note reduction positively curved cohomogeneity two manifold type multiplicity triple moreover block subgroup course prove reductions simply connected appealing nectivity lemma wilking proceed prove codimm codimm spherical isotropy lemma every irreducible isotropy subrepresentation defining representation table gwz fact follows simple normal subgroup projects block subgroup finally one exceptional lie groups hand flip proposition normalizer either modulo since block subgroup together implies fact one factor exist particular representation along contains exactly copies one copy along normal slice two copies along orbit therefore codimension hence codimension connectivity lemma wilking conclude induction particular simply connected hence dim odd dim even flip proposition since assumption lemma satisfied conclude building dim odd 
remains prove covered building dim even case know hand transversality lemma follows hence contains least center lemma get least factor situation proof lemma consequence fuquan fang karsten grove gudlaugur thorbergsson proceed construction principal bundle conclude associated chamber system building covering proposition standard case chamber system building isotropy representation linear model proof since assumption lemma easily seen satisfied suffices corollary prove simply connected proof general case achieved via wilkings connectivity lemma consider normal subgroup clear homogeneous space transitive action identity component normalizer isotropy group classification positively curved homogeneous spaces get either moreover universal cover particular rank rank lemma hand lemma table gwz follows contains normal subgroup isomorphic chain block subgroups finite cover let hand corollary know together information implies proof case see isotropy representation along contains exactly three copies one copy along normal slice two copies along orbit particular codimension recalling dimension follows connectivity induction simply connected minimal grassmann slice representation section deal multiplicity cases including appearance exceptional cayley plane action previous cases reductions considered irreducible polar actions however encounter reductions reducible cohomogeneity two actions rely independent classification actions sections fgt begin case know universal covering reduction diffeomorphic first two scenarios follow outline general case whereas latter significantly different proposition case multiplicities covered building isotropy representation linear model provided diffeomorphic proof lemma either depending whether finite latter case reduction positively curved cohomogeneity two manifold type multiplicity triple general case therefore flip proposition desired result follows proof proposition rank three geometry positive curvature assume finite kernel correspondingly moreover assumption duction corollary normalizer contains semisimple part hand rank lemma know resp dim odd resp even particular maximal rank subgroup case immediate borel siebenthal see table page simple group rank similarly claim simple group rank indeed lemma table gwz would follow block subgroup however possible since would contain thus nontrivial lie groups without loss generality assume projection nontrivial image must contained otherwise normalizer would much smaller primitivity easy see diagonally imbedded since hgt hgt particular rank least two since projections almost imbeddings finite kernel rank two easy see neither scenario possible latter since primitivity former semisimple part therefore lemma table gwz note dimm principal orbit dimension least lar follows wilkings connectivity simply connected thus general case desired result follows lemma proposition case multiplicities equivariantly diffeomorphic cayley plane isometric polar action provided diffeomorphic proof recall lemma slice representation follows every irreducible subrepresentation normal space standard representation particular codimension multiple dimension divisible isotropy group correspondingly rank lemma lemma isotropy representations well spherical transitive table gwz follows simple group rank moreover contain normal subgroup since semisimple part would contradiction assumption reduction hand note identity component normalizer since maximal isotropy group hence acts freely positively curved fixed point set even dimension therefore contain normal 
subgroup since otherwise would block subgroup hence would trivial consequently simple group moreover diagonally imbedded particular contain subgroups easy see subgroup either say hence follows fuquan fang karsten grove gudlaugur thorbergsson furthermore neither group rank since otherwise contains rank semisimple group hence latter however impossible indeed case center would contained hence every principal isotropy groups center invariant conjugation thus summary proved indeed quotient group diagonally imbedded claim combined analysis isotropy groups modulo conjugation force polar data noting face isotropy groups intersections vertex isotropy groups upper block subgroup product lower block subgroups words recognition theorem polar actions one polar action hand unique action maximal subgroup isometry group cayley plane indeed polar type pth prove claim conjugation may assume claimed moreover conjugation element face isotropy group may assume lower block subgroup second factor note normal subgroup indeed second factor since follows product lower block subgroups since hsu principal isotropy group desired assertion follows next deal case multiplicity two scenarios one naturally viewed part infinite family whereas viewed flip case point unlike cases chamber system cover arises first case corresponding polar action proposition multiplicity case chamber system covered building isotropy representation either linear model proof recall first claim identity component see recall kernel either claim follows since dim nontrivial contradiction lemma also conclude since otherwise hence isomorphic either standard case fold covering flip case rank lemma follows rank start following observation let cyclic subgroup principal isotropy group image action reduction reducible polar action cohomogeneity see note type orbit reduction longer vertex indeed normalizer rank three geometry positive curvature addition note identity component every face isotropy group dual generation lemma fgt conclude semisimple part rank one proceed prove simple group rank direct consequence combined following algebraic fact rank simple group one center normalizer order subgroup contains semisimple subgroup rank least algebraic fact easily established noticing inclusion map either lifted homomorphism one four matrix groups sits quotient image diagonally imbedded one matrix groups next going prove rank group either equivariant diffeomorphism exactly case exclude since subgroup normalizer containing exclude exceptional group otherwise must contained either center reason finally orbit reduction contains dual generation lemma fgt impossible since identity component isotropy group face opposite circle act transitively orbit therefore local isomorphism respectively one checks corresponding isotropy group data given respectively inclusion induced field homomorphism block subgroup recognization theorem yields suppose rank lie group acts freely acts polar fashion type hence even dimensional thus either case know universal cover chamber system building since connected chamber system covering follows universal cover consider remaining case act freely let cyclic group note since would simple group factor absurd particular part rank least two thus may assume rank two group moreover argument case immediate fact either notice trivial polar manifold section type connectivity lemma follows simply connected hence classification geometries diffeomorphic chamber system building building fuquan fang karsten grove gudlaugur thorbergsson therefore may assume 
following hence follows either split rest proof according abelian either case note normalizer get immediately appealing suffices prove action free since situation reduces previous rank case note normal follows neither see would contain normal subgroup contradicting table proof case similar simpler hence either orbit assuming immediate list positively curved homogeneous spaces hand notice connected indeed follows simply connected however totally geodesic submanifold dimension contradiction wilking connectivity lemma assuming corresponding universal cover sphere dimension either latter case ruled follows center hence also center impossible since acts effectively assumption nontrivial homomorphisms hence impossible former case action equivalent standard linear action spherical space form kernel thus contradiction cii simple rank one group either show case local data forcing data coincide isotropy representation hence action determined via recognition first prove start observation moreover diagonally imbedded subgroup indeed otherwise order element normalizer contains rank semisimple subgroup contradicting reason see hence similarly must diagonally embedded impossible since finite finally given follows hence since sits diagonally follows sits diagonally particular using arguments see together isotropy data determined exceptional slice representation section deal remaining cases exceptional multiplicities latter occur case include exceptional action cayley plane rank three geometry positive curvature proposition case multiplicities effective slice representation tensor representation either equivariantly diffeomorphic cayley plane isometric polar action building tensor product representation spin linear model proof transversality lemma conclude connected since simply connected kernel normal subgroup well principal isotropy group quotients respectively table slice lemma acts effectively combining table follows identity components thus spin latter however impossible since quaternion group order hand table slice representation natural tensor representation center kernel contradiction therefore consequently lemma case assume lemma dimm even table page subgroup rank simple group therefore rank one group table face isotropy group diagonally embedded follows composition homomorphism nontrivial hence surjective onto hence since proper nontrivial normal lie subgroup quotient already know diagonal subgroup given epimorphism monomorphism clear conjugation standard upper block matrices subgroup proof proposition claim one polar action data since dealing non classical lie group however proceed follows given another type polar action isomorphic local data along chamber vertices without loss generality may assume moreover since two subgroups conjugate moreover assume since singular isotropy groups pair slice representation unique conjugation particular principal isotropy groups prove clearly implies assertion since generated recall composition projection monomorphism composition hence diagonal subgroup whose projection factor injective hence suffices show projection images coincide hand note projection image normalizer identity component principal isotropy group assertion follows fuquan fang karsten grove gudlaugur thorbergsson existence note maximal subgroup isometry group cayley plane corresponding unique isometric action indeed polar proved type case assume lemma dimm odd consider reduction action identity component normalizer note also type polar action multiplicity triple appealing lemma codimension 
divisible thus case follows universal cover identity component either modulo kernel going prove simply connected suffices show follows trivially connectivity lemma wilking codimension normal subgroup rank group isomorphic hence easy count codimension see strictly less normal subgroup lemma isotropy representation spherical transitive hence contains normal simple lie subgroup spin spherical claim spin contains spin spin block subgroup spin hence contains spin contradicts proves spin rank group get isotropy subrepresentation contains exactly three copies standard defining representation hence desired estimate codimension summary conclude hence multiplicity case chamber system action building type remark conclude building proposition polar action type type multiplicities effective slice representation tensor product representation spin proof prove slice representation chamber system building desired claim follows classification buildings indeed building proceed note table spin principal isotropy group follows local isomorphism notice reduction cohomogeneity section clear type since vertex vertex angle classification geometries follows either claim hence chamber system building appealing follows building see claim suffices prove orientable hence simply connected thanks positive curvature isotropy representation defining complex representation immediate hence oriented maximal torus rank three geometry positive curvature proposition multiplicity triple two scenarios either case building linear model adjoint polar representation either proof lemma know vertex isotropy groups connected notice table slice representation adjoint representation together proposition local isomorphism local isotropy group data determined follows moreover let consider reduction polar manifold section reduction notice face multiplicity face exceptional normal sphere therefore action reducible fundamental chamber reflection image exceptional orbit type particular multiplicities hence slice representation adjoint representation clearly implies fixed point hand notice orientable hence simply connected therefore theorem fgt know since property holds follows remark building remark remark proof chamber system building type one proposition case multiplicities chamber system covered building isotropy representation linear model proof lemma know isotropy groups connected note lemma easy see semisimple local isomorphism subgroup semisimple isotropy groups data product corresponding data prove contains normal subgroup lemma isotropy representations spherical normal factors face isotropy groups hence normal factor either table gwz moreover subgroup contained block subgroup resp block subgroup resp since contains follows resp resp rule former case consider fixed point set polar action clearly reducible cohomogeneity action vertex angle dual generation lemma fgt follows either fixed point case product face isotropy group opposite reduction immediate note semisimple dim even rank rank lemma hence remaining case dim odd fuquan fang karsten grove gudlaugur thorbergsson prove local isomorphism indeed clear rank hence rank group suffices prove let clear projection trivial restricted either primitivity hgt hgt therefore hence complete proof split two cases dim even odd former clear subgroup normalizer forces isotropy groups data linear cohomogeneity polar action induced isotropy representation hence particular chamber system covered building latter depending fixed point set odd dimensional since isotropy representation defining complex 
representation note equivariantly diffeomorphic standard linear cohomogeneity one action type hence lemma building references alexandrino singular riemannian foliations simply connected spaces differential geom appl borel siebenthal les fermes rang maximum des groupes lie clos comment math helv bsp burns spatzier topological tits buildings classification inst hautes sci publ math charney lytchak metric characterizations spherical euclidean buildings geom topol dearricott positive curvature duke math eschenburg heintze classification polar representations math fgt fang grove thorbergsson tits geometry positive curvature preprint gorodski kollross remarks polar actions preprint gozzi low dimensional polar actions arxiv gvz grove verdiani ziller exotic positive curvature geom funct anal gwz grove wilking ziller positively curved cohomogeneity one manifolds geometry differential geom grove ziller polar actions manifolds journal fixed point theory applications gkk knarr kramer compact connected polygons geom dedicata hopf samelson ein satz die geschlossener liescher gruppen comment math helv kramer lytchak homogeneous compact geometries transform groups lytchak polar foliations symmetric spaces geom funct anal neumaier sporadic geometries related pgl arch math pth thorbergsson polar actions symmetric spaces differential geom ronan lectures buildings perspectives mathematics academic press boston rank three geometry positive curvature wie sugahara isometry group diameter riemannian manifold positive curvature math japon tits buildings spherical type finite lecture notes mathematics springerverlag york tits local approach buildings geometric vein coxeter festschrift edited davis sherk springer new verdiani cohomogeneity one manifolds even dimension strictly positive sectional curvature differential wiesendorf taut submanifolds foliations differential geom wilking nonnegatively positively curved manifolds surveys differential geometry vol surv differ geom int press somerville wilking positively curved manifolds symmetry ann math wilking torus actions manifolds positive sectional curvature acta math department mathematics capital normal university beijing china address fuquan fang department mathematics university notre dame notre dame usa address mathematisches institut weyertal germany address gthorber | 4 |
rejection mitigation time synchronization attacks global positioning system feb ali khalajmehrabadi student member ieee nikolaos gatsis member ieee david akopian senior member ieee ahmad taha member ieee paper introduces time synchronization attack rejection mitigation tsarm technique time synchronization attacks tsas global positioning system gps technique estimates clock bias drift gps receiver along possible attack contrary previous approaches estimated time instants attack clock bias drift receiver corrected proposed technique computationally efficient easily implemented real time fashion complementary standard algorithms position velocity time estimation receivers performance technique evaluated set collected data real gps receiver method renders excellent time recovery consistent application requirements numerical results demonstrate tsarm technique outperforms competing approaches literature index positioning system time synchronization attack spoofing detection ntroduction nfrastructures road tolling systems terrestrial digital video broadcasting cell phone air traffic control towers industrial control systems phasor measurement units pmus heavily rely synchronized precise timing consistent accurate network communications maintain records ensure traceability global positioning system gps provides time reference microsecond precision systems systems use civilian gps channels open public unencrypted nature signals makes vulnerable unintentional interference intentional attacks thus unauthorized manipulation gps signals leads disruption correct readings time references thus called time synchronization attack tsa address impact malicious attacks instance pmu data electric power research institute published technical report recognizes vulnerability pmus gps spoofing scenario gps time signal compromise attacks introduce erroneous time stamps eventually equivalent inducing wrong phase angle authors electrical computer engineering department university texas san antonio san antonio usa pmu measurements impact tsas generator trip control transmission line fault detection voltage stability monitoring disturbing event locationing power system state estimation studied evaluated experimentally simulations intentional unauthorized manipulation gps signals commonly referred gps spoofing categorized based spoofer mechanism follows jamming blocking spoofer sends high power signals jam normal operation receiver disrupting normal operation victim receiver often referred loosing lock victim receiver may lock onto spoofer signal jamming data level spoofing spoofer manipulates navigation data orbital parameters ephemerides used compute satellite locations signal level spoofing spoofer synthesizes signals carry navigation data concurrently broadcasted satellites attack spoofer records authentic gps signals retransmits selected delays higher power typically spoofer starts low power transmission increases power force receiver lock onto spoofed delayed signal spoofer may change transmitting signal properties victim receiver miscalculates estimates common gps receivers lack proper mechanisms detect attacks group studies directed towards evaluating requirements successful attacks theoretically experimentally instance work designed real spoofer software defined radio sdr records authentic gps signals retransmits fake signals provides option manipulating various signal properties spoofing spoofing detection techniques literature first level countermeasures reduce effect malicious attacks gps receivers typically 
relies receiver autonomous integrity monitoring raim gps receivers typically apply raim consistency checks detect anomalies exploiting measurement redundancies example raim may evaluate variance gps solution residuals consequently generate alarm exceeds predetermined threshold similar variance authentication techniques proposed table gps poofing etection echniques etection omain mplementation spects method ekf cusum attack detection domain gps navigation domain gps baseband signal domain attack estimated estimated ref gps baseband power grid domains estimated spree ref ref ref ref tsarm gps baseband signal domain gps baseband signal domain gps navigation domain gps navigation domain gps navigation domain gps navigation domain estimated estimated estimated estimated estimated estimated based hypothesis testing kalman filter innovations however vulnerable smarter attacks pass raim checks innovation hypothesis testing plethora countermeasures designed make receivers robust sophisticated attacks vector tracking exploits signals satellites jointly feedbacks predicted position velocity time pvt internal lock loops attack occurs lock loops become unstable indication attack cooperative gps receivers perfrom authentication check analyzing integrity measurements communications also quick sanity check stationary time synchronization devices monitor estimated location true location known priori large shift exceeds maximum allowable position estimation error indication attack receiver receiver used indicator spoofing attack difference ratios two gps antennas proposed metric pmu trustworthiness addition approaches compare receiver clock behavior statistics normal operation existing literature gaps discussed prior research studies addressed breadth problems related gps spoofing however certain gaps still addressed works provide analytical models different types spoofing attacks possible attacking procedure models crucial designing countermeasures spoofing attacks although countermeasures might effective certain type attack comprehensive countermeasure development still lacking defending gps receiver practically needed receiver predict type attack main effort literature detection possible spoofing attacks however even spoofing detection gps receiver resume normal operation especially pmu applications network normal operation interrupted spoofing countermeasures detect attacks also mitigate effects network resume normal operation need simpler solutions integrated current systems implementation aspects benchmark common gps receivers applies hypothesis testing packets received signal combines statistics ratio difference two gps antennas applies auxiliary peak tracking correlators receiver applies vector tracking loop needs collaboration among multiple gps receivers applies particle filter applies hypothesis testing gps clock signature applies optimization technique relevant yes yes yes contributions work work addresses previously mentioned gaps stationary time synchronization systems best knowledge first work provides following major contributions new method mere spoofing detector also estimates spoofing attack spoofed signatures clock bias drift corrected using estimated attack new method detects smartest attacks maintain consistency measurement set descriptive comparison solution representative works literature provided table review spoofing detection domain shows prior art operates baseband signal processing domain necessitates manipulation receiver circuitry hence approach present paper compared works 
whose detection methodology lies navigation domain proposed tsa detection mitigation approach paper consists two parts first dynamical model introduced analytically models attacks receiver clock bias drift proposed novel time synchronization attack rejection mitigation tsarm approach clock bias drift estimated along attack secondly estimated clock bias drift modified based estimated attacks receiver would able continue normal operation corrected timing application proposed method detects mitigates effects smartest consistent reported attacks position victim receiver altered attacks pseudoranges consistent attacks pseudorange rates different outlier detection approaches proposed method detects anomalous behavior spoofer even measurement integrity preserved spoofing mitigation scheme following desirable attributes solves small quadratic program makes applicable commonly used devices easily integrated existing systems without changing receiver circuitry necessitating mulitple gps receivers opposed run parallel current systems provide alert spoofing occurred without halting normal operation system corrected timing estimates computed proposed technique evaluated using commercial gps receiver measurements access measurements perturbed spoofing attacks specific pmu operation applying proposed technique shows clock bias receiver corrected within maximum allowable error pmu ieee standard paper organization brief description gps described section provide models possible spoofing attacks section iii section elaborates proposed solution detect modify effect attacks solution numerically evaluated section followed conclusions section gps pvt stimation section brief overview gps position velocity time pvt estimation presented main idea localization timing gps trilateration relies known location satellites well distance measurements satellites gps receiver particular gps signal satellite contains set navigation data comprising ephemeris almanac typically updated every hours one week respectively together signal time transmission data used compute satellite position earth centered earth fixed ecef coordinates function known gps receiver let denote time signal arrives gps receiver distance user gps receiver satellite found multiplying signal propagation time speed light quantity called pseudorange number visible satellites pseudorange exact distance receiver satellite clocks biased respect absolute gps time let receiver satellite clock biases denoted respectively therefore time reception related absolute values gps time follows tgps tgps computed received navigation data considered known however bias must estimated subtracted measured yield receiver absolute gps time tgps used time reference used synchronization synchronization systems time stamp readings based coordinated universal time utc known offset gps time tutc tgps available let coordinates gps receiver true range satellite distance expressed via locations times tgps tgps kpn tgps tgps therefore measurement equation becomes kpn represents noise unknowns therefore measurements least four satellites needed estimate https accessed furthermore nominal carrier frequency mhz transmitted signals satellite experiences doppler shift receiver due relative motion receiver satellite hence addition pseudoranges pseudorange rates estimated doppler shift related relative satellite velocity user velocity via kpn clock drift cases four visible satellites resulting overdetermined system equations typical gps receivers use nonlinear weighted least squares wls solve 
provide estimate location velocity clock bias clock drift receiver often referred pvt solution additionally exploit consecutive nature estimates dynamical model used conventional dynamical model stationary receivers random walk model chap time index time resolution typically sec noise dynamical system measurement equations basis estimating user pvt using extended kalman filter ekf previous works shown simple attacks able mislead solutions wls ekf stationary gpsbased time synchronization systems currently equipped mode option potentially detect attack gps position differs known receiver location maximum allowed error used first indication attack advanced spoofers ones developed ability manipulate clock bias drift estimates stationary receiver without altering position velocity latter zero even ekf conventional dynamical models perturbations pseudoranges pseudorange rates designed directly result clock bias drift perturbations without altering position velocity receiver iii odeling ime ynchronization attacks section puts forth general attack model encompasses attack types discussed literature model instrumental designing technique discussed next section tsas different physical mechanisms manifest attacks pseudorange pseudorange rates attacks modeled direct perturbations time time fig type attack pseudorange pseudorange rate versus local observation time spoofing perturbations pseudoranges pseudorange rates respectively respectively spoofed pseudorange pseudorange rates typical spoofer follows practical considerations introduce feasible attacks considerations formulated follows attack meaningful infringes maximum allowed error defined system specification instance pmu applications attack exceed maximum allowable error tolerance specified ieee standard total variation error tve equivalently expressed phase angle error clock bias error bias error hand cdma cellular networks require timing accuracy due peculiarities gps receivers internal feedback loops may loose lock spoofed signal spoofer signal properties change rapidly designed spoofers ability manipulate clock drift manipulating doppler frequency clock bias manipulating code delay perturbations applied separately however smartest attacks maintain consistency spoofer transmitted signal means pertubations pseudoranges integration perturbations pseudorange rates distinguishing two attack procedures advantageous literature includes research reports technical intricacies spoofer constraints type spoofer manipulates authentic signal bias abruptly changes short time fig illustrates attack attack pseudoranges suddenly appears perturbs pseudoranges equivalent attack pseudorange rates dirac delta function type spoofer gradually manipulates authentic signals changes clock bias time attack modeled http accessed fig type attack pseudorange pseudorange rate versus local observation time respectively called distance equivalent velocity distance equivalent acceleration attack maintain victim receiver lock spoofer signals attack exceed certain distance equivalent velocity two limiting numbers reported literature namely acceleration reach maximum spoofing velocity reported spoofer acceleration random makes type attack quite general distance equivalent velocity converted equivalent bias change rate dividing velocity speed light fig illustrates attack attack pseudoranges starts perturbs pseudoranges gradually distance equivalent velocity exceeding maximum distance equivalent random acceleration satisfying introduced attack models quite general mathematically 
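The two attack profiles described above (an abrupt step for type I, a gradual ramp with a bounded distance-equivalent velocity for type II) can be synthesized in a few lines. The sketch below is purely illustrative: the epoch interval, onset epoch, step size and ramp slope are assumed placeholder values, not the figures cited in the paper.

```python
import numpy as np

# Illustrative synthesis of the two attack profiles, expressed as a perturbation
# d[k] (in meters) added to c*(clock bias).  Every number below is an assumed
# placeholder, not a value taken from the paper.
c = 299_792_458.0      # speed of light [m/s]
T = 1.0                # epoch interval [s] (assumed)
N = 600                # number of epochs (assumed)
k0 = 200               # attack onset epoch (assumed)

# Type I: abrupt step of ~1 microsecond (about 300 m) in the clock bias
d_type1 = np.zeros(N)
d_type1[k0:] = 1e-6 * c

# Type II: gradual ramp whose slope is limited by an assumed maximum
# distance-equivalent velocity v_max (equivalent bias change rate v_max / c)
v_max = 1.0            # [m/s]; placeholder for the limits cited in the text
d_type2 = np.zeros(N)
d_type2[k0:] = np.arange(N - k0) * T * v_max

# A "consistent" attack perturbs every pseudorange by d[k] and every pseudorange
# rate by its discrete derivative:
ddot_type2 = np.diff(d_type2, prepend=0.0) / T
```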
capture attacks victim receiver measurements pseudoranges pseudorange rates discussed section another words type type attacks result data level spoofing signal level spoofing attack combination aformentioned attacks main difference type type attacks spoofing speed speed attack depends capabilities spoofer respect manipulating various features gps signals indeed attacks different speeds reported literature provided earlier present section work deal jamming disrupts navigation functionality completely whereas spoofing misleads next section dynamical model clock bias drift introduced incorporates attacks based dynamical model optimization problem estimate attacks along clock bias drift proposed dynamical odel tsa ejection itigation section introduces dynamical model accommodate spoofing attack method estimate attack afterwards procedure approximately nullifing effects attack clock bias drift introduced novel dynamical model modeling attack pseudoranges pseudorange rates motivated attack types discussed previous section attacks alter position velocity clock bias clock drift model follow conventional dynamical model stationary receivers allows position receiver follow random walk model instead known position velocity victim receiver exploited jointly state vector contains clock bias clock drift attacks explicitly modeled components leading following dynamical model cbu cbu csb cwb attacks clock bias clock drift colored gaussian noise samples covariance function defined chap sides multiplied typically adopted convention state noise covariance matrix particular crystal oscillator device similarly define measurement equation cbu kpn explicit modeling indicates dynamical model benefits using stationary victim receiver known position velocity latter zero measurement noise covariance matrix obtained measurements receiver detailed explanation obtain state measurement covariance matrices provided section noted state covariance depends victim receiver clock behavior change spoofing however measurement covariance matrix experiences contraction reason ensure victim receiver maintains lock fake signals spoofer typically applies power advantage real incoming gps signals victim receiver front end comparing tsas alter position velocity transfer attack pseudoranges pseudorange rates directly clock bias clock drift thus holds csb attack detection let define time index within observation window length running time index solution dynamical model obtained stacking measurements forming following optimization problem kyl hxl argmin fxl estimated states estimated attacks regularization coefficient total variation matrix forms variation signal time first term weighted residuals measurement equation second term weighted residuals state equation last regularization term promotes sparsity total variation estimated attack clock bias clock drift estimated jointly attack model two introduced attacks considered type attack step attack applied pseudoranges solution clock bias equivalently experiences step attack time kdsl indicates rise tracks significant differences two subsequent time instants magnitude estimated attack two adjacent times change significantly total variation attack close zero otherwise presence attack total variation attack includes spike attack time type attack total variation attack show significant changes attack magnitude small beginning sparsity evident initially although explained meaningful expect nonzero entries total variation attacks general necessary condition capturing attacks initial small 
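The batch estimation problem just described, weighted least-squares residuals of the measurement and state equations plus an l1 penalty on the total variation of the attack, is a small quadratic program. The following sketch shows one way such a problem could be posed with CVXPY. The window length L, the number of measurements per epoch m, the stand-in observation matrix H, the synthetic measurements y and the weight lam are all assumptions for illustration; the inverse-covariance weights are omitted for brevity, and the exact stacking used by the authors may differ.

```python
import numpy as np
import cvxpy as cp

# Minimal sketch of the windowed joint estimation of clock states and attack.
L, m = 20, 6                                   # window length, measurements per epoch (assumed)
A = np.array([[1.0, 1.0],                      # two-state clock transition (T = 1 s)
              [0.0, 1.0]])
H = np.zeros((m, 2))
H[:3, 0] = 1.0                                 # stand-in: 3 pseudorange rows see the bias
H[3:, 1] = 1.0                                 # stand-in: 3 rate rows see the drift
y = np.random.randn(L, m)                      # stand-in for corrected measurements
lam = 10.0                                     # total-variation weight (assumed)

x = cp.Variable((L, 2))                        # clock bias / drift over the window
d = cp.Variable((L, 2))                        # attack on bias / drift over the window

meas_res  = cp.sum_squares(y - (x + d) @ H.T)              # measurement residuals
state_res = cp.sum_squares(x[1:, :] - x[:-1, :] @ A.T)     # state-equation residuals
tv        = cp.sum(cp.abs(d[1:, :] - d[:-1, :]))           # total variation of the attack

prob = cp.Problem(cp.Minimize(meas_res + state_res + lam * tv))
prob.solve()
print(d.value[:, 0])                           # estimated attack on the clock bias
```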
total variation magnitudes means explicit modeling attacks estimation require attacks exhibit sparsity total variation furthermore bias bias drift corrected using estimated attack provide one mechanism section sparsity total variation appears subsequent time instants time instants attack appears prominent effect low dynamic behavior attack magnified fact facilitates attack detection also verified numerically effect direct consequence correction scheme discussed next section optimization problem boils solving simple quadratic program specifically epigraph trick convex optimization used transform linear constraints observation window slides lag time tlag set tlag realtime operation next section details sliding window operation algorithm elaborates use solution order provide corrected bias drift state correction observation window length estimated attack used compensate impact attack clock bias clock drift measurements revisiting attack model bias time depends clock bias clock drift time dependence successively traces back initial time therefore attack bias occurred past accumulated time similar observation valid clock drift clock bias time therefore contaminated cumulative effect attack clock bias clock drift previous times correction method takes account previously mentioned effect modifies bias drift subtracting cumulative outcome clock bias drift attacks follows respectively corrected clock bias respectively corrected pseudorange clock drift pseudorange rates one vector length first observation window tlag observation windows afterwards ensures measurements states doubly corrected corrected measurements used solving next observation window overall attack detection modification procedure illustrated algorithm receiver collects measurements problem solved based estimated attack clock bias clock drift cleaned using process repeated sliding window clock bias drift time instants cleaned previously corrected another words duplication modification states proposed technique boils solving simple quadratic program variables thus performed real time example efficient implementations quadratic programming solvers readily available lowlevel programming languages implementation technique gps receivers electronic devices thus straightforward necessitate creating new libraries umerical esults first describe data collection device assess three representative detection schemes literature fail detect tsa attacks attacks mislead clock bias clock drift maintaining correct location velocity estimates performance detection modification technique attacks illustrated afterwards algorithm tsa rejection mitigation tsarm set true batch construct compute details provided section estimate via assign assign modify via first window tlag windows afterwards set output tutc user time stamping slide observation window setting tlag end gps data collection device set real gps signals recorded google nexus tablet university texas san antonio june ground truth position obtained taking median wls position estimates stationary device device recently equipped gps chipset provides raw gps measurements android application called gnss logger released along matlab codes google android location team interest two classes package provides gps receiver clock properties provides measurements gps signals accuracies obtain pseudorange measurements transmission time subtracted time reception function getreceivedsvtimenanos provides transmission time signal respect current gps week midnight signal reception time available using function 
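The sliding-window operation and the correction step described above can be organized as a simple loop. In the sketch below, solve_window is a stand-in for the quadratic program of the previous sketch (it returns zeros here so the fragment runs), the cumulative-sum correction is a simplified placeholder for the paper's correction equations (which also account for the drift attack feeding into the bias), and the window length, slide tlag and measurement stream are assumed values.

```python
import numpy as np

def solve_window(y_win):
    """Stand-in for the windowed quadratic program sketched above.  Returns zeros
    here; in practice it would return the estimated states x_hat (L x 2) and
    attacks d_hat (L x 2) for the window."""
    L = y_win.shape[0]
    return np.zeros((L, 2)), np.zeros((L, 2))

L, tlag = 20, 5                      # window length and slide (assumed values)
y = np.zeros((1000, 6))              # stand-in for the stream of corrected measurements

k = 0
corrected_bias, corrected_drift = [], []
while k + L <= len(y):
    x_hat, d_hat = solve_window(y[k:k + L])
    # remove the accumulated effect of the estimated attack (simplified form)
    b_corr = x_hat[:, 0] - np.cumsum(d_hat[:, 0])
    f_corr = x_hat[:, 1] - np.cumsum(d_hat[:, 1])
    keep = slice(0, L) if k == 0 else slice(L - tlag, L)   # whole first window, then only new samples
    corrected_bias.extend(b_corr[keep])
    corrected_drift.extend(f_corr[keep])
    # the measurements in y[k:k+L] would likewise be corrected here before the
    # window slides, so that later windows work with cleaned data
    k += tlag
```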
gettimenanos translate receiver time gps time gps time week package provides difference device clock time gps time function getfullbiasnanos receiver clock covariance matrix dependent statistics device clock oscillator following data available https gps https accessed https accessed bias ekf normal ekf spoofed time drift ekf normal ekf spoofed time fig effect type attack ekf particle filter clock bias clock drift attack started panel include drift attack consistent attack inconsistent attack attack started attack started time fig performance hypothesis testing based statistic type attack different false alarm probabilities attack inconsistent attack consistent attack model typically adopted select chap calculating measurement covariance matrix uncertainty pseuodrange pseudorange rates used uncertainties available device together respective experiments set distance magnitudes tens thousands meters estimated clock bias drift ekf normal operation considered ground truth subsequent analysis follows reported times local failure prior work detecting consistent attacks section demonstrates three relevant approaches table may fail detect consistent attacks attacks integral performance ekf particle filter subject type attack reported first perturbations gps measurements fig used input ekf particle filter attack starts fig depicts effect attack clock bias drift ekf dynamical model blindly follows attack short settling time particle filter estimates clock bias assumes clock drift known wls similarly ekf particle filter able detect consistent spoofing attack maximum difference receiver estimated position obtained ekf type attack normal operation xdiff ydiff zdiff position estimate thus considerably altered attack third approach evaluated proposed monitors statistics receiver clock typical spoofing detection technique considering gps receivers compute bias regular intervals particular approach estimate gps time time epochs confirm time elapsed indeed hend following statistic bepformulated tgp tgp test statistic normally distributed mean zero attack may nonzero mean depending attack demonstrated shortly variance needs estimated samples normal operation detection procedure relies statistical hypothesis testing false alarm probability defined corresponds threshold compared chap receiver considered attack result method shown fig different false alarm probabilities fig depicts system attack time signature lies thresholds low false alarm probabilities system detect attack case inconsistent type attack integration perturbations pseudorange rates pseudoranges attacked fig shows attack detected right away however smart attacks spoofer maintains consistency pseudorange pseudorange rates fig illustrates signature fails detect attack example shows statistical behavior clock remain untouched smart spoofing attacks addition even attack detected previous methods provide estimate attack spoofing detection type attack fig shows result solving using gps measurements perturbed type attack fig spoofer capability attack signal short time clock bias experiences jump estimated total variation bias attack renders spike right attack time modification procedure corrects clock bias using estimated attack bias spoofed bias normal attack started attack started attack started normal attacked normal attacked normal attacked attack started normal attacked normal attacked normal attacked attack started attack started bias modified bias normal fig result attack detection modification type attack attack started top bottom normal clock 
bias blue spoofed bias red estimated bias attack total variation estimated bias attack true bias blue modified bias magenta time fig comparison normal pseudorange change spoofed pseudoranges change normal pseudorange rates spoofed pseudorange rates type attack visible satellites attack started bias normal attacked attack started mbias drift bias normal attacked attack started time fig result attack detection modification type attack started top bottom normal clock bias blue spoofed bias red total variation estimated bias attack total variation estimated drift attack true bias blue modified bias magenta time bias modified bias normal attack started bias spoofed bias normal bias drift mbias drift bias bias attack started mbias bias attack started bias spoofed bias normal attack started attack started attack started bias modified bias normal time spoofing detection type attack fig result attack detection modification type attack top bottom normal clock bias blue spoofed bias red estimated bias attack total variation estimated bias attack true bias blue modified bias magenta impact type attack pseudoranges pseuodrange rates shown fig specifically fig illustrates normal spoofed pseudorange changes respect initial value visible satellites receiver view fig depicts corresponding pseudorange rates tag end line indicates satellite whether pseudorange pseudorange rate corresponds normal operation operation attack spoofed pseudoranges diverge quadratically starting following type attack type attack algorithm implemented sliding window tlag fig shows attacked clock bias starting since attack magnitude small initial times spoofing neither estimated attack total variation show significant values procedure sliding window correct current clock bias clock drift times modified previously hence first run estimates whole window modified fig shows estimated attack corresponding total variation one tlag obvious figure modification previous clock biases transforms low dynamic behavior spoofer large jump facilitates detection attack total variation component clock bias drift modified previous time instants need cleaned present work set gps signals obtained actual gps receiver real environment attacks simulated based characteristics real spoofers reported literature experimentation behavior proposed detection mitigation approach real spoofing scenarios subject future research rmse eferences fig rmse tsarm various values tlag analysis results let total length observation time experiment root mean square error rmse introduced rmse shows average error clock bias output spoofing detection technique estimated clock bias ekf normal operation considered ground truth comparing results estimated spoofed bias ekf normal bias shows rmseekf error antispoofing particle filter rmsepf applied tsarm clock bias modified maximum error rmsetsarm fig illustrates rmse tsarm range values window size lag time tlag observation window smaller fewer measurements used state estimation hand exceeds number states estimated grows although measurements employed estimation numerical results illustrate models clock bias drift attacks effectively subsequently estimated using corrected oncluding emarks uture ork work discussed research issue time synchronization attacks devices rely gps time tagging measurements two principal types attacks discussed dynamical model specifically models attacks introduced attack detection technique solves optimization problem estimate attacks clock bias clock drift spoofer manipulated clock bias drift corrected using 
estimated attacks proposed method detects behavior spoofer even measurements integrity preserved numerical results demonstrate attack largely rejected bias estimated within true value lies within standardized accuracy pmu cdma applications proposed method implemented operation office electricity delivery energy reliability http accessed official government information global positioning system gps related topics http parkinson spilker axelrad enge global positioning system theory applications american institute aeronautics astronautics vol global positioning system theory applications american institute aeronautics astronautics vol misra enge global positioning system signals measurements performance press lincoln yaesh shaked hinf inity estimation application improved tracking gps receivers ieee trans ind vol mar reliable fusion positioning strategy land vehicles environments based sensors ieee trans ind vol apr electric sector failure scenarios impact analyses version electric power research institute tech schmidt radke camtepe foo ren survey analysis gnss spoofing threat countermeasures acm computer survey vol may moussa debbabi assi security assessment time synchronization mechanisms smart grid ieee commun surveys vol thirdquarter shepard humphreys fansler evaluation vulnerability phasor measurement units gps spoofing attacks int crit infrastruct vol zhang gong dimitrovski time synchronization attack smart grid impact analysis ieee trans smart grid vol mar jiang zhang harding makela spoofing gps receiver clock offset phasor measurement units ieee trans power systems vol risbud gatsis taha assessing power system state estimation accuracy pmu measurements ieee trans smart grid published nighswander ledvina diamond brumley brumley gps software attacks proc acm conf comput commun security tippenhauer rasmussen requirements successful gps spoofing attacks proc acm conf comput commun security wesson gross humphreys evans gnss signal authentication via power distortion monitoring ieee trans aeros elect systems vol psiaki humphreys gnss spoofing detection proc ieee vol june papadimitratos jovanovic positioning attacks countermeasures proc ieee military commun san diego usa zeng qian gps spoofing attack time synchronization wireless networks detection scheme design ieee military communications conference fan zhang trinkle dimitrovski song defense mechanism gps spoofing attacks pmus smart grids ieee trans smart grid vol ranganathan capkun spree spoofing resistant gps receiver proc annual int conf mobile comput chou heng gao robust timing phasor measurement units approach proc int tech meeting sat division institute navigation gao advanced vector tracking robust gps time transfer pmus proc institute navigation conf ion ranganathan locher basin short paper detection gps spoofing attacks power grids proc acm conf security privacy wireless mobile jansen tippenhauer gps spoofing detection error models realization proc annual conf comput security han luo meng novel method based particle filter gnss proc ieee int conf commun icc june zhu youssef hamouda detection techniques datalevel spoofing phasor measurement units proc int conf selected topics mobile wireless netw mownet apr shepard humphreys characterization receiver response spoofing attacks proc int tech meeting sat division institute navigation ion gnss portland humphreys ledvina psiaki hanlon kintner assessing spoofing threat development portable gps civilian spoofer proc int tech meeting sat division institute navigation ion gnss savannah 
motella pini fantino mulassano nicola fortunyguasch wildemeersch symeonidis performance assessment low cost gps receivers civilian spoofing attacks esa workshop sat nav tech european workshop gnss signals signal teunissen quality control integrated navigation systems ieee aeros elect sys magazine vol july heng makela bobba sanders gao reliable timing power systems architecture power energy conf illinois peci heng work gao gps signal authentication cooperative peers ieee trans intell transportation vol radin swaszek seals hartnett gnss spoof detection based pseudoranges multiple receivers proceedings international technical meeting institute navigation masreliez martin robust bayesian estimation linear model robustifying kalman filter ieee trans autom control vol june farahmand giannakis angelosante doubly robust smoothing dynamical processes via outlier sparsity constraints ieee trans signal vol android gnss https accessed ieee standard synchrophasor measurements power systems ieee std revision ieee std model gps satellite clock http php accessed zhang gong dimitrovski time synchronization attack smart grid impact analysis ieee trans smart grid vol march karahanoglu bayram ville signal processing approach generalized total variation ieee trans signal vol boyd vandenberghe convex optimization cambridge university press brown hwang introduction random signals applied kalman filtering matlab exercises solutions new york wiley kay fundamentals statistical signal processing volume detection theory inc ali khalajmehrabadi received degree babol noshirvani university technology iran degree university technology malaysia malaysia awarded best graduate student award currently pursuing degree department electrical computer engineering university texas san antonio research interests include indoor localization navigation systems collaborative localization global navigation satellite system student member institute navigation ieee nikolaos gatsis received diploma hons degree electrical computer engineering university patras greece degree electrical engineering degree electrical engineering minor mathematics university minnesota respectively currently assistant professor department electrical computer engineering university texas san antonio research interests lie areas smart power grids communication networks cyberphysical systems emphasis optimal resource management statistical signal processing symposia area smart grids ieee globalsip ieee globalsip also served editor special issue ieee journal selected topics signal processing critical infrastructures david akopian received degree electrical engineering professor university texas san antonio senior research engineer specialist nokia corporation researcher instructor tampere university technology finland authored coauthored patents publications current research interests include digital signal processing algorithms communication navigation receivers positioning dedicated hardware architectures platforms software defined radio communication technologies healthcare applications served organizing program committees many ieee conferences annual spie multimedia mobile devices conferences research supported national science foundation national institutes health usaf navy texas foundations ahmad taha received degrees electrical computer engineering american university beirut lebanon purdue university west lafayette indiana summer summer spring visiting scholar mit university toronto argonne national laboratory currently assistant professor department 
electrical and computer engineering at the University of Texas at San Antonio. Dr. Taha is interested in understanding how complex systems operate, behave, and misbehave. His research focus includes optimization and control of power systems, observer design, and dynamic state estimation. | 3
information capacity direct detection optical transmission systems nov antonio mecozzi fellow osa fellow ieee mark shtaif fellow osa fellow ieee show spectral efficiency direct detection transmission system less spectral efficiency system employing coherent detection modulation format correspondingly capacity per complex degree freedom systems using direct detection lower bit noise propagation channel index capacity optical detection modulation ntroduction ecently field optical communications witnessing revival interest direct detection receivers often viewed promising alternative expensive coherent counterparts process stimulates interesting fundamental question whose answer present paper dedicated difference information capacity direct detection system system using coherent detection order answer question consider channel schematic illustrated fig consists transmitter capable generating desirable complex waveform whose spectrum contained within bandwidth noise source arbitrary spectrum statistics propagation channel receiver although linearity channel additivity noise immaterial analysis assume properties beginning postponing generalization discussion sec direct detection receiver definition one recovers communicated data intensity absolute square value received electric field using single photodiode illustrated fig consists square optical filter width rejects band noise whose output current proportional received optical processing unit recovers information benchmark compare direct detection receiver coherent receiver whose case received optical field reconstructed intuitively tempting conclude since direct detection receiver ignores one two degrees freedom necessary uniquely characterizing electric field capacity close half capacity coherent system surprisingly notion turns incorrect show paper capacity per complex degree freedom systems using direct detection lower mecozzi department physical chemical sciences university aquila aquila italy shtaif department physical electronics tel aviv university tel aviv israel simplicity follows assume proportionality coefficient obpf processing data recovery fig setup considered paper consists transmitter generate complex waveform whose spectrum contained bandwidth stationary noise source arbitrary spectrum distribution propagation channel receiver schematic direct detection receiver incoming optical field filtered optical filter obpf reject band noise square law detected without manipulation field single photodiode photodiode subsequent electronics assumed bandwidth least accommodate bandwidth intensity waveform bit fully coherent systems correspondingly loss terms spectral efficiency limited greater throughout paper order simplify notation assume transmitted field scalar assumption affect generality results transmission orthogonal polarization components linear channels independent elation prior work result stating direct detection channel characterized almost capacity coherent channel requires clarification view apparent contradiction prior work capacity seemingly similar channel found lower approximately factor two work consists ref published authors current manuscript well number recent works relevant contained refs order avoid confusion adopt terminology refer channels studied papers various flavors intensity channel whereas term direct detection channel used reference channel study reason apparent contradiction boils fact versions intensity channel assume information encoded given rate directly onto intensity transmitted optical signal 
recovered sampling received signal intensity exactly rate cases channel assumed memoryless optical bandwidth hence also spectral efficiency play role studies given practical justification considering old optical systems used optical source diode indeed sources optical phase far noisy used transmitting information source linewidth much greater modulation bandwidth relating spectral efficiency modern sense meaningful contrast direct detection channel inspired modern communications systems vast majority relies highly coherent laser source one whose linewidth substantially smaller bandwidth modulation reason assumption transmitter direct detection channel encode information complex waveform constraint spectrum contained bandwidth addition since process involves frequency doubling spectrum measured intensity contained bandwidth hence sampling rate imperative order extract information present current order clarify difference direct detection channel intensity channel denote field received optical filtering since spectrum contained bandwidth rigorously expressed sinc sinc sin samples carrying transmitted information detected proportional photocurrent sampled rate samples would equal phase information would lost case drop amount extracted information hence capacity would roughly factor similarly results obtained yet direct detection channel sampling done rate samples taken also obtained middle samples given sinc clearly affected phase differences various samples fact final result indicates knowledge intensity samples allows one collect almost information contained complex optical field note idea increasing information rate sampling received analog signal rate higher considered previously natural idea cases receiver contains nonlinear element expands analog bandwidth received waveform sampling longer sufficient order collect information analog waveform case nonlinearity detection expands analog bandwidth exactly factor hence unlike case studied doubling sampling rate produced minuscule benefits sampling sufficient order extract information contained analog intensity waveform benefit increasing sampling rate farther finally instructive relate widespread example additive noise fig white gaussian case theory implies capacity direct detection channel within bit log snr snr ratio average power information carrying signal variance filtered noise summed quadratures conversely demonstrated capacity intensity channel one samples received intensity rate limit high snr log roughly half direct detection channel iii information capacity direct detection receiver definition distinguishable waveforms usually engineering practice two waveforms said distinguishable energy difference greater context optical communications definition restrictive cases interest optical receivers including coherent receivers unable distinguish waveforms differ constant time independent owing reality define waveforms distinguishable told apart ideal coherent receiver formally means distinguishable remain distinguishable according even one rotated complex plane arbitrary constant phase ref capacity high snr limit written log difference snr definition relates noise variance one quadrature order overcome limitation transmitter reciever would share exact scale fraction single optical cycle principle achieved means atomic clock however costs solution one hand minuscule potential benefit terms information rate hand ensure solution deployed fourier coefficients given second equality takes advantage relation fourier series coefficients discrete 
fourier transform periodic signals assign fourier coefficients clearly exp hence special attention needs paid cases value unit circle degree polynomial admits zeros expressed consider functions fig example different waveforms case intensity shown phases eight waveforms plotted explained text largest number distinguishable waveforms intensity stress waveforms differ constant phase counted distinguishable definition values notice distinguishability means coherent receiver necessarily imply distinguishability means direct detection receiver gap two subject subsection follows multiplicity complex waveforms intensity consider complex signal whose spectrum contained within bandwidth periodic time period integer assumption periodicity limiting factor arguments main results established case addressed assuming limit direct detection receiver exploit intensity order extract transmitted data first claim key proving main arguments paper distinguishable legitimate waveforms whose intensities equal illustration idea case found figure order formally prove statements express fourier series elements one zero since exp functions property action exp produces pure phase modulation hence considered dual filters time frequency interchanged combination functions multiplies change degree resulting polynomial corresponding zeros simply reflected respect unit circle illustrated example multiply product zeros replaced respectively yet modulus product remains identical modulus unit circle particular exp since total functions functions modulus unit circle thus end time waveforms exp whose intensity identical intensity note functions applying pure phase modulation also preserve number elements degree polynomial consequently spectral width resulting time waveforms reason temporal waveforms whose intensity whose spectrum fully contained within bandwidth discussion uniqueness waveforms provided appendix prior concluding section interesting stress highest possible number distinguishable waveforms whose intensity equals actual number waveforms number zeros located unit circle zero falls unit circle easily verified constant phasefactor whose application produce new waveform note also situation arg implications capacity prove following relation information capacity direct detection channel capacity system using coherent detection cases referring capacity per complex degree denote input alphabet channel output alphabet available coherent receiver output alphabet available direct detection receiver denoted since constraints imposed transmitter alphabet contains complex waveforms without restriction alphabet hand contains complex waveforms whereas contains waveforms obtained squaring absolute value waveforms contained communication requires probability prescribed transmission individual waveform effect communications channel noise distortions etc characterized conditional probabilities detecting given element case coherent detection case direct detection given particular element transmitted conditional probabilities denoted respectively mutual information per complex degree freedom transmitter two receivers equals entropy conditional entropy given corresponding equations obtained replacing places capacities obtained maximizing mutual information interestingly equals exactly times within time period also waveform particular intensity number complex degrees freedom product temporal duration signal bandwidth since element alphabet represents time dependent waveform nonetheless order keep notation simple avoid writing leaving time 
dependence implicit additionally order avoid notation denote probability distribution simply similar practice used elements line simplified notation summation interpreted generalized sense addition values eqs respect transmitted distribution order derive take advantage relation first equality follows fact second equality follows relations last inequality true take functional values given limit large reduces note expressions hold distribution transmitted alphabet means modulation format information per complex degree freedom extracted using direct detection receiver one bit less information per channel use extracted coherent detection set distribution maximizes arrive mutual information corresponds distribution attained clearly hence nonetheless remains smaller equal follows rightside inequality concludes proof finally note capacity per complex degree freedom evaluated sec identical spectral efficiency commonly used term context fiber communications hence spectral efficiency direct detection system smaller system using coherent detection order see two exactly note bandwidth optical signal well number complex degrees freedom transmitted per second xtension onlinear systems non additive noise communications often affected nonlinear propagation phenomena taking place optical fibers effect distort signal also cause nonlinear interaction signal noise case noise longer modeled additive standpoint current study difficulty imposed situation impossible relate spectrum occupied signal constant hence definition spectral efficiency becomes problematic nonetheless must stressed analysis received waveforms sec iii explicitly assume anything type noise propagation therefore results respect capacity optically filtered signal fig remain perfectly valid words filtering information per degree freedom contained received complex optical signal one bit larger information contained intensity said must noted claim positioning square filter front receiver optimal practice nonlinear case nonetheless practical situations encountered fiber communications inclusion filter practically unavoidable iscussion corresponds relevant case opposite limit deduced may challenge one physical intuition predicts equality mutual information values corresponding direct coherent detection reason discussion special case interesting spite fact practical importance whatsoever order resolve apparent conundrum note case represents situation complex field time independent particular phase difference two possible fields also time independent implying fields distinguishable provided intensities differ hence artificial situation coherent receiver advantage direct detection receiver therefore capacities identical another curious point related assumption periodicity convenient choice arriving result sec since band limited spectrum contained within written sinc noted earlier sinc sin impose requirement end bandlimited nonetheless number waveforms whose intensity equals remains order see consider time interval contains interval center assume also tails various sinc functions decay extent signal within extended periodically without introducing bandwidth broadening may apply reasoning sec iii signal interval according number equal intensity waveforms power number zeros coincide unit circle evidently number zeros least zeros fall unit circle zeros times outside range correspond zeros exp unit circle finally important stress consequences definition direct detection requires incoming optical signal detected single per polarization without manipulation 
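One compact, heuristic way to see where the one-bit penalty comes from is to combine the chain rule of mutual information with the waveform-counting result of the previous subsection. The notation below (X for the transmitted waveform, E for the field available to a coherent receiver up to a constant phase, |E|^2 for the intensity, N for the number of complex degrees of freedom per period) is a restatement, not the paper's exact derivation:

```latex
\begin{align}
I(X;E) &= I\bigl(X;|E|^2\bigr) + I\bigl(X;E \,\big|\, |E|^2\bigr)
  && \text{(chain rule; $|E|^2$ is a function of $E$)}\\
I\bigl(X;E \,\big|\, |E|^2\bigr) &\le H\bigl(E \,\big|\, |E|^2\bigr)
  \le \log_2 2^{\,N-1} = N-1 \ \text{bits}
  && \text{(at most $2^{N-1}$ distinguishable fields share one intensity)}\\
\Rightarrow\quad
\frac{I\bigl(X;|E|^2\bigr)}{N} &\ge \frac{I(X;E)}{N} - \frac{N-1}{N}
  > \frac{I(X;E)}{N} - 1 \ \text{bit per complex degree of freedom.}
\end{align}
```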
signal prior definition excludes use local oscillator coherent detection also selfcoherent schemes ones proposed schemes kind considered acknowledgement mecozzi acknowledges financial support italian government project incipict shtaif acknowledges financial support israel science foundation grant ppendix discussion sec iii involved statement number distinguishable complex waveforms characterized bandwidth period greater one justification claim functions produce pure phase modulation increase order polynomial hence increase bandwidth form one zeros constant order functions change intensity waveform one must amplitude function transfer function unit circle means functions applied without changing neither order polynomial intensity waveform functions specified indeed number function combinations exceed also present alternative proof based uniqueness functions given periodic waveform shown reflecting zeros respect unit circle described sec one obtains different waveform intensity bandwidth timeperiod look arbitrary given waveform exp specified characteristics identify zeros chose reflect zeros inside thep unit circle thereby producing new waveform exp intensity different phase since spectrum contained since zeros outside unit circle belongs special class functions famously known functions one well known properties functions immaterial constant requirement manipulation signal prior photodetection replaced requirement manipulation filtering dispersion applied prior reason pass filtering also done transmitter hence affect assumption work phase uniquely determined intensity means hilbert transform namely log designates hilbert transform unknown constant since waveforms differing constant phase indistinguishable definition see sec conclude minimum phase function corresponds given intensity profile unique uniqueness function implies waveform set distinguishable equal intensity waveforms periodic obtained waveform set means functions form given whose effect reflect zeros waveforms acts upon different waveforms set would produced different minimum phase functions therefore total number distinguishable waveforms set exceed eferences randel breyer lee walewski advanced modulation schemes optical communications ieee sel topics quantum electron takahara tanaka nishihara kai tao rasmussen discrete optical access networks optical fiber communication conference osa technical digest online optical society america paper weiss yeredor shtaif iterative symbol recovery power efficient biased optical ofdm systems ieee lightwave technol lowery armstrong multiplexing dispersion compensation optical systems opt express schmidt lowery armstrong experimental demonstrations electronic dispersion compensation transmission using optical ofdm lightwave technol che chen shieh spectrally efficient optical transmission based stokes vector direct detection opt express randel pilori chandrasekhar raybon winzer transmission ssmf using modulation novel scheme proc european conference optical communications valencia spain paper schuster randel bunge lee breyer spinnler petermann spectrally efficient compatible modulation ofdm transmission direct detection ieee photon technol letters mecozzi antonelli shtaif coherent receiver optica antonelli mecozzi shtaif pam transceiver optical fiber communication conference osa technical digest online optical society america paper chen antonelli chandrasekhar raybon sinsky mecozzi shtaif winzer singlepolarization transmission standard singlemode fiber using detection optical fiber communication 
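For completeness, the minimum-phase relation invoked at the end of the appendix can be written out explicitly. The sign in front of the Hilbert transform depends on the Fourier-transform convention adopted, so the following should be read as a schematic statement rather than the paper's exact equation:

```latex
% If E(t) is minimum phase (all zeros of its polynomial lie on one side of the
% unit circle), then writing E(t) = |E(t)|\, e^{i\varphi(t)} one has
\varphi(t) \;=\; \mathcal{H}\Bigl\{\tfrac{1}{2}\ln |E(t)|^{2}\Bigr\} \;+\; \varphi_0 ,
% where \mathcal{H} is the (periodic) Hilbert transform and \varphi_0 is an
% arbitrary constant phase, immaterial under the distinguishability definition.
```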
conference osa technical digest online optical society america post deadline paper erkilinc shi sillekens galdino thomsen bayvel killey ssbi mitigation scheme transmission electronic dispersion compensation lightwave technol mecozzi shtaif capacity intensity modulated systems using optical amplifiers ieee photon technol lett lapidoth phase noise channels high snr proc ieee information theory workshop itw bangalore india hranilovic kschischang capacity bounds optical intensity channels corrupted gaussian noise ieee transactions information theory katz shamai distribution noncoherent partially coherent awgn channels ieee transactions information theory lapidoth moser wigger capacity freespace optical intensity channels ieee transactions information theory gilbert increased information rate oversampling ieee transactions information theory shannon mathematical theory communications bell system technical journal july october xiang liu chandrasekhar andreas leven digital detection opt express shechtman eldar cohen chapman miao segev phase retrieval application optical imaging contemporary overview ieee signal processing magazine gang wang georgios giannakis yonina eldar solving systems random quadratic equations via truncated amplitude flow available https alan oppenheim ronald schafer signal processing upper saddle river signal processing series | 7 |
extracting three dimensional surface model human kidney visible human data set using free software kirana kumara centre product design manufacturing indian institute science bangalore india kiranakumarap corresponding author phone fax abstract three dimensional digital model representative human kidney needed surgical simulator capable simulating laparoscopic surgery involving kidney buying three dimensional computer model representative human kidney reconstructing human kidney image sequence using commercial software involve sometimes significant amount money paper author shown one obtain three dimensional surface model human kidney making use images visible human data set free software packages imagej meshlab particular images visible human data set software packages used cost anything hence practice extracting geometry representative human kidney free illustrated present work could free alternative use expensive commercial software purchase digital model keywords visible human data set kidney surface model free introduction laparoscopic surgery often substitute traditional open surgery human kidney organ operated upon choosing laparoscopic surgery open surgery reduces trauma shortens recovery time patient since laparoscopic surgery needs highly skilled surgeons preferable use surgical simulator training evaluating surgeons surgical simulator simulate laparoscopic surgery human kidney needs virtual kidney computer digital three dimensional model representative human kidney currently mainly two approaches practiced obtain geometry representative human kidney first approach buy readily available model human kidney online store second approach use commercial software packages mimics amira reconstruct geometry kidney two dimensional image sequence one see approaches cost sometimes significant amount money present work shows possible obtain three dimensional surface model representative human kidney completely free present approach make use free software packages extract geometry human kidney images visible human data set vhd also known visible human project image data set visible human project data sets free software packages used imagej meshlab one note images vhd may downloaded free obtaining free license national library medicine nlm part national institutes health nih vhd part ambitious visible human project vhp approaches similar approach presented present paper may found present author previous works although free software packages used works ones used present work images vhd images used downloaded images longer accessible images downloadable sometime back also discuss reconstruction pig liver present work deals reconstruction human kidney upon conducting literature review one see authors used vhd together commercial software packages also authors used images sources vhd used commercial software packages perform reconstruction biological organs also authors used free software packages extract geometry biological organs present author could find source literature three free software packages imagej meshlab used obtain surface model human kidney images vhd practice extracting geometry representative human kidney free presented present work could free alternative use expensive commercial software purchase digital model material method present work images vhd three free open source software packages imagej meshlab form material far method concerned three software packages used reconstruct models human kidney images vhd vhd contains mri cryosection images work normal images visible human 
male female used present work uses images png format since format recommended vhp file size images small images good enough reconstructing model whole kidney inner finer details kidney present reconstructed model represents outer surface kidney one easily identify human kidney images vhd present work imagej used form image stack contains kidney version used segmentation reconstruction correct scale meshlab used control level detail reconstructed model also serves tool smoothen model reduce file size method explained bit detail following subsections using imagej form image stack images visible human male female available head toe images one identify images belong kidney upon viewing individual images imagej upon consulting one conclude visible human male left right kidneys contained images images total similarly visible human female left right kidneys contained images images total images male images female copied two separate empty folders form image stack male select menu item file import image browse location folder containing images select first image folder follow prompts default options images displayed imagej image stack select menu item file save raw save image stack format name given similar procedure may followed obtain image stack female using perform segmentation reconstruction segmentation reconstruction correct scale hence header information images image stack essential vhd contains header information images database upon going header files images male one note following header information identical images image matrix size image matrix size image dimension image dimension image pixel size image pixel size screen format bit spacing scans similarly following header information identical female images image matrix size image matrix size image dimension image dimension image pixel size image pixel size screen format bit spacing scans method reconstructing left kidney male explained detail illustrations method may employed reconstruct right kidney male left right kidneys female select menu item file open greyscale browse location image stack male follow prompts supply header information noted first paragraph subsection missing header information supplied image stack male image dimensions voxel spacing voxel representation bit unsigned header information supplied image stack displayed one browse images image stack illustration purposes image image image stack vhd shown figure figure respectively also left right kidneys identified figure figure making use illustrations right kidney left kidney figure image image stack right kidney left kidney figure image image stack task segmentation select polygon tool iris toolbox manual segmentation select continuous radio button polygon tool click drag mouse cursor along edge left kidney seen axial view window carefully draws contour edge left kidney right click image select accept button create segmentation image display process repeated images image stack contain pixels belong left kidney illustration purposes image image image stack segmentation shown figure figure respectively segmented left kidney figure image image stack segmentation segmented left kidney figure image image stack segmentation segmentation reconstruction carried accomplished menu item segmentation save following prompts browsing location reconstructed model stored giving name format file represents reconstructed model path complete path file name reconstruction left kidney visible human male similar process may followed reconstruct right kidney visible human male left right 
kidneys visible human female using meshlab reduce total number faces describing model model kidney obtained use typically large size typically described large number surface triangles meshlab could helpful reducing total number surface triangles needed describe model satisfactorily also serves tool smoothen reconstructed geometry using smoothing features provided meshlab may necessary scale reconstructed models correct dimensions original dimensions strictly retained meshlab also improve triangle quality surface triangles model also reduce file size models kidney undergoing processing meshlab shown next section results reconstructed left kidney male undergoing processing meshlab shown figure similarly reconstructed right kidney male undergoing processing meshlab shown figure reconstructed left kidney female shown figure reconstructed right kidney female shown figure four models made surface triangles obtaining four models job meshlab smoothen models reconstructed reduce total number surface triangles figure reconstructed left kidney male figure reconstructed right kidney male figure reconstructed left kidney female figure reconstructed right kidney female discussion work model human kidney extracted images vhd using free software packages free software packages used imagej itksnap meshlab organs reconstructed left kidney visible human male right kidney visible human male left kidney visible human female right kidney visible human female four models stl format use free software packages together images may obtained free done present work makes possible obtain geometry representative human kidney completely free buying model human kidney using commercial software package extract models image sequences cost sometimes significant amount money also present approach user control finely geometry described using free software package meshlab since meshlab improve quality surface mesh describes reconstructed model reconstructed model undergone processing meshlab used finite element analysis converting surface model solid model using software packages like rhinoceros also method used extract geometry kidney illustrated present work may possibly used extract whole biological organs vhd may noted method given obtain models human kidney need followed rigidly good read documentation software packages used one experiment various options provided software packages instead rigidly following method illustrated work example instead tracing boundary kidney images mouse pointer paintbrush tool provided tried carry segmentation itksnap also provides tool segmentation limitations present work uses images although found sufficient obtain geometry whole kidney whenever reconstructed geometry include finer details kidney whenever organ extracted vhd possibility types images mri images suited cases also multiple software packages need downloaded installed used future work extract biological organs vhd using free software packages aim reconstruct biological organs inner details obtaining outer surface organs use types images like mri cryosection images vhd need conclusion possible obtain surface model representative human kidney images vhd using free software packages free software packages needed imagej meshlab practice extracting geometry representative human kidney completely free illustrated present work could free alternative use expensive commercial software packages purchase digital model acknowledgements author grateful robotics lab department mechanical engineering centre product design manufacturing indian 
institute science bangalore india providing necessary infrastructure carry work author acknowledges ashitava ghosal robotics lab department mechanical engineering centre product design manufacturing indian institute science bangalore india providing images visible human data set vhd author acknowledges national library medicine nlm visible human project vhp providing free access visible human data set vhd ashitava ghosal visible human data set vhd anatomical data set developed contract national library medicine nlm departments cellular structural biology radiology university colorado school medicine references jay bishoff louis kavoussi online laparoscopic surgery kidney available http accessed july issenberg mcgaghie hart mayer felner petrusa waugh brown safford gessner gordon ewy simulation technology health care professional skills training assessment journal american medical association http accessed july http accessed july http accessed july http accessed july http accessed july http accessed july rasband imagej national institutes health bethesda maryland usa http abramoff magelhaes ram image processing imagej biophotonics international volume issue http accessed july paul yushkevich joseph piven heather cody hazlett rachel gimpel smith sean james gee guido gerig active contour segmentation anatomical structures significantly improved efficiency reliability neuroimage http accessed july meshlab visual computing lab isti cnr http accessed july http accessed july http accessed july http accessed july kirana kumara ashitava ghosal procedure reconstruction biological organs image sequences proceedings beats international conference biomedical engineering assistive technologies beats ambedkar national institute technology jalandhar india kirana kumara online reconstructing solid model scanned images biological organs finite element simulation available http accessed july http accessed july aimee sergovich marjorie johnson timothy wilson explorable threedimensional digital model female pelvis pelvic contents perineum anatomical education anatomical sciences education dong sun shin jin seo park shin min suk chung surface models male urogenital organs built visible korean using popular software anatomy cell biology amy elizabeth kerdok characterizing nonlinear mechanical response liver surgical manipulation thesis division engineering applied sciences harvard university lou shu wei liu zhen mei zhao pheng ann heng chun tang zheng ping yong ming xie yim pan chui segmentation reconstruction hepatic veins intrahepatic portal vein based coronal sectional anatomic dataset surgical radiologic anatomy chen zhang xiong tan yang yang dong reconstruction digitized human liver based chinese visible human chinese medical journal gao reconstruction liver slice images based mitk framework international conference bioinformatics biomedical engineering icbbe doi http accessed july henry gray anatomy human body philadelphia lea febiger | 5 |
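The kidney-reconstruction workflow above is driven entirely through the GUIs of ImageJ, ITK-SNAP and MeshLab. As a rough scripted counterpart (not part of the paper), the step from a binary segmentation volume to an STL surface can be sketched with scikit-image's marching cubes; the function name, the use of scikit-image, and the spacing values in the usage line are my own assumptions — the real voxel spacing must come from the VHD header files discussed above, and the resulting mesh would still need the smoothing and decimation pass that MeshLab provides.

```python
import numpy as np
from skimage import measure

def mask_to_stl(mask, spacing, out_path):
    """Convert a binary segmentation volume (slices x rows x cols) into an
    ASCII STL surface, analogous to the segmentation-export step described
    above. `spacing` is the (slice, row, col) voxel size from the VHD headers."""
    verts, faces, _, _ = measure.marching_cubes(
        mask.astype(np.uint8), level=0.5, spacing=spacing)
    with open(out_path, "w") as f:
        f.write("solid kidney\n")
        for tri in faces:
            v0, v1, v2 = verts[tri]
            n = np.cross(v1 - v0, v2 - v0)
            length = np.linalg.norm(n)
            n = n / length if length > 0 else n
            f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
            for v in (v0, v1, v2):
                f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
            f.write("    endloop\n  endfacet\n")
        f.write("endsolid kidney\n")

# usage with placeholder spacing values (take the real ones from the VHD headers):
# mask_to_stl(left_kidney_mask, (1.0, 0.33, 0.33), "left_kidney.stl")
```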
face rings cycles associahedra standard young tableaux aug anton dochtermann abstract show ideal free resolution supported simplicial associahedron resolution minimal case betti numbers strictly smaller show fact betti numbers bijection number standard young tableaux shape complements fact number faces given number standard young tableaux super shape bijective proof result first provided stanley application discrete morse theory yields cellular resolution show minimal first syzygy furthermore exhibit simple involution set associahedron tableaux fixed points given betti tableaux suggesting morse matching particular poset structure objects introduction paper study intriguing connections basic objects commutative algebra combinatorics arbitrary field let denote polynomial ring variables let denote edge ideal complement definition ideal generated degree monomials corresponding diagonals one also realize ideal cycle thought simplicial complex figure ideals course simple algebraic objects homological properties one verify gorenstein ring dimension hence projective dimension fact minimal free resolution described explicitly cellular realizations provided biermann recently sturgeon date august wish investigate combinatorics involved resolutions original interest cellular resolutions came fact ideal almost linear resolution sense nonzero entries differentials minimal resolution linear forms except last syzygy nonzero entries degree recent work combinatorial commutative algebra seen considerable interest cellular resolutions monomial binomial ideals see example almost cases ideals consideration linear resolutions seek extend constructions construction cellular resolution one must construct faces labeled monomials generate ideal case well known geometric object whose vertices labeled diagonals namely simplicial associahedron definition simplicial complex vertex set given diagonals faces given collections diagonals facets triangulations catalan number many well known spherical fact realized boundary convex polytope addition natural way associate monomial face first part paper show labeled facial structure considered polytope single interior cell encodes syzygies theorem natural monomial labeling complex supports free resolution ideal resolution supported associahedron minimal particular case faces monomial labeling completely understood closed form written fact number faces equal number standard young tableaux shape bijective proof first provided stanley since resolution supported know provides upper bound betti numbers equality case second part paper show betti numbers given standard young tableaux set subpartitions involved stanley bijection theorem total betti numbers module given number standard young tableaux shape bijection along application hook formula leads closed form expression betti numbers addition fact partition conjugate provides nice combinatorial interpretation palindromic property betti numbers gorenstein ring fact theory identify betti numbers certain faces suggests may possible collapse away faces obtain minimal resolution employing algebraic version morse theory due batzies welker indeed certain geometric properties subdivision along almost linearity imply certain faces must matched away able write morse matching involving edges number unmatched critical cells precisely corresponding first syzygy module see proposition leads minimal resolutions cases addition identification betti numbers faces standard young tableaux leads consider partial matching set associahedron tableaux 
unmatched elements correspond betti numbers hope would import poset structure face poset extend matching morse matching trouble last step stanley bijection give explicit labeling faces standard young tableaux choices involved bijection recursively defined however define simple partial matching set standard young tableaux shape unmatched elements naturally thought standard young tableaux shape deleting largest entries see proposition suggests poset structure set standard young tableax extends covering relation rest paper organized follows begin section basics regarding commutative algebra involved study section discuss associahedra role resolutions turn standard young tableaux section establish results regarding betti numbers section discuss applications discrete morse theory related matchings stand young tableaux end open questions commutative algebra let denote ideal definition ideal generated degree monomials corresponding diagonals interested combinatorial interpretations certain homological invariants particular combinatorial structure minimal free resolution recall free resolution exact sequence free differential maps graded resolution minimal minimum among resolutions case called graded betti numbers also case number length minimal resolution called projective dimension main tool calculating betti numbers hochster formula see example gives formula betti numbers ring associated simplicial complex theorem hochster formula simplicial complex vertex set let denote ring betti numbers given dimk denotes simplicial complex induced vertex set cellular resolution monomial labeling faces algebraic chain complex computing cellular homology supports resolution refer section details precise definitions next collect easy observations regarding betti numbers since ideal triangulated sphere see gorenstein krull dimension formula implies projective dimension says whenever easy application hochster formula also implies minimal resolution linear last nonzero term mean also sense ideals almost linear resolution mentioned introduction convention since one value without loss generality sometimes drop use denote betti numbers asssociahedra let denote dual associahedron simplicial complex whose vertices given diagonals labeled regular facets given triangulations collections diagonals intersect interior well known homeomorphic sphere fact polytopal several embeddings often dual simple polytope described throughout literature see good account history use denote simplicial polytope including interior wish describe monomial labeling faces recall vertex corresponds diagonal simply label vertex monomial label faces least common multiple vertices contained face wish show simple labeling associahedron supports resolution figure complex monomial labeling partially indicated let first clarify terms simplify notation associate monomial xinn vector freely move notations define labeled polyhedral complex polyhedral complex together assignment face max labeled polyhedral complex consider ideal generated monomials corresponding vertices usual identify element exponent vector monomial topological space underlying chosen orientation associated chain complex spaces computes cellular homology since monomial labels cells homogenize differentials respect basis way becomes complex free modules polynomial ring say polyhedral complex supports resolution ideal fact graded free resolution details examples cellular resolutions refer let denote subcomplex consisting faces componentwise following criteria also lemma let labeled 
polyhedral complex let denote associated monomial ideal generated vertices supports cellular resolution complex empty futhermore resolution minimal pair faces criteria place establish following theorem associahedron monomial labeling described supports cellular resolution edge ideal proof let denote simplicial associahedron monomial labeling construction vertices correspond generators show supports resolution according lemma enough show subcomplex let let denote subcomplex consisting faces monomial labeling divides usual thinking exponent vector monomial particular face element every diagonal claim contractible hence note since squarefree may assume entries hence identify subset also convex polytope hence contractible fewer nonzero entries empty without loss generality may assume let largest integer since see diagonal vertex simplicial complex fact element every facet since diagonal picked elements intersects conclude cone hence contractible one check resolution fact minimal longer case particular faces monomial label standard young tableaux turns number faces associahedron entries face vector given number standard young tableau syt certain shapes recall partition standard young tableaux shape filling young diagram distinct entries rows columns increasing see example let denote number ways choose diagonals convex two diagonals intersect interior see precisely number faces polytope result attributed cayley according asserts using hook length formula one see number also number standard young tableaux shape usual denotes sequence entries value fact apparently first observed hara zelevinsky unpublished simple bijection given stanley example take obtain catalan number example shape given standard young tableaux correspond diagonals turns betti numbers rings also counted number standard young tableaux certain related sub shapes establish result employ hochster formula theorem recall ring recovered ring thought simplicial complex note nonzero contribution equation comes reduced homology number connected components induced complex minus one let denote betti numbers ring equation implies unless another application equation gives cases following result remaining theorem betti numbers given number standard young tableau shape proof establish equality equation showing sides equation satisfy recursion betti numbers left hand side use hochster formula via equation involves subcomplexes given subsets computation size first suppose chosen recover contribution equation homology induced subsets size cycle vertices namely however get additional contribution given isolated point instances next suppose recover contribution homology induced subsets size cycle quantity given case additional contribution coming subsets including since subsets disconnected putting together recovering equation next consider right hand side equation namely number standard young tableaux shape recall fillings involve picking entries one set entry first row necessarily last column recover fillings standard young tableaux shape entry last row recover fillings standard young tableaux shape counts miss standard tableaux entry second row necessarily second column case must entry first row first column free choose increasing sequence length fill remaining entries first row rest entries determined choices adding three counts gives desired recursion equation next check initial conditions hochster formula gives one check see example precisely standard young tableau shape conjugate shape arbitrary given number generators hand standard young 
tableau shape pair occupy second row except hence number fillings also given similarly arbitrary hochster formula implies betti numbers given choices vertices corresponding complements diagonals since remaining pair vertices disconnected hence also follows fact ring gorenstein therefore palindromic sequence betti numbers terms tableaux see shape conjugate hence shapes number fillings remark application hook length formula gives explicit value betti numbers version paper posted arxiv pointed author formula previously established combinatorial proof given remark seen rings gorenstein hence betti numbers palindromic sense realization betti numbers terms standard young tableaux theorem provides nice combinatorial interpretation property partition conjugate partition hence number fillings example resolution represented homological degree basis free module given standard young tableaux indicated shape note conjugate discrete morse theory matchings seen associahedron monomial labeling described supports resolution ideal also seen resolution minimal particular labeling produces distinct faces monomial labeling fact increases resolution becomes minimal sense number facets catalan number order dominates dimension second highest syzygy module order example face numbers versus betti numbers indicated refers number faces associahedron morse matchings first syzygies batzies welker others see developed theory algebraic morse theory allows one match faces labeled complex order produce resolutions become closer minimal usual combinatorial description theory one must match elements face poset labeled complex monomial labeling matching must also satisfy certain acyclic condition described refer details closer analysis monomial labeling reveals certain faces must matched away minimal resolution sense associated monomial wrong degree particular since know almost linear resolution described must case minimal cellular resolution face labeled monomial degree labeling property monomial associated face given product variables involved choice diagonals particular properly labeled face corresponds subdivision diagonals involving precisely vertices motivates following definition suppose subdivision mean collection diagonals say proper set endpoints diagonals exactly elements vertices say superproper uses vertices subproper uses less figure three superproper subdivisions two subproper subdivisions subdivisions proper fact explicitly describe partial morse matching face poset perfect rank superproper subdivision simply pair disjoint diagonals say face poset match otherwise subproper subdivision inscribed triangle say diagonals match face proper recall hasse diagram face poset graph vertices given faces edges given cover relations easy association matching hasse diagram face poset clearly algebraic sense matched faces monomial labeling typical think hasse diagram directed graph orientation matched edge pointing increasing dimension unmatched edges pointing collection faces involved matching called critical cells form subposet original poset main theorem algebraic discrete morse theory says acyclic algebraic matching hasse diagram cellular resolution critical cells form also supports cellular resolution way one obtains resolution closer minimal case following result proposition matching monomial labeled face poset described acyclic furthermore number unmatched critical edges given proof first make simple observation corresponding subproper subdivision words inscribed triangle must path length proper subdivision similarly 
path length upward oriented edge must case inscribed triangle vertex set implies cycles oriented face poset involving proper subdivisions paths length next suppose upward oriented edge face poset consists two disjoint diagonals superproper subdivision according matching must case path length form cycle face poset must downward edge according matching must case path length hence observation previous paragraph implies cycles exist conclude matching acyclic next count unmatched edges first observe number proper subdivisions diagonals given see note diagonals involved subdivision must form path length designate middle vertex path choices choices remaining two vertices next claim number subproper subdivisions diagonals necessarily forming inscribed triangle given see first count inscribed triangles ordered vertex set free choose first vertex among nodes cycle second vertex two cases choose among two vertices distance left choices choose among vertices distance choices left choices total inscribed triangles ordered vertex set dividing forget ordering gives desired count described match superproper subdivisions proper subdivision match subproper subdivisions proper subdivision hence matching number critical edges given precisely see remark completes proof hence simple matching leaves precisely number critical require rank first free module resulting cellular resolution equal rank first syzygy module example matching fact leads minimal resolution case three superproper subdivisions namely two subproper subdivisions namely figure monomial labeled five pairs faces matched shaded faces improper subdivisions resulting complex right supports minimal resolution remark procedure described extended case leave details reader point case superproper subdivisions pairs disjoint diagonals corresponding edges match subproper subdivisions inscribed triangles corresponding seven get matched edge superproper subdivisions forests consisting three edges two components corresponding seven get matched subproper subdivisions inscribed triangles pendant edge corresponding fourteen get matched resulting edges faces faces desired unfortunately know extend matching procedure general see next section comments regarding involution associahedron tableaux recall faces associahedron counted standard young tableaux certain shapes betti numbers counted standard young tableaux certain subshapes motivated discrete morse theory leads ask whether find matching set associahedron tableaux unmatched elements correspond betti numbers matching property two matched tableaux differ cardinality one let emphasize since poset structure elements pointing searching orse matching let first fix notation definition fixed call collection standard young tableaux shape associahedron tableaux denoted standard young tableaux syzygy tableaux denoted shape let note element boxes whereas element boxes associahedron tableux largest entries second row positions naturally becomes syzygy tableau removing boxes particular say particular associahedron tableau restrict syzygy tableaux way natural inclusion example associahedron tableau left restricts syzygy tableau whereas associahedron tableau right next describe involution set fixed elements precisely elements restrict standard young tableau use denote number boxes underlying partition proposition exists involution set fixed point set precisely set tableaux restrict furthermore proof suppose associahedron tableau restricts syzygy tableau set otherwise element second row let largest element property must last 
element first row else bottom element first column latter case bottom element first column bring element first row add element end second row defines former case last element first row obtain bringing element bottom first column deleting last element second row must clear example example involution matching associahedron tableau shape one shape given following questions end number questions arise study seen section number dissections using diagonals well understood given number standard young tableaux shape context enumerating betti numbers ideal interested subdivisions involved fixed number vertices define number ways choose diagonals convex set endpoints consists precisely vertices question nice formula related standard young tableaux shape note take varying gives refinement catalan numbers far know appeared elsewhere first refinements related question would consider subdivisions collection diagonals forms connected tree since likely relevant property context syzygies happens proper subdivisions correspond collections diagonals form tree however exist proper subdivisions trees example take diagonals form triangle vertices along one disconnected diagonal total using vertices question many dissections diagonals property set diagonals forms tree quest morse matching monomial labeled face poset associahedron unable employ stanley bijection faces standard young tableaux mentioned difficulty arises bijection given recursively defined involves certain choices however fact face poset labeled standard young tableaux suggests might meaningful poset structure set standard young tableaux least set associahedron tableaux hope would poset structure extends partial order given involution described proof proposition hence poset graded number boxes underlying partition restrict young lattice one forgets fillings refer example example cover relation two standard young tableaux underlying partitions related young lattice question exist meaningful poset structure set standard young tableaux consistent conditions described finally see figure minimal resolution supported polytope mentioned construction bit hoc lead following question ideal minimal cellular resolution supported necessarily polytope work direction along generalizations currently pursued linusson acknowledgements thank ken baker assistance figures alex jakob jonsson michelle wachs helpful conversations alex first realized potential connection standard young tableaux inputting betti numbers oeis years ago thanks also anonymous referee careful reading references batzies welker discrete morse theory cellular resolutions reine angew math bayer sturmfels cellular resolutions monomial modules reine angew math biermann cellular structure minimal resolution edge ideal complement submitted braun browder klee cellular resolutions ideals defined nondegenerate simplicial homomorphisms israel math bruns hibi partially ordered sets pure resolutions european combin ceballos santos ziegler many realizations associahedron combinatorica choi kim combinatorial proof formula betti numbers stacked polytope electron research paper dochtermann cellular resolutions cointerval ideals math dochtermann mohammadi cellular resolutions mapping cones combin theory ser linusson personal communication francisco mermin schweig catalan numbers binary trees pointed pseudotriangulations european combin goodarzi cellular structure resolution algebr comb mermin resolution cellular commut algebra miller sturmfels combinatorial commutative algebra graduate texts mathematics vol 
springer new york nagel reiner betti numbers monomial ideals shifted skew shapes electron combin special volume honor anders research paper encyclopedia integer sequences published electronically http sinefakopoulos borel fixed ideals generated one degree algebra morse theory algebraic viewpoint trans amer math soc stanley polygon dissections standard young tableaux combin theory ser sturgeon personal communication | 0 |
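Many of the counts in the associahedron paper above (face numbers and graded Betti numbers) are stated as numbers of standard Young tableaux and evaluated with the hook length formula. The small script below (my own, not from the paper) implements that formula; the sample checks use the classical fact that two-row rectangular shapes are counted by Catalan numbers, which is the special case the paper invokes for triangulations of a polygon.

```python
from math import factorial

def num_syt(shape):
    """Number of standard Young tableaux of the partition `shape`
    (a weakly decreasing tuple of row lengths), via the hook length formula
    f^lambda = n! / (product of hook lengths)."""
    n = sum(shape)
    conj = [sum(1 for r in shape if r > c) for c in range(shape[0])]  # conjugate partition
    hooks = 1
    for i, row in enumerate(shape):
        for j in range(row):
            arm = row - j - 1          # boxes to the right in the same row
            leg = conj[j] - i - 1      # boxes below in the same column
            hooks *= arm + leg + 1
    return factorial(n) // hooks

# two-row rectangles give Catalan numbers: C_2 = 2, C_3 = 5 (triangulations of a pentagon)
assert num_syt((2, 2)) == 2
assert num_syt((3, 3)) == 5
```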
apr dsp implementation direct adaptive feedfoward control algorithm rejecting repeatable runout hard disk drives jinwen pan prateek shah roberto horowitz department mechanical engineering department mechanical engineering department mechanical engineering university california berkeley university california berkeley university california berkeley berkeley california berkeley california berkeley california email jinwen email prateekshah email horowitz abstract direct adaptive feedforward control method tracking repeatable runout rro bit patterned media recording bpmr hard disk drives hdd proposed technique estimates system parameters residual rro simultaneously constructs feedforward signal based known regressor improved version proposed algorithm avoid matrix inversion reduce computation complexity given results matlab simulation digital signal processor dsp implementation provided verify effectiveness proposed algorithm briefly listed rro profile unknown rro frequency spectrum spread beyond bandwidth servo system therefore amplified feedback controller rro spectrum contains many harmonics spindle frequency harmonics attenuated increases computational burden controller rro profile changing track track varying radial direction hdd servo dynamics changes drive drive temperature remainder paper organized follows section presents direct adaptive feedforward control algorithm section shows real time dsp implementation results introduction data bits ideally written concentric circular tracks conventional hdds use magnetic disks continuous media process different bit patterned media recording since data written tracks predetermined shapes created lithography disk shown fig trajectories required followed servo system bpmr servo tracks characterized servo sectors written disk deviation servo track ideal circular shape called rro therefore servo controller bpmr follow rro unknown time design result servo control methodologies used conventional drives applied bpmr directly prior works proposed indirect adaptive control methods mechatronic devices compensate unknown disturbances rro dynamics mismatches paper propose direct adaptive control method address challenges specific bpmr servo tracks conventional media data tracks media figure servo track dotted blue data track solid red conventional media control design architecture considered servo control system shown fig feedforward controller designed hdd without loss generality chose vcm example transfer function vcm input pes submitted asme conference information storage processing systems copyright asme estiwhere mate since unknown vector nonzero vector formed based magnitude phase number frequencies cancel updating law figure control architecture exogenous excitation signal feedforward signal unknown rro known frequencies pes aim design adaptive controller generates order fade frequency contents error signal selective frequencies correspond harmonics spindle frequency harmonics case inverse involves inverting estimated magnitudes might small transition especially initialized zeros case small fluctuation cause large transient error smoothing magnitude phase designed relax transient errors basic direct adaptive feedforward control algorithm summarized table basic direct adaptive feedforward control fig pes written expanded bnb residual error regressor rro known frequencies regressor form pes initialize regressors apply vcm subtract pes determine estimate error update parameters using update matrix compute inverse update using compute table basic direct 
adaptive feedforward control estimation improved direct adaptive feedforward control mentioned earlier computational complexity inverting grows number frequencies increases crucial burden dsp implementation section provide improved version avoid matrix inversion applying swapping lemma ana bnb regressors estimates updating law decreasing gain indicates system residual rro estimated simultaneously feedforward control signal constructed using regressor rro yielding therefore updating law note matrix inverse required improved direct adaptive feedforward control algorithm summarized table first three steps table noted proposed direct adaptive feedforward control algorithm improved version directly extended actuator responsible high frequency rro using instead approximately copyright asme acknowledgment plitude spectrum adaptive controller adaptive controller financial support study provided grant advanced storage technology consortium astc references shahsavari keikha zhang horowitz adaptive repetitive control design online secondary path modeling application media recording magnetics ieee transactions keikha shahsavari horowitz probabilistic approach robust controller design servo system irregular sampling control automation icca ieee international conference ieee kempf messner tomizuka horowitz comparison four repetitive control algorithms ieee control systems magazine shahsavari keikha zhang horowitz repeatable runout following bit patterned media recording asme conference information storage processing systems american society mechanical engineers shahsavari keikha zhang horowitz adaptive repetitive control using modified filteredx lms algorithm asme dynamic systems control conference american society mechanical engineers shahsavari pan horowitz adaptive rejection periodic disturbances acting linear systems unknown dynamics arxiv preprint zhang keikha shahsavari horowitz adaptive mismatch compensation vibratory gyroscopes inertial sensors systems isiss international symposium ieee zhang keikha shahsavari horowitz adaptive mismatch compensation rate integrating vibratory gyroscopes improved convergence rate asme dynamic systems control conference american society mechanical engineers bagherieh shahsavari horowitz online identification system uncertainties using coprime factorizations application hard disk drives asme dynamic systems control conference american society mechanical engineers nic vcm inp figure spectrum comparison inp step step figure feedforward signal vcm construct matrix compute residual error update using compute table improved dreict adaptive feedforward control experiment results conclusion implement two algorithms matlab simulation real time experiment setup hdd simulation rro together nrro modeled real system measurement data since simulation experiment results close experiment results using improved version shown fig rro reduced nrro level simulation well experiments vcm responsible low frequency rro harmonics responsible high frequency rro harmonics result feedforward control signal one disk revolution shown fig vcm consists low frequency contents high frequency components copyright asme | 3 |
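The update laws of the RRO-rejection paper above are summarized only in its tables, and the symbols were lost in this extract, so the sketch below is a generic stand-in rather than the paper's algorithm: a sin/cos regressor at the known RRO harmonics, a gradient-type parameter update driven by the PES with a decreasing gain, and a feedforward command formed as the inner product of regressor and estimate. The function name, the gain schedule, and the offline use of a recorded PES trace are my own assumptions.

```python
import numpy as np

def adaptive_feedforward(pes, harmonics, spindle_hz, fs, gain0=1e-2):
    """Structural sketch of direct adaptive feedforward RRO cancellation
    (illustrative only; not the paper's exact update law)."""
    freqs = spindle_hz * np.asarray(harmonics, dtype=float)
    theta = np.zeros(2 * len(freqs))          # parameter estimates (sin/cos amplitudes)
    u_ff = np.zeros(len(pes))
    for k, e in enumerate(pes):
        t = k / fs
        phi = np.concatenate([np.sin(2 * np.pi * freqs * t),
                              np.cos(2 * np.pi * freqs * t)])
        gain = gain0 / (1.0 + 1e-3 * k)       # decreasing adaptation gain
        theta += gain * e * phi               # gradient-type update driven by the PES
        u_ff[k] = phi @ theta                 # feedforward signal for this sample
    return u_ff, theta
```

In a real drive the PES would be measured in closed loop while the feedforward signal is injected; the open-loop form above only shows the data flow through regressor, estimate, and command.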
author version work posted permission personal use redistribution final publication published proceedings conference principles security trust post available jan information flow control webkit javascript bytecode abhishek vineet deepak christian saarland university germany germany abstract websites today routinely combine javascript multiple sources trusted untrusted hence javascript security paramount importance specific interesting problem information flow control ifc javascript paper develop formalize implement dynamic ifc mechanism javascript engine production web browser specifically safari webkit engine ifc mechanism works level javascript bytecode hence leverages years industrial effort optimizing source bytecode compiler bytecode interpreter track explicit implicit flows observe moderate overhead working bytecode results new challenges including extensive use unstructured control flow bytecode complicates lowering program context taints unstructured exceptions complicate matter need make ifc analysis permissive explain address challenges formally model javascript bytecode semantics instrumentation prove standard property terminationinsensitive present experimental results optimized prototype keywords dynamic information flow control javascript bytecode taint tracking control flow graphs immediate analysis introduction javascript indispensable part modern web websites use computation web applications aggregator websites news portals integrate content various mutually untrusted sources online mailboxes display advertisements components glued together dynamic nature permits easy inclusion external libraries code encourages variety code injection attacks may lead integrity violations confidentiality violations like information stealing possible wherever code loaded directly another web page loading code separate iframes protects main frame policy hinders interaction mashup pages crucially rely guarantee absence attacks information flow control ifc elegant solution problems ensures security even presence untrusted buggy code ifc differs traditional ifc extremely dynamic makes sound static analysis difficult therefore research ifc focused dynamic techniques techniques may grouped four broad categories first one may build custom interpreter source turns extremely slow requires additional code annotations handle control flow like exceptions break continue second could use technique wherein interpreter wrapped monitor nontrivial doable moderate overhead implemented secure sme however sme technique clear generalized beyond handle declassification third variant inline reference monitoring irm might inline taint tracking client code existing security systems irm require subsetting language order prevent dynamic features invalidate monitoring process finally possible instrument runtime system existing engine either interpreter compiler jit monitor program requires adapting respective runtime incurs moderate overhead retains optimizations within runtime resilient subversion attacks work opt last approach instrument production engine track taints dynamically enforce noninterference specifically instrument bytecode interpreter webkit engine used safari browsers major benefit working bytecode interpreter opposed source retain benefits years engineering efforts optimizing production interpreter source bytecode compiler describe key challenges arise dynamic ifc bytecode opposed source present formal model bytecode webkit interpreter instrumentation present correctness theorem list experimental 
results preliminary evaluation optimized prototype running safari work significantly advances ifc main contributions formally model webkit bytecode syntax semantics instrumentation ifc analysis prove far aware first formal model bytecode engine nontrivial task webkit bytecode language large bytecodes built model careful thorough understanding approximately lines actual interpreter unlike prior work interested modeling semantics specified ecmascript standard goal remain faithful production bytecode interpreter formalization based webkit build last build started work using ideas prior work use static analysis immediate restrict overtainting even bytecode pervasive unstructured conditional jumps extend prior work deal exceptions technique covers unstructured control flow including break continue without requiring additional code annotations prior work improves permissiveness make ifc execution permissive propose implement variant check implement complete ifc mechanism webkit observe moderate overheads limitations list limitations work clarify scope although instrumentation covers webkit bytecodes yet instrumented modeled native methods including manipulate document object model dom ongoing work beyond scope paper like prior work sequential theorem covers single invocations interpreter reality reactive interpreter invoked every time event like mouse click handler occurs invocations share state dom expect generalizing reactive require instrumentation beyond already plan dom finally handle considerably engineering effort jit handled inlining ifc mechanism bytecode transformation due lack space several proofs details model omitted paper found technical appendix section related work three classes research closely related work formalization semantics ifc dynamic languages formal models web browsers maffeis present formal semantics entire specification foundation guha present semantics core language models essence argue translated core extends include accessors eval work goes one step formalizes core language production engine webkit generated compiler included webkit recent work bodin presents coq formalization ecmascript edition along extracted executable interpreter formalization english ecmascript specification whereas formalize bytecode implemented real web browser information flow control active area security research widespread use research dynamic techniques ifc regained momentum nonetheless static analyses completely futile guarnieri present static abstract interpretation tracking taints however omnipresent eval construct supported approach take implicit flows account chugh propose staged information flow approach perform static policy checks statically available code generate residual must applied dynamically loaded code approach limited certain constructs excluding dynamic features like dynamic field access construct austin flanagan propose purely dynamic ifc languages like use nsu check handle implicit flows strategy permissive nsu retains build strategy present dynamic ifc bytecode static analysis determine implicit flows precisely even presence control flow like break continue nsu leveraged prevent implicit flows overall ideas dealing unstructured control flow based work contrast paper formalization bytecodes proof correctness implicit flow due exceptions ignored hedin sabelfeld propose dynamic ifc approach language models core features ignore constructs control flow like break continue approach leverages dynamic type system source improve permissiveness subsequent work uses testing 
detects security violations due branches executed injects annotations prevent subsequent runs extension introduces annotations deal control flow approach relies analyzing cfgs require annotations secure sme another approach enforcing noninterference runtime conceptually one executes code security level like low high following constraints high inputs replaced default values low execution low outputs permitted low execution modification semantics forces even unsafe scripts adhere flowfox demonstrates sme context web browsers executing script multiple times prohibitive security lattice multiple levels writes dom considered publicly visible output tainting allows persisting security label dom elements also unclear declassification may integrated sme austin flanagan introduce notion faceted values simulate multiple executions one run keep values every variable corresponding security levels values used computation program proceeds mechanism enforces restricting leak high values low observers browsers work reactively input fed event queue processed time input one event produce output influences input subsequent event bohannon present formalization reactive system compare several definitions reactive bielova extend reactive browser model based sme currently approach supports reactive extend work reactive setting next step finally featherweight firefox presents formal model browser based reactive model resembles bohannon instantiates consumer producer states model actual browser objects like window page cookie store mode connection etc current work entirely focuses formalization engine taint tracking monitor information leaks believe two approaches complement plan integrate model future holistic enforcement mechanism spanning dom browser components background provide brief overview basic concepts dynamic enforcement information flow control ifc dynamic ifc language runtime instrumented carry security label taint every value taint element lattice upper bound security levels entities influenced computation led value simplicity exposition use throughout paper lattice low public high secret partially leaked secret readers may ignore instrumentation works general powerset lattice whose individual elements web domains write value tagged label information flows categorized explicit implicit explicit flows arise result variables assigned others primitive operations instance statement causes explicit flow values explicit flows handled runtime updating label computed value example least upper bound labels operands computation example implicit flows arise control dependencies example program implicit flow final value value iff handle implicit flows dynamic ifc systems maintain label label upper bound labels values influenced control flow thus far last example value label within branch executed final value inherits label also hence label also alone prevent information leaks ends ends since distinguished public attacker program leaks value despite correct propagation implicit taints formally instrumented semantics far fail standard property problem resolved nsu check prohibits assignment variable high recovers adversary observe program termination example program terminates instruction gets stuck due nsu two outcomes deemed observationally equivalent low adversary determine whether program terminated second case hence program deemed secure roughly program two terminating runs program starting heaps heaps look equivalent adversary end heaps like sound dynamic ifc approaches instrumentation renders program cost 
modifying semantics programs leak information design challenges insights solutions implement dynamic ifc widely used webkit engine instrumenting webkit bytecode interpreter webkit bytecode generated compiler goal modify compiler forced make slight changes make compliant instrumentation modification explained section nonetheless almost work limited bytecode interpreter webkit bytecode interpreter rather standard stack machine several additional data structures features like scope chains variable environments prototype chains function objects local variables held registers call stack instrumentation adds label data structures including registers object properties scope chain pointers adds code propagate explicit implicit taints implements permissive variant nsu check label word size currently bits bit represents taint distinct domain like join labels simply bitwise unlike ecmascript specification semantics actual implementation treat scope chains variable environments like ordinary objects consequently model instrument taint propagation data structures separately working bytecode also leads several interesting conceptual implementation issues taint propagation well interesting questions threat model explain section issues quite general apply beyond example combine dynamic analysis bit static analysis handle unstructured control flow exceptions threat model compiler assumptions explain threat model following standard practice adversary may observe values heap generally adversary level lattice observe heap values labels however allow adversary directly observe internal data structures like call stack scope chains consistent actual interfaces browser scripts access proofs must also show internal data structures across two runs get right induction invariants assuming inaccessible adversary allows permissive program execution explain section bytecode interpreter executes shared space browser components assume components leak information side channels copy heap data secret public locations also applies compiler assume compiler functionally correct trivial errors compiler omitting bytecode could result leaky program even source code information leaks ifc works compiler output compiler errors concern formally assume compiler unspecified deterministic function program compile call stack heap assumption also matches compiler works within webkit needs access call stack scope chain optimize generated bytecode however compiler never needs access heap ignore information leaks due side channels like timing challenges solutions ifc known difficult due highly dynamic nature working bytecode instead source code makes ifc harder nonetheless solutions many ifc concerns proposed earlier work also apply instrumentation sometimes slightly modified form example every object fixed parent called prototype looked property exist child lead implicit flows object created high context high field missing present prototype accessed later low context implicit leak high problem avoided analysis way prototype pointer child parent labeled child created label value read parent traversing pointer joined label potential information flow problems whose solutions remain unchanged analysis include implicit leaks function pointers handling eval working bytecode leads interesting insights cases even applicable source code analysis languages poses new challenges discuss challenges insights unstructured control flow cfgs avoid overtainting labels important goal implicit flow tracking determine influence control construct ended control 
flow limited commands straightforward effect control construct ends lexical scope influences control flow leads straightforward upgrading downgrading strategy one maintains stack labels effective top one entering control flow construct like new label equal join labels values construct guard depends previous effective pushed exiting construct label popped unfortunately unclear extend simple strategy control flow constructs exceptions break continue functions occur example consider program break break labeled program leaks value assignment appears guarded indeed upgrading downgrading strategy described ineffective program prior work source code ifc either omits constructs introduces additional classes labels address problems label exceptions label loop containing break continue label function labels restrictive needed code indicated dots example executed irrespective condition first iteration thus need raise checking condition labels programmer annotations support wish modify compiler importantly unstructured control flow serious concern webkit bytecode completely unstructured branches like fact control flow except function calls unstructured bytecode solve problem adopt solution based static analysis generated bytecode maintain control flow graph cfg known bytecodes branch node compute immediate ipd ipd node first instruction definitely executed matter branch taken upgrading downgrading strategy extends arbitrary control flow executing branch node push new label stack along node ipd actually reach ipd pop label authors prove ipd marks end scope operation hence security context operation strategy sound earlier example ipd end loop first break statement assignment fails due nsu check program secure requires dynamic code compilation forced extend cfg compute ipds whenever code either function eval compiled fortunately ipd node cfg lies either function node function earlier latter may happen due exceptions extending cfg affect computation ipds earlier nodes also relies fact code generated eval alter cfg earlier functions call stack actual implementation optimize calculation ipds working explained end solution works forms unstructured control flow including unstructured branches bytecode break continue exceptions source code exceptions synthetic exit nodes maintaining cfg presence exceptions expensive node function catch exception outgoing control flow edge next exception handler means cfg general edges going function depend calling context ipds nodes function must computed every time function called moreover case recursive functions nodes must replicated every call rather expensive ideally would like build function cfg function compiled work would exceptions explain attain goal design every function may throw unhandled exception special synthetic exit node sen placed regular return node function every node whose exception caught within function outgoing edge sen traversed exception thrown semantics sen described correctly transfer control appropriate exception handler eliminate edges cfgs become cfg function computed function compiled never updated implementation build two variants cfg depending whether exception handler call stack improves efficiency explain later control flows sen function returns normally exception thrown handled within function unhandled exception occurred within function sen transfers control caller record whether unhandled exception occurred unhandled exception occurred sen triggers special mechanism searches call stack backward first appropriate exception handler 
transfers control exceptions indistinguishable need find first exception handler importantly pop frame contains first exception handler pop ensures code exception handler ipd executes sen indeed semantics one would expect cfg edges exceptions prevents information leaks function handle possible exception exception handler call stack bytecodes could potentially throw exception sen one successor cfg branching bytecode thus need push according security label condition however push new entry ipd current node ipd top problem solution particular apply dynamic ifc analysis languages exceptions functions optimization ipd current node sen case real ipd outside method already semantics emulate effect exception edges illustration consider following two functions end denotes sen note edge throw throw handled within denotes ipd handler catch function function throw return try catch return clear absence instrumentation invoked two functions together leak value assumed label return value show sen mechanism prevents leak invoking know exception function depending outcome method call either jump exception handler continue based branch push current ipd executing condition push merely update top element control reaches without exception ipd point returns control thus lowered ends return value control reaches unhandled exception point following semantics sen find exception handler catch invoke point exception consequently nsu prevents assignment makes program wish replicate cfg function every time called recursively need method distinguish node corresponding two different recursive calls pushing ipd onto pair pointer current since pointer unique recursive call cfg node paired identifies unique merge point real control flow graph practice even cfg quite dense many bytecodes potentially throw exceptions hence edges avoid overtainting perform crucial optimization exception handler call stack create sen corresponding edges potentially bytecodes safe potentially thrown exception terminate program instantly satisfies ensure exception message visible attacker whether exception handler exists easily tracked using stack booleans mirrors design overlay stack adding extra boolean field entry summary entry quadruple containing security label node intraprocedural cfg pointer boolean value combination sens design allows work intraprocedural cfgs computed function compiled improves efficiency check changes standard nsu check halts program execution whenever attempt made assign variable value high earlier example assuming stores value program execution halted command austin flanagan sequel observe may overly restrictive fact observable effects may overwritten constant immediately propose propagating special taint called instruction halting program tries use value labeled way observable call special taint partially leaked idea called check allows program execution nsu would adopt fact additional permissiveness absolutely essential webkit compiler often generates dead assignments within branches execution would pointlessly halt standard nsu used differ constitutes use value labeled expected treat occurrence guard branch use thus program halted command obtains taint assignment program halted leaks however allow values flow heap consider program program insecure model heap location accessible adversary ends deem program secure assuming value label value particular however definition dynamic analysis virtually impossible enforce adversary access heap outside language writing dynamic analysis determine alternate execution program 
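Pulling together the refinements above, each pc-stack entry is in effect a quadruple, and the push logic joins into the existing top entry instead of duplicating it when the (IPD, call-frame) pair is already on top or when the would-be IPD is the function's synthetic exit node. This is a sketch under the same illustrative names as before (TaintLabel from the earlier sketch; CfgNode and CallFrame left opaque); it is my reading of the flattened text, not a verbatim transcription of the implementation.

#include <vector>

struct CfgNode;
struct CallFrame;

// Refined pc-stack entry: the quadruple described above.
struct ContextEntry {
    TaintLabel label;        // security context contributed by this entry
    const CfgNode* ipd;      // IPD in the intraprocedural CFG, or the function's SEN
    const CallFrame* frame;  // owning call frame, so recursive calls are not confused
    bool handlerOnStack;     // whether some frame on the call stack has an exception handler
};

// Push logic: if the (ipd, frame) pair is already on top, or the would-be IPD
// is the synthetic exit node (whose real IPD lies in a caller and is therefore
// already covered), join the new label into the top entry; otherwise push.
void pushContext(std::vector<ContextEntry>& pc, TaintLabel guard,
                 const CfgNode* ipd, const CallFrame* frame,
                 bool ipdIsSen, bool handlerOnStack) {
    if (!pc.empty() &&
        ((pc.back().ipd == ipd && pc.back().frame == frame) || ipdIsSen)) {
        pc.back().label = pc.back().label.join(guard);
        return;
    }
    TaintLabel ctx = pc.empty() ? guard : guard.join(pc.back().label);
    pc.push_back({ctx, ipd, frame, handlerOnStack});
}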
would written value hence prevent adversary seeing consequently design use modified check call deferred nsu check wherein program halted construct may potentially flow value heap includes branches whose guard contains value assignments whose target heap location whose source however constrain flow values data structures invisible adversary model local registers variable environments design critically relies treating internal data structures differently ordinary objects case instance ecmascript specification formal model ifc formally model webkit bytecode semantics bytecode interpreter instrumentation dynamic ifc prove ins prim dst mov dst src jfalse cond target offset target offset typeof dst src instanceof dst value cprot enter ret result end result call func args res func args dst dst func dst construct func args dst dst dst base prop base prop value direct dst base prop dst base size breaktarget offset dst base size iter target offset base prop getter setter resolve dst prop dst prop skip dst prop dst prop isstrict bool bdst pdst prop dst index skip index skip value scope count target offset throw catch fig instructions insensitive programs executed instrumented interpreter model construction cfg computation ipds standard keep presentation accessible present formal model somewhat abstraction details resolved technical appendix bytecode data structures version webkit model uses total bytecodes instructions model remaining bytecodes redundant perspective formal modeling specializations wrappers bytecodes improve efficiency syntax bytecodes model shown fig bytecode prim abstractly represents primitive binary unary first two arguments operations behave similarly convenience divide bytecodes primitive instructions instructions related objects prototype chains instructions related functions instructions related scope chains instructions related exceptions bytecode form arguments instruction form hvari htypei var variable name type one following bool prop offset register constant integer constant boolean identifier property name jump offset value respectively webkit bytecode organized code blocks code block sequence bytecodes line numbers corresponds instructions function eval statement code block generated function created eval executed instrumentation perform control flow analysis code block created formal model abstractly represent code block cfg written formally cfg directed graph whose nodes bytecodes whose edges represent possible control flows edges cfg also records ipd node ipds computed using algorithm lengauer tarjan cfg created cfg contains uncaught exceptions also create cfg node succ denotes unique successor conditional branching node left right denote successors condition true false respectively bytecode interpreter standard stack machine support features like scope chains prototype chains state machine instrumentation quadruple represents current node executed represents heap represents assume abstract countable set heap locations references objects heap partial map locations objects object may ordinary object containing properties named map labeled values prototype field points parent heap location two labels records object created structure label upper bound pcs influenced fields exist function object ordinary object cfg corresponds function stored object scope chain closing context function labeled value value paired security label value model may heap location primitive value includes integers booleans regular expressions arrays strings special values undefined null contains 
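Stepping back to the deferred NSU check introduced at the start of this passage, the discipline can be sketched as three checks: a sensitive upgrade of a register (or other adversary-invisible location) merely marks the written value as partially leaked, whereas branching on a starred value or letting one flow to the heap halts execution. The sketch below reuses the illustrative TaintLabel; the LabeledValue type and function names are assumptions made for the sketch.

#include <cstdio>
#include <cstdlib>

struct LabeledValue {
    // payload elided for the sketch
    TaintLabel label;
};

[[noreturn]] void haltExecution(const char* why) {
    std::fprintf(stderr, "IFC violation: %s\n", why);
    std::abort();
}

// Registers and variable environments are invisible to the adversary, so a
// sensitive upgrade is recorded with the star taint instead of halting.
void writeRegister(LabeledValue& dst, const LabeledValue& src, TaintLabel pc) {
    TaintLabel l = src.label.join(pc);
    if (!pc.flowsTo(dst.label)) l = l.withStar();   // deferred, not halted
    dst.label = l;
    // dst payload would be updated here
}

// Branching on a partially leaked value could reveal it, so halt.
void checkBranchGuard(const LabeledValue& cond) {
    if (cond.label.isPartiallyLeaked()) haltExecution("branch on starred value");
}

// Heap locations are adversary-observable: a starred value must not reach them.
void checkHeapWrite(const LabeledValue& src) {
    if (src.label.isPartiallyLeaked()) haltExecution("starred value flows to heap");
}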
one incomplete function call contains array registers local variables cfg function represented return address node cfg previous frame pointer allows access variables outer scopes additionally exception table maps potentially bytecode function exception handler within function surrounds bytecode exception handler exists points sen function conservatively assume unknown code may throw exception bytecodes call eval purpose denotes size top frame register contains labeled value scope chain sequence scope chain nodes scns denoted paired labels webkit scope chain node may either object variable environment array labeled values thus field parent object prototype field function object ordinary property also actual model fields map general property descriptors also contain attributes along labeled values elide attributes keep presentation simple entry triple security label node cfg pointer call stack simplicity ignore fourth boolean field described section presentation enter new control context push new together ipd entry point control context pointer current pair uniquely identifies control context ends necessary distinguish branch point different recursive calls function semantics use isipd pop stack takes current instruction current call stack returns new isipd otherwise explained section optimization push new node onto ipd differs corresponding pair top stack handle exceptions correctly also require sen otherwise join label top stack formalized function whose obvious definition elide pair syntactic entity security label write entity label particular semantics ifc cfgs present semantics faithfully models implementation using cfgs sens semantics defined set state transition rules define judgment fig shows rules selected bytecodes reasons space omit rules bytecodes formal descriptions like opcall used rules shorthand else prim reads values two registers performs binary operation generically denoted values writes result register dst dst assigned join labels head implement deferred nsu section existing label dst compared current label lower label dst joined note premise isipd pops entry ipd matches new program node premise occurs semantic rules jfalse conditional jump skips offset number successive nodes cfg register cond contains false else next node formally node branches either right left cfg accordance deferred nsu operation performed cond labeled jfalse also starts new control context new node pushed top label join cond current label top stack unless ipd branch point already top stack sen case join new dst dst prim dst dst succ isipd cond target offset cond cond cond false left right ipd isipd jfalse scope pushscope scope succ isipd func args func opcall func args func ipd isipd call ret base prop value direct value direct true putdirect base prop value putindirect base prop value succ isipd res opret res isipd throw excvalue throwexception isipd fig semantics selected rules label previous traversed bottom top always monotonically labels updates property prop object pointed register base explained section allow value written labeled flag direct states whether traverse prototype chain finding property set compiler optimization flag true chain traversed putdirect handles case direct false chain traversed putindirect importantly chain traversed resulting value labeled join prototype labels structure labels traversed objects standard necessary prevent implicit leaks pointers structure changes objects corresponds start construct obj pushes object pointed register scope scope chain pushing object 
scope chain implicitly leak information program context later also label nodes added chain deferred nsu applies scope chain pointer registers call invokes function target object stored register func due deferred nsu call proceeds func call creates new initializes arguments scope chain pointer initialized function object field cfg return node new frame cfg copied function object pointed func formalized opcall whose details omit call branch instruction pushes new label join current func structure label function object unless ipd current node sen already top case join new previous call also initializes new registers labels new separate bytecode shown executed first called function sets register values undefined eval similar call code executed also compiled ret exits function returns control caller formalized opret return value written interpreter variable throw throws exception passing value register argument exception handler push semantics ensure exception handler present pointed top throwexception pops transfers control exception handler looking exception table exception value register transferred handler interpreter variable semantics bytecodes described section correctness ifc prove ifc analysis guarantees terminationinsensitive intuitively means program run twice two states observationally equivalent adversary executions terminate two final states also equivalent adversary state theorem formally formalize equivalence various data structures model nonstandard data structure use cfg graph equality suffices complication low heap locations allocated two runs need identical adopt standard solution parametrizing definitions equivalence partial bijection heap locations idea two heap locations related partial bijection created corresponding allocations two runs define rather standard relation means states left right equivalent observer level bijection heap locations details presented section theorem suppose hend hend implementation instrumented webkit engine javascriptcore implement ifc semantics previous section function starts executing generate cfg calculate ipds nodes static analysis bytecode modify compiler emit slightly different functionally equivalent bytecode sequence finally blocks needed accurate computation ipds evaluation purposes label source script script domain origin seen domain dynamically allocated bit label general instrumentation terminates script violates normalized time interpreter jit basic mized access bitops crypto date math regexp string sunspider tests fig overheads basic optimized ifc sunspider benchmarks ifc however purpose evaluating overhead instrumentation ignore ifc violations experiments described also implement evaluate variant sparse labeling optimizes common case computations mostly use local variables registers bytecode function reads value heap label different propagate taints computations point registers assumed implicitly tainted simple optimization reduces overhead incurred taint tracking significantly microbenchmarks basic optimized version instrumentation adds approximately lines code webkit baseline evaluation uninstrumented interpreter jit disabled comparison also include measurements jit enabled experiments based webkit build running safari machine intel xeon processor ram runs mac version microbenchmark executed standard sunspider benchmark suite uninstrumented interpreter jit disabled jit enabled basic optimized ifc instrumentations jit disabled results shown figure ranges sunspider tests shows average execution time normalized baseline 
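Returning briefly to the sparse-labeling variant mentioned above before the benchmark numbers: the idea, as I read the text, is that a frame performs no per-operation label propagation while its computation has stayed within local registers, all registers being implicitly treated as carrying the frame's base label, and switches to explicit tracking the first time a heap read returns a differing label. The sketch below is conjectural beyond that reading; names are illustrative and TaintLabel is the type from the earlier sketch.

// Sparse-labeling sketch: skip per-operation propagation until a heap read
// brings in a label different from the frame's implicitly assumed base label.
struct FrameTaintState {
    TaintLabel baseLabel;           // label implicitly assumed for all registers
    bool explicitTracking = false;  // becomes true once a differing label is seen

    void onHeapRead(TaintLabel read) {
        bool sameAsBase = read.flowsTo(baseLabel) && baseLabel.flowsTo(read);
        if (!explicitTracking && !sameAsBase)
            explicitTracking = true;  // from now on, propagate labels per operation
    }
};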
uninstrumented interpreter jit disabled averaged across runs error bars standard deviations although overheads ifc vary test test average overheads baseline basic ifc optimized ifc respectively test regexp almost zero overhead spends time native code yet instrumented also note expected configuration performs extremely well sunspider benchmarks normalized javascript time interpreter jit basic instrumentagon opgmized instrumentagon google yahoo amazon wikipedia ebay websites bing linkedin live twi fig overheads basic optimized ifc real websites macrobenchmarks measured execution time intial popular english language websites load website safari measure total time taken execute code without user interaction excludes time network communication internal browser events establishes conservative baseline results normalized baseline shown fig overheads less average around instrumentations interestingly observe optimization less effective real websites indicating real accesses heap often sunspider tests compared amount time takes fetch page network render overheads negligible enabling jit worsens performance compared baseline indicating code executed jit useful also experimented jsbench sophisticated benchmark derived code wild average overhead jsbench tests total iterations approximately instrumentations average time running benchmark tests uninstrumented interpreter jit disabled standard deviation mean average time running benchmark tests instrumented interpreter optimized version respectively standard deviation mean two cases conclusion future work explored dynamic information flow control bytecode webkit production engine formally model bytecode semantics instrumentation prove latter correct identify challenges largely arising pervasive use unstructured control flow bytecode resolve using limited static analysis evaluation indicates moderate overheads practice ongoing work instrumenting dom native methods also plan generalize model theorem take account reactive nature web browsers going beyond noninterference design implementation policy language representing allowed information flows looks necessary acknowledgments work funded part deutsche forschungsgemeinschaft dfg grant information flow control browser clients priority program reliably secure software systems german federal ministry education research bmbf within centre privacy accountability cispa saarland university references richards hammer burg vitek eval men study use eval javascript applications mezzini ecoop volume lncs jang jhala lerner shacham empirical study privacyviolating information flows javascript web applications proc acm conference computer communications security richards hammer zappa nardelli jagannathan vitek flexible access control javascript proc acm sigplan international conference object oriented programming systems languages applications oopsla hedin sabelfeld security core javascript proc ieee computer security foundations symposium hedin birgisson bello sabelfeld jsflow tracking information flow javascript apis proc acm symposium applied computing devriese piessens noninterference secure proc ieee symposium security privacy groef devriese nikiforakis piessens flowfox web browser flexible precise information flow control proc acm conference computer communications security goguen meseguer security policies security models proc ieee symposium security privacy myers liskov decentralized model information flow control proc acm symposium operating systems principles zdancewic myers robust declassification proc ieee 
computer security foundations workshop volpano irvine smith sound type system secure flow analysis comput secur january cleary shirley hammer information flow analysis javascript proc acm sigplan international workshop programming language systems technologies internet clients austin flanagan permissive dynamic information flow analysis proc acm sigplan workshop programming languages analysis security bohannon pierce weirich zdancewic reactive noninterference proc acm conference computer communications security maffeis mitchell taly operational semantics javascript proc asian symposium programming languages systems aplas guha saftoiu krishnamurthi essence javascript proc european conference programming politz carroll lerner pombrio krishnamurthi tested semantics getters setters eval javascript proceedings dynamic languages symposium bodin chargueraud filaretti gardner maffeis naudziuniene schmitt smith trusted mechanised javascript specification proc acm symposium principles programming languages guarnieri pistoia tripp dolby teilhet berg saving world wide web vulnerable javascript proc international symposium software testing analysis issta chugh meister jhala lerner staged information flow javascript proc acm sigplan conference programming language design implementation austin flanagan efficient information flow analysis proc acm sigplan fourth workshop programming languages analysis security zdancewic programming languages information security phd thesis cornell university august birgisson hedin sabelfeld boosting permissiveness dynamic tracking testing computer security esorics volume lncs springer berlin heidelberg austin flanagan multiple facets dynamic information flow proc annual acm symposium principles programming languages bielova devriese massacci piessens reactive browser model international conference network system security nss bohannon pierce featherweight firefox formalizing core web browser proc usenix conference web application development webapps denning lattice model secure information flow commun acm may dhawan ganapathy analyzing information flow browser extensions proc annual computer security applications conference acsac robling denning cryptography data security longman publishing boston usa xin zhang efficient online detection dynamic control dependence proc international symposium software testing analysis masri podgurski algorithms tool support dynamic information flow analysis information software technology lengauer tarjan fast algorithm finding dominators flowgraph acm trans program lang syst january richards gal eich vitek automated construction javascript benchmarks proceedings acm international conference object oriented programming systems languages applications appendix data structures formal model described section typechecked various data structures used defining functions used semantics language given figure javascript program represented structure containing source boolean flag indicating strict mode set instruction indicated structure consisting opcode list operands opcode string indicating operation operand union registerindex immediatevalue identifier boolean funcindex offset immediatevalue denotes directly supplied value opcode registerindex index register containing value operated upon identifier represents string name directly used opcode boolean often flag indicating truth value parameter offset represents offset control jumps similarly functionindex indicates index function object invoked function source code represented form control flow 
graph cfg formally defined struct list cfg nodes contain instructions performed edges point next instruction program multiple outgoing edges indicate branching instruction also contains variables indicating number variables used function code reference globalobject labels interpreted structure consisting long integer label label represents value label interpreted bit vectors special label star represents partially leaked data used deferred check program counter represented stack contains context label ipd operation pushed node callframe current node handler flag indicating presence exception handler different types values used operands performing operations include boolean integer string double objects special values like nan undefined values associated label wrapped jsvalue class values used data structures type jsvalue objects consist properties prototype chain pointer associated label structure label object properties represented structure propertyname descriptor descriptor property contains value boolean flags property label struct sourcecode string programsrc bool strictmode struct jsvalue valuetemplate data jslabel label typedef char opcode union operand int immediatevalue string identifier int registerindex int funcindex bool flag int offset struct instruction opcode opc operand opr struct cfgnode instruction inst struct cfgnode left struct cfgnode right struct cfgnode succ struct propertydescriptor jsvalue value bool writable bool enumerable bool configurable jslabel structlabel struct property string propertyname propertydescriptor pdesc struct propertyslot property prop propertyslot next struct register jsvalue value struct callframenode register cfg cfg cfgnode returnaddress scopechainnode jsfunctionobject callee jslabel calleelabel int argcount bool getter int dreg struct callframestack callframenode cfn callframestack previous struct pcnode jslabel struct jsobject cfgnode ipd property property callframenode cfn struct proto bool handler jslabel struct cfg jsobject struct cfgnode cfgnode prototype struct pcstack jsglobalobject globalobject jslabel structlabel pcnode node int numvars pcstack previous int numfns bool strictmode struct heap unsigned location struct jsactivation jsobject callframenode callframenode struct jslabel jslabel structlabel label enum functiontype jsfunction hostfunction enum scopechainobjecttype enum specials lexicalobject variableobject nan undefined struct jsfunctionobject jsobject union schainobject union valuetype cfg funccfg jsobject obj bool scopechainnode scopechain jsactivation actobj int functiontype ftype string double struct scopechainnode jsobject struct jsglobalobject schainobject object jsobject scopechainobjecttype scobjtype jsfunctionobject evalfunction scopechainnode next union valuetemplate jsobject objectprototype jslabel scopelabel specials jsobject functionprototype valuetype fig data structures heap collection objects associated memory address essentially map location object subtypes jsobject define function object global object function object contains pointer associated cfg scope chain also contains field defining type function represents namely host made various nodes contains set registers associated cfg return address function pointer scope chain exception table registers store values objects used operands performing operations exception table contains details handlers associated different instructions cfg scope chain list nodes containing objects activation objects along label indicating context object node added activation 
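To make the flattened structure listing above easier to follow, here is a lightly reformatted sketch of a few of the declarations. Field names and types are partly conjectural where the listing is garbled, the original operand union is modelled as a tagged struct, and payload fields are elided; treat this as illustrative rather than the model's exact code.

#include <cstdint>
#include <string>
#include <vector>

// Labels are bit vectors stored in an integer word; objects additionally carry
// a structure label.
struct JSLabel {
    uint64_t label;        // value label, interpreted as a bit vector
    uint64_t structLabel;  // structure label carried by objects
};

// Every value the interpreter manipulates is paired with a label.
struct JSValue {
    // ValueTemplate data;  // primitive value or heap location, elided
    JSLabel label;
};

// An operand is one of: immediate, identifier, register index, function index,
// boolean flag, or jump offset (a union in the listing, a tagged struct here).
struct Operand {
    enum class Kind { Immediate, Identifier, RegisterIndex, FuncIndex, Flag, Offset } kind;
    int64_t     num;    // Immediate / RegisterIndex / FuncIndex / Flag / Offset
    std::string ident;  // Identifier
};

// A bytecode instruction: an opcode (a string in the model) plus its operands.
struct Instruction {
    std::string opcode;
    std::vector<Operand> operands;
};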
object structure containing pointer node activation object created next section defines different procedures used semantics language statement stop implies program execution hangs algorithms different used semantics presented section described procedure isinstanceof jslabel context jsvalue obj jsvalue protoval oproto oproto oproto protoval ret jsvalue true context return ret end oproto context end ret jsvalue false context return ret end procedure procedure opret callframestack callstack int ret jsvalue retvalue ret hostcallframeflag return nil callstack retvalue end return callstack retvalue end procedure procedure opcall callframestack callstack cfgnode int func int argcount jsvalue funcvalue func jsfunctionobject fobj callframenode sigmatop new callframenode callframenode prevtop sigmatop calltype calltype getcalldata funcvalue fobj calltype calltypejs scopechainnode argcount argcount argcount end else calltype calltypehost stop end callstack return retstate end procedure modeled procedure opcalleval jslabel contextlabel callframestack callstack cfgnode int func int argcount jsvalue funcvalue func jsfunctionobject fobj jsobject variableobject argument arguments ishosteval funcvalue scopechainnode argcount sourcecode progsrc compiler progsrc cfg evalcodeblock compiler progsrc unsigned numvars unsigned numfuncs numvars numfuncs jsactivation variableobject new jsactivation callstack schainobject scobj variableobject scobj variableobject contextlabel else scopechainnode variableobject break end end end numvars identifier iden iden iden end end numfuncs jsfunctionobject fobj fobj end end evalcodeblock callstack return retstate else return opcall contextlabel callstack func argcount end end procedure procedure createarguments heap callframestack callstack jsobject jsargument jsargument callstack jsargument jsvalue jsargument return retstate end procedure procedure newfunc callframestack callstack heap heap int funcindex jslabel context cfg cblock sourcecode fccode funcindex cfg fcblock compiler fccode jsfunctionobject fobj jsfunctionobject fcblock context fobj heap jsvalue fobj return retstate end procedure procedure createactivation callframestack callstack jslabel contextlabel jsactivation jsactivation new jsactivation callstack contextlabel schainobject scobj jsactivation jsvalue vactivation jsvalue jsactivation contextlabel scobj variableobject contextlabel contextlabel else stop end return retstate end procedure procedure createthis jslabel contextlabel callframestack callstack heap jsfunctionobject callee propertyslot callee string str prototype jsvalue proto str jsobject obj new jsobject contextlabel contextlabel obj jsvalue obj return retstate end procedure procedure newobject heap jslabel contextlabel jsobject obj emptyobject contextlabel objectprototype contextlabel obj jsvalue obj return retstate end procedure procedure getpropertybyid jsvalue string int dst jsobject jslabel label jsvalue ret jsundefined label return ret end null jsvalue jsfunctionobject funcobj jsfunctionobject callframenode sigmatop new callframenode sigmatop scopechainnode cfg newcodeblock newcodeblock true dst callstack else ret getproperty label end return ret else end label label end end procedure procedure putdirect jslabel contextlabel callframestack callstack heap int base string property int propval jsvalue basevalue base value jsvalue propvalue propval value jsobject obj propertydescriptor datapd propertydescriptor true true true propvalue property datapd contextlabel obj return end procedure procedure 
putindirect jslabel contextlabel callframestack callstack heap int base string property int val jsvalue basevalue base jsvalue propvalue val jsobject obj bool isstrict contextlabel contextlabel property obj getproperty property isstrict property propvalue obj return end return putdirect contextlabel callstack base property val end procedure procedure delbyid jslabel contextlabel callframestack callstack heap int base identifier property jsvalue basevalue base jsobject obj int loc obj property prop property propertydescriptor prop contextlabel property jsvalue true return retstate end property prop isconfigurable jsvalue property loc obj jsvalue true return retstate end end jsvalue false return retstate else stop end end procedure procedure putgettersetter jslabel contextlabel callframestack callstack heap int base identifier property jsvalue gettervalue jsvalue settervalue jsvalue basevalue base jsobject obj int loc obj jsfunctionobject getterobj setterobj jsfunctionobject getterfuncobj null setterfuncobj null getterfuncobj end setterfuncobj end getterfuncobj null property getterobj end setterfuncobj null setterobj end propertydescriptor accessor propertydescriptor false false false true jsvalue jsvalue contextlabel property accessor contextlabel loc obj return end procedure procedure getpropnames callframestack callstack instruction int base int int size int breakoffset jsvalue baseval base jsobject obj propertyiterator propitr jsundefined jsundefined jsundefined breakoffset return retstate end jsvalue propitr jsvalue jsvalue return retstate end procedure procedure getnextpropname callframestack cstack instruction jsvalue base int int size int iter int offset int dst jsobject obj propertyiterator propitr iter topropertyiterator int rfile int rfile size string key jsvalue jsvalue key offset break end end return retstate end procedure procedure resolveinsc jslabel contextlabel scopechainnode scopehead string property jsvalue jslabel scopechainnode scn scopehead scn null propertyslot pslot property property contextlabel return end scn variableobject contextlabel else lexicalobject contextlabel end contextlabel scn scopenextlabel end jsundefined contextlabel return end procedure procedure resolveinscwithskip jslabel contextlabel scopechainnode scopehead string property int skip jsvalue jslabel scopechainnode scn scopehead scn variableobject contextlabel else lexicalobject contextlabel end contextlabel scn scopenextlabel end scn null propertyslot pslot property property contextlabel return end scn variableobject contextlabel else lexicalobject contextlabel end contextlabel scn scopenextlabel end jsundefined contextlabel return end procedure procedure resolveglobal jslabel contextlabel callframestack cstack string property jsvalue struct cfg cblock jsglobalobject globalobject cblock getglobalobject propertyslot pslot globalobject property property contextlabel return end jsundefined contextlabel return end procedure procedure resolvebase jslabel contextlabel callframestack cstack scopechainnode scopehead string property bool strict jsvalue scopechainnode scn scopehead cfg cblock jsglobalobject gobject scn null jsobject obj contextlabel contextlabel propertyslot pslot obj null strict property emptyjsvalue contextlabel return end property jsvaluecontainingobject obj contextlabel return end scn scn null contextlabel scn scopenextlabel end end jsvalue gobject contextlabel return end procedure procedure resolvebaseandproperty jslabel contextlabel callframestack cstack int bregister int pregister 
string property jsvalue scopechainnode scn scn null jsobject obj contextlabel contextlabel propertyslot pslot obj property property contextlabel jsvaluecontainingobject obj contextlabel return ret end scn scn null contextlabel scn scopenextlabel end end end procedure procedure getscopedvar jslabel contextlabel callframestack callstack heap int index int skip jsvalue scopechainnode scn variableobject contextlabel structlabel else lexicalobject contextlabel structlabel end contextlabel scn scopelabel scn end index variableobject structlabel else lexicalobject structlabel end return end procedure procedure putscopedvar jslabel contextlabel callframestack callstack heap int index int skip int value callframestack cstack scopechainnode scn jsvalue val value variableobject contextlabel else lexicalobject contextlabel end contextlabel scn scopelabel scn end cstack contextlabel index val return cstack end procedure procedure pushscope jslabel contextlabel callframestack callstack heap int scope scopechainnode jsvalue scope jsobject schainobject scobj contextlabel scobj lexicalobject contextlabel else star scobj lexicalobject star end return callstack end procedure procedure popscope jslabel contextlabel callframestack callstack heap scopechainnode jslabel contextlabel else stop end return callstack end procedure procedure jmpscope jslabel contextlabel callframestack callstack heap int count scopechainnode jslabel contextlabel else stop end end return callstack end procedure procedure throwexception callframestack callstack cfgnode iota cfgnode handler end end handler iota handler callstack end procedure semantics prim dst dst prim dst dst succ isipd prim reads values two registers performs binary operation generically denoted writes result dst register label assigned value dst register join label value head order avoid implicit leak information label existing value dst compared current context label label lower context label label value dst set mov mov dst src src src dst dst dst succ isipd mov copies value src register dst register label assigned value dst register join label value src head order avoid implicit leak information label existing value dst compared current context label label lower context label label value dst joined jfalse jfalse cond target offset cond cond ipd false cond false left right isipd jfalse branching instruction based value cond register decides branch take operation performed value cond labelled contains terminate execution prevent possible leak information push function defined rule following node pushed top containing ipd branching instruction label value cond joined context define context branch ipd instruction sen top join label top context label determined cond register target offset left right ipd false isipd another branching instruction value less jumps target else continues next instruction operation performed values labelled one contains abort execution prevent possible leak information push function defined rule following node pushed top containing ipd branching instruction join label values joined context define context branch ipd instruction sen top join label top context label determined typeof typeof dst src src determinetype src dst dst succ dst isipd typeof determines type string src according ecmascript rules puts result register dst deferred nsu check dst writing result determinetype function returns data type value passed parameter instanceof dst value cprot isinstanceof value cprot dst instanceof dst dst succ isipd instanceof tests whether cprot 
prototype chain object register value puts boolean result dst register deferred nsu check enter enter succ isipd enter marks beginning code block ret ret res opret res isipd ret last instruction executed function pops returns control callee return value function written local variable interpreter read next instruction executed end end res opend res end marks end program opend passes value present res register caller native function invoked interpreter call func args func opcall func args func ipd isipd call initially checks function object label label contains program execution aborted reason termination possible leak information explained call creates new copies arguments initializes registers pointer codeblock return address registers initialized undefined assigned label obtained joining label context function created label function object treat call branching instruction hence push new node top label determined along ipd field push function determined looking exception table contains associated exception handler sets field true else set false ipd sen join label top stack currently calculated label points instruction pointer first instruction new code block res res res succ res isipd copies return value res register label assigned value res register join label return value head order avoid implicit leak information deferred performed func args func opcalleval func args func ipd isipd calls function string passed argument converted code block func register contains original global eval function performed local scope else similar call dst createarguments dst dst dst succ isipd creates arguments object places pointer local dst register deferred nsu check label arguments object set context dst funcindex newfunc funcindex dst dst dst succ isipd constructs new function instance function funcindex current scope chain puts result dst deferred nsu check dst createactivation dst dst dst succ isipd creates activation object current already created writes dst deferred nsu check pushes object label head existing less context label pushed node set else set context construct construct func args func opcall func args func ipd isipd construct invokes register func constructor similar call javascript functions object passed first argument list arguments new object host constructors passed dst createthis dst dst dst succ isipd creates allocates object used construction later function object labelled context placed dst deferred nsu check prototype chain pointer also labelled context label dst newobject dst dst dst succ isipd constructs new empty object instance puts dst deferred nsu check object labelled context label prototype chain pointer also labelled context dst base prop vdst getpropertybyid base prop vdst dst dst dst succ isipd gets property named identifier prop object base register puts dst register deferred nsu check object contain property looks prototype chain determine proto objects contain property traversing prototype chain context joined structure label objects prototype chain pointer labels property found end chain joins property label context property found returns undefined joined label context label property put dst register property accessor property calls getter function sets getter flag updates destination register field register value inserted transfers control first instruction getter function base prop value direct value direct true putdirect base prop value putindirect base prop value succ isipd writes heap property object check label value register contains program aborts could 
potentially result implicit information flow writes property object basic functionality search property object prototype chain change property found new property current object property label context created based whether property object needs created object prototype chain object calls putdirect putindirect respectively dst base prop base delbyid base prop dst dst dst succ isipd deletes property specified prop object contained base structure label object less context deletion happen property found property deleted boolean value true written dst else writes false dst label boolean value structure label object joined property label base prop getter setter getter setter putgettersetter base prop getter setter succ isipd puts accessor descriptor object register base initially checks structure label object greater equal context property accessor properties added given register prop property label accessor functions set context putgettersetter calls putindirect internally sets property object specified value dst base size breaktarget offset base getpropnames base size breaktarget base dst size dst dst size size undefined base base base prop base ipd false isipd creates property name list object register base puts dst initializing size iteration list deferred nsu check base undefined null jumps breaktarget branching instruction pushes label join property labels structure label object along ipd ipd instruction sen top join label top context label determined dst base size iter target offset getnextpropnames base size iter target dst dst dst isipd copies next name property name list created getpnames iter dst deferred nsu check jumps target names left continues next instruction although behaves branching instruction context pertaining opcode already pushed also ipd corresponding instruction one determined thus push instruction resolve resolve dst prop resolveinsc prop dst dst dst succ isipd resolve searches property scope chain writes dst register found label property written dst join context label nodes structure label object contained traversed scope chain label associated pointers chain node object property found initial label value contained dst lower context label label value dst joined case property found instruction throws exception similar throw described later dst prop skip resolveinscwithskip prop skip dst dst dst succ isipd looks property named prop scope chain similar resolve skips top skip levels writes result register dst property found also raises exception behaves similarly resolve dst prop resolveglobal prop dst dst dst succ isipd looks property named prop global object structure global object matches one passed looks global object else falls back perform full resolve dst prop isstrict bool resolvebase prop isstrict dst dst dst succ isipd looks property named prop scope chain similar resolve writes object register dst property found isstrict false global object stored dst bdst pdst prop bdst pdst resolvebaseandproperty basedst propdst prop bdst bdst pdst pdst bdst pdst bdst bdst succ pdst pdst isipd looks property named prop scope chain similar writes object register bdst also writes property pdst property found raises exception like resolve dst index skip getscopedvar index skip dst dst dst succ isipd loads contents index local scope chain skipping skip nodes places dst deferred nsu label value dst includes join current context along structure label objects skipped nodes index skip value value putscopedvar index skip value succ isipd puts contents value index local scope chain 
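Returning to the get_by_id lookup described above, a sketch of the labelled prototype-chain traversal is given here: while walking the chain, the context accumulates the structure labels and prototype-pointer labels of every object traversed, and the returned value (or undefined, if the property is absent) carries that accumulated label. The object shape and names below are assumptions for the sketch; TaintLabel and LabeledValue are the types from the earlier sketches, and the accessor/getter case is omitted.

#include <optional>
#include <string>
#include <unordered_map>
#include <utility>

// Minimal object shape for the sketch: named properties plus a labelled
// prototype pointer and a structure label.
struct SketchObject {
    std::unordered_map<std::string, LabeledValue> properties;
    SketchObject* proto = nullptr;
    TaintLabel protoLabel;    // label on the prototype pointer
    TaintLabel structLabel;   // structure label of the object
};

// get_by_id-style lookup with label accumulation along the prototype chain.
std::pair<std::optional<LabeledValue>, TaintLabel>
lookupProperty(const SketchObject* obj, const std::string& name, TaintLabel pc) {
    TaintLabel acc = pc;
    for (const SketchObject* o = obj; o != nullptr; o = o->proto) {
        auto it = o->properties.find(name);
        if (it != o->properties.end()) {
            return {it->second, acc.join(it->second.label)};   // found: join context
        }
        acc = acc.join(o->structLabel).join(o->protoLabel);    // keep traversing
    }
    return {std::nullopt, acc};   // property missing: undefined at label acc
}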
skipping skip nodes label value includes join current context along structure label objects skipped nodes scope pushscope scope succ isipd converts scope object pushes onto top current scope chain contents register scope replaced created object scope chain pointer label set context popscope succ isipd removes top item current scope chain scope chain pointer label greater equal context count target jmpscope count succ isipd removes top count items current scope chain scope chain pointer label greater equal context jumps offset specified target throw throw excvalue throwexception isipd throw throws exception points exception handler next instruction executed exception handler might function earlier function present program terminates exception handler edge synthetic exit node apart throwexception pops reaches containing exception handler writes exception value local interpreter variable excvalue read catch excvalue catch catch excvalue succ excvalue empty isipd catch catches exception thrown instruction whose handler corresponds catch block reads exception value excvalue writes register label register less context joined label makes excvalue empty proceeds execute first instruction catch block proofs results fields frame denoted following symbols represents ipd field top frame returns label field top frame returns field top frame definitions proofs follow assume level attacker lattice presented earlier equivalence relation level attacker omitted clarity purposes definitions proofs definition partial bijection partial bijection binary relation heap locations satisfying following properties using partial bijections define equivalence values labeled values objects definition value equivalence two values equivalent written either primitive value definition labeled value equivalence two labeled values equivalent written one following holds first clause definition standard check equates partially leaked value every labeled value objects formally denoted flags correspond property name respective values flags represent writable enumerable configurable flags described propertydescriptor structure cpp model current model allow modification flags always set true thus need account flags equivalence definition represents labelled pointer object prototype definition object equivalence ordinary objects flags flags say iff either following hold particular function objects say iff either equality nodes cfgs means portions cfgs reachable equal modulo renaming operands bytecodes equivalence scope chains defined allow flow heaps need corresponding clauses definition object equivalence definition heap equivalence two heaps say iff unlike objects allow permeate scope chains definition scope chain equivalence must account scope chains denoted node contains label along object either jsactivation jsobject represented definition scope chain equivalence two scope chain nodes say one following holds equivalence two scope chains defined following rules nil nil nil nil one following holds definition equivalence two call frames say iff registers registers note register simply labeled value semantics clause definition equivalence two say iff corresponding nodes label equal except field proofs follow two nodes equal respective fields equal except field definition equivalence given suppose lowest node lowest node node pointed node pointed prefix including empty prefix including empty iff definition state equivalence two states equivalent written iff lemma confinement lemma proof labelled nodes remain unchanged branching 
instructions pushing new node would label due monotonicity even ipd corresponding would pop labelled node thus labelled nodes remain unchanged hence assume first node labelled context stack higher labelled nodes first node labelled corresponding nodes label remain hence case analysis instruction type prim dst dst premise prim dst definition dst dst dst dst contain definition dst dst dst changes definition also remain unchanged definition thus mov similar prim jfalse similar jfalse typeof similar prim instanceof similar prim enter ret alse popped unchanged true sets res let prefix changes effect callframe equivalence cases give definition definition end confinement lemma apply call pushes top lowest node joins label labelled nodes remain unchanged remain unchanged definition similar prim eval similar call strict mode pushes node label else labels mode push node remains equivalent corresponding definition unchanged definition initial definition argument object created step taken similar prim initial definition function object created step taken similar prim initial definition argument objects created step taken puts object dst label depending dst value initial label also pushes node containing object scope chain label nil thus definition definition unchanged definition construct similar call similar initial definition new object created step taken similar prim similar mov property data property property accessor property getter invoked invocation getter pushes entry top remains lowest node joins label labelled nodes remain unchanged remain unchanged definition sets property object base object value label structure label object thus object remains lowequivalent definition thus definition also deletes property structure label object thus object remains definition definition similar mov sets accessor property object base object getter setter label structure label object thus object remains definition thus definition also similar mov jfalse similar mov resolve property exists similar mov similar throw similar resolve similar resolve similar resolve similar resolve similar mov writes value indexth register skipth node index index else index index unchanged thus definition pushes node label nil else assigns label thus registers remain unchanged definition unchanged definition pops node registers remain unchanged definition unchanged definition similar throw pops handler reached property ipd ensures either thus ones remain unchanged thus definition catch similar mov corollary proof prove proof induction basis definition labelled nodes equal lemma labelled nodes equal thus labelled nodes equal definition prove basis lemma lowest hlabelled node grows monotonically let pointed lowest node size definition size prefix transitivity equality three cases respectively sizes following conditions hold registers registers registers registers thus registers registers number registers given let represents values registers respectively definition case lemma definition case either case value remains unchanged thus definition following cases arise lemma thus thus definition two scope chains due confinement lemma nil either case definition three one following holds due confinement lemma definition due confinement lemma definition iii due confinement lemma either one hold definition additions scope chain thus definition thus either thus thus thus definition definition corollary proof induction basis definition definition lemma thus identity bijection thus contain ordinary object respective structure labels either 
definition structure label object thus respective properties respective properties either also thus definition contain function object respective structure labels either definition structure label function object thus structure label function object thus result objects cfgs scope chains corollary thus thus lemma supporting lemma suppose proof every instruction executes isipd end operation ipd corresponding pops first node would either pop runs none thus instructions push branch explain respective instructions proof case analysis instruction type prim new object created src src case analysis definition src src src src src dst dst hence dst dst definition dst dst dst dst definition dst dst dst dst definition symmetrical reasoning dst dst dst dst definition dst changes top thus definition unchanged definition mov similar reasoning prim single source jfalse new object created cond cond cond cond label pushed cond cond label pushed ipd would cfg cases ipd sen join label label obtained runs thus ipd sen node thus ipd field also field false cases thus pushed node cases hence either ipd may may equal similar reasoning jfalse similar mov new object created label value dst label context joined label prototype chain pointers traversed value value structure labels objects pointed value value respectively definition dst dst dst dst definition objects similar properties prototype chains instance none traversed prototype chain objects dst dst false else present true dst dst definition one traversed prototype chain objects dst dst dst dst definition dst changes top thus definition unchanged definition enter new object created ret new object created since two cases arise getter flag alse popped similarly popped changed definition true resgister changes defintion res res res res end call new object created pushes node similar jfalse difference field cfgs associated exception handler set field true runs else false thus node pushed hence unc unc remain unchanged correspond field lowest node definition registers created new contain undefined label function objects implying also return addresses callee unchanged similar move similar strict mode pushes node label pushed nodes thus definition mode push anything similar call thus let argument object created dst dst objects thus definition definition also definition objects lowequivalent let function object created function objects func func dst dst definition thus definition definition also definition objects similar construct similar call similar similar new object created base base either objects properties labelled definition case data property either dst dst dst dst value prop definition dst dst case accessor property dst changes top dst dst since thus definition unchanged definition reasoning similar call new object created value labelled properties created modified label structure labels respective objects become else value labelled properties created modified value label thus objects remain definition hence definition new object created deleted property structure label object dst dst else labelled dst dst value true false depending whether property deleted definition structure labels objects properties definition structure label thus objects remain definition definition reasoning similar new object created base base objects thus structure label object either runs properties values definition ipd cases field field set false thus similar mov done dst size similar mov done dst base resolve new object created property found object node labels also property 
value object node labels label property thus dst dst definition thus property found runs similar throw property found second run first run property context exception thrown also unchanged similar resolve similar resolve similar resolve similar resolve new object created reads indexth register object skipth node writes dst value labelled else labelled definition dst dst definition thus new object created writes scope chain node value value labelled labelled runs scope chains remain equivalent value checks label register puts value label thus definition definition definition new object created pushes node containing object scope node label scope scope definition registers remain new object created pops node scope chain registers remain similar throw new object created property ipd ensures ones remain unchanged thus definition catch similar mov lemma supporting lemma suppose proof starting instruction high context runs might get two different instructions possible branching instruction first place divergence happened high context prove property ipds know pushes node top originally ipd pops node since start instrucion ipd prove pushes equal nodes ipds lemma get ipd ipd ipd ipd pops correspond nth mth step ipd point pop final node corollary ipd pops node pushed run property ipd ipd would pop first frame labelled thus symmetric case prove lemma get corollary get lemma ipd compare ipd instruction lie comparison suffice let represented respectively represented respectively case analysis different cases definitions show either definition definition definition lets scopechains represent scopechains respective nth mth step two runs node labels scope chain pointers following cases arise nil case either remain nil head label rules instructions modify case case case scopechains remain unchanged case jfalse case base undefined base undefined base base hence base base thus dst size similarly dst size registers remain unchanged thus case know symmetric case prove lemma get corollary lemma assume get object object respective objects nth mth step two runs case analysis different cases definitions show definition definition similarly function objects structure labels would remain originally remain cfgs scopechains case jfalse thus case know symmetric case definition trace trace defined sequence configurations states resulting program evaluation program evaluation corresponding trace given definition trace defined inductively nil nil else theorem suppose two program evaluations respective given proof proof proceeds induction basis assumption prove let lemma lemma corollary suppose hend hend proof empty end steps semantics know context runs would push pop number nodes thus take number steps context let number states context theorem thus hend hend definition | 6 |
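The termination-insensitive non-interference guarantee established above can be stated compactly as follows, using the text's notation of l-equivalence up to a partial bijection on heap locations; this is a paraphrase for orientation, not the paper's verbatim statement.

\begin{theorem}[Termination-insensitive non-interference]
  If $s_1 \sim_{\ell,\beta} s_2$ for an adversary level $\ell$ and a partial
  bijection $\beta$ on heap locations, and the instrumented interpreter produces
  terminating runs $s_1 \Downarrow s_1^{\mathrm{end}}$ and
  $s_2 \Downarrow s_2^{\mathrm{end}}$, then there exists a partial bijection
  $\beta' \supseteq \beta$ such that
  $s_1^{\mathrm{end}} \sim_{\ell,\beta'} s_2^{\mathrm{end}}$.
\end{theorem}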
energy storage sharing smart grid modified auction based approach dec wayes tushar member ieee chai chau yuen senior member ieee shisheng huang member ieee david smith member ieee vincent poor fellow ieee zaiyue yang member ieee paper studies solution joint energy storage ownership sharing multiple shared facility controllers sfcs dwelling residential community main objective enable residential units rus decide fraction capacity want share sfcs community order assist storing electricity fulfilling demand various shared facilities end modified mechanism designed captures interaction sfcs rus determine auction price allocation shared rus governs proposed joint ownership fraction capacity storage decides put market share sfcs auction price determined noncooperative stackelberg game formulated rus auctioneer shown proposed auction possesses incentive compatibility individual rationality properties leveraged via unique stackelberg equilibrium solution game numerical experiments provided confirm effectiveness proposed scheme index grid shared energy storage auction theory stackelberg equilibrium incentive compatibility ntroduction nergy storage devices expected play significant role future smart grid due capabilities giving flexibility balance grid providing renewable energy improve electricity management distribution network reduce electricity cost opportunistic demand response improve efficient use energy distinct features make perfect candidate assist tushar yuen singapore university technology design sutd somapah road singapore email wayes tushar yuenchau chai state grid smart grid research institute beijing china email chaibozju huang ministry home affairs singapore email shisheng smith national ict australia nicta act australia adjunct australian national university email poor school engineering applied science princeton university princeton usa email poor yang state key laboratory industrial control technology zhejiang university hangzhou china email yangzy work supported part singapore university technology design sutd energy innovation research program eirp singapore idc grant part national science foundation grant smith work supported nicta funded australian government department communications australian research council residential demand response altering electricity demand due changes balance supply demand particularly residential community setting household equipped use devices significantly leverage efficient flows energy within community terms reducing cost decarbonization electricity grid enabling effective demand response however energy storage requires space particular large consumers like shared facility controllers sfcs large apartment buildings energy requirements high consequently necessitates actual installment large energy storage capacity investment cost storage substantial whereas due random usage facilities depending usage pattern different residents storage may remain unused furthermore use ess rus limited two reasons firstly installation cost devices high costs entirely borne users secondly ess mainly used save electricity costs rus rather offer support local energy authorities makes use economically unattractive hence need solutions capture problems related space cost constraints storage sfcs benefit rus supporting third parties end numerous recent studies focused energy management systems devices see next section however studies overlook potential benefits local energy authorities sfcs attain jointly sharing devices belonging rus particularly due recent cost 
reduction devices sharing devices installed rus sfcs potential benefit sfcs rus community see later context propose scheme enables joint ownership smart grid sharing leases sfcs fraction device use charges discharges rest capacity purposes contrary sfc exclusively uses portion devices leased rus work motivated authors discussed idea joint ownership devices domestic customers local network operators demonstrated potential benefits obtained sharing however policy developed determine fraction battery capacity shared network operators domestic users decided note owner device decide ieee trans smart grid tate whether take part joint ownership scheme sfcs fraction shared sfcs hence need solutions capture decision making process rus interacting sfcs network context propose joint ownership scheme participating storage sharing sfcs rus sfcs benefit economically due interactive nature problem motivated use auction theory study problem exploiting communications aspects auction mechanisms exchange information users electricity providers meet users demands lower cost thus contribute economic environmental benefits smart particular modify vickrey auction technique integrating stackelberg game auctioneer rus show modified scheme leads desirable joint ownership solution rus sfcs modify auction price derived vickrey auction benefit owner adaptation adopted game well keep cost savings sfcs maximum study attributes technique show proposed auction scheme possesses incentive compatibility individual rationality properties leveraged unique equilibrium solution game propose algorithm stackelberg game executed distributedly rus auctioneer algorithm shown guaranteed reach desired solution also discuss proposed scheme extended time varying case finally provide numerical examples show effectiveness proposed scheme importance necessity proposed study respect actual operation smart grid lies assisting sfcs large apartment buildings smart communities reduce space requirements investment costs large energy storage units furthermore participating storage sharing sfcs rus benefit economically consequently influence efficiently schedule appliances thus reduce excess use electricity stress energy management schemes new smart grid paradigm discussed however scheme discussed paper differs existing approaches terms considered system model chosen methodology analysis use set rules reach desired solution remainder paper organized follows provide comprehensive literature review related work section followed considered system model section iii proposed modified mechanism demonstrated section also discuss scheme adopted time varying environment numerical case studies discussed section finally draw concluding remarks section recent years extensive research effort understand potential devices residential energy management mainly due capabilities reducing intermittency renewable energy generation well lowering cost electricity related studies divided two general categories first category studies consisting assume ess installed within premises used solely owners order perform different energy management tasks optimal placement sizing control charging discharging storage devices second type studies deal devices installed within rus located different location electric vehicles evs ess evs used provide ancillary services rus local energy providers furthermore another important impact devices residential distribution grids studied particular studies focus use devices bring benefits stakeholders external energy markets authors propose 
optimization method siting sizing ess distribution grid capture storage stakeholders distribution system operators furthermore optimal storage profiles different stakeholders distribution grid operators energy traders derived based case studies real data studies aspects smart grid found seen discussion use devices smart grid limited address intermittency renewable generation assisting users take part energy management reduce cost electricity also extends assisting grid similar energy entities sfc generating revenues stakeholders however one similarity mentioned literature one entity owns uses according requirements nonetheless might always case large number community regard considering potential benefits sharing discussed paper investigates case sfcs smart community allowed share fraction ess owned rus third party auctioneer community representative proposed modified auction scheme differs existing techniques energy management number ways particularly contrast studies proposed auction scheme captures interaction sfcs rus whereby decision auction price determined via stackelberg game exploiting auction rules including determination rule payment rule allocation rule interaction sfcs rus greatly simplified instance determination rule easily identify number rus participating auction process leverage please note technique applied real distribution network electric vehicle charging stations using information power flow infrastructure smart grids may participate single entity group rus connected via aggregator ieee trans smart grid sicap either due fact sfcs ess ess sfcs large enough store excess energy time important note requirement sfcs stem type intermittent generation profile sfcs rus adopt example one consider proposed scheme based hybrid generation profile comprising solar wind generation however proposed technique equally suitable types intermittent generation well assume rus set rus system willing share parts sfcs network battery cap capacity wants put fraction market share sfcs scap sicap total capacity device amount sell maximum space might sell sfcs sicap sharing price decides yes leaves sharing market yes decides take part sharing sharing price maximum amount battery space share sfcs tradeoff sharing attractive amount want share rather uses needs run essential loads future electricity disruption within price electricity high fig fraction capacity willing share sfcs community determination auction price via stackelberg game payment rule furthermore one hand work complements existing works focusing potential energy management smart grid hand proposed work potential open new research opportunities terms control energy dispatch size exploring interactive techniques cooperative games optimization sharing end offer space one hand decides reservation price per unit energy hereinafter use space energy interchangeably refer space might share sfcs however price received sharing lower removes space market expected benefit joint sharing economically attractive hand sfc needs share space rus store energy decides reservation bid represents maximum unit price sfc willing pay sharing per unit rus smart community enter sharing market sfc removes commitment joint ownership rus market due reason mentioned graphical representation concept sharing decision making process sharing space sfcs shown fig please note keep formulation simple include specific storage model scheme however suitably modeling related parameters storage capacity scap parameters like proposed scheme adopted specific devices iii ystem 
odel let consider smart community consists large number rus individual home single unit large apartment complex large number units connected via aggregator acts single entity equipped device use store electricity main grid renewable energy sources perform management according price offered grid device storage device installed within premises used electric vehicles entire community considered divided number blocks block consists number rus sfc sfc set sfcs responsible controlling electrical equipment machines lifts parking lot lights gates water pumps lights corridor area particular block community shared used residents block regular basis sfc assumed renewable energy generation also connected main electricity grid appropriate communication protocols considering fact nature energy generation consumption highly sporadic let assume sfcs community need extra ess store electricity meeting demand respected shared facilities particular time day interaction arises choice sharing price sfcs rus well need sfcs share space store energy profits rus reap allowing ess shared give rise market sharing rus sfcs smart grid market involved rus sfcs interact decide many take part sharing ess also agree sharing parameters trading price amount space ieee trans smart grid auction based ownership vickrey auction type auction scheme bidders submit written bids auctioneer without knowing bids others participating auction highest bidder wins auction pays second highest bid price nevertheless paper modify classical vickrey auction model joint ownership scheme smart community consisting multiple customers sfcs multiple owners devices rus modification motivated following factors unlike classical vickrey auction modified scheme would enable multiple owners customers decide simultaneously independently whether take part joint sharing determination rule proposed auction process see shortly modification auction provides participating flexibility choosing amount space may want share sfcs cases auction lower expected reservation price finally proposed auction scheme provides solutions satisfy incentive compatibility individual rationality properties see later desirable mechanism adopts auction theory end proposed auction process shown fig consists three elements fig energy management smart community auction process consisting multiple rus devices auctioneer number sfcs shared considered model rus decide reservation prices also amount space willing share sfcs amount determined economic benefits expects obtain giving sfcs joint ownership device associated reluctance sharing reluctance share ess may arise rus due many factors instance sharing would enable frequent charging discharging ess reduce device hence may set higher increase reluctance participate sharing however interested earning revenue rather increasing life time reduce thus get net benefits sharing storage therefore given set bids storage requirement sfcs maximum amount decide put sharing strongly affected trading price reluctance sharing process context develop auction based joint ownership scheme next section understand proposed scheme involves different types users auctioneers sfcs rus therefore communication protocol used could asynchronous however study assume communication different entities system synchronous mainly due fact assume algorithm executed considered time slot duration time slot one hour therefore synchronization significant issue considered case communication complexity affordable example auctioneer wait five minutes receives data sfcs rus 
algorithm proposed section executed owner rus set devices expect earn economic benefits maximizing utility function letting sfcs share fraction spaces customer sfcs set need ess order store excess electricity particular time day sfcs offer rus price view jointly fraction devices auctioneer third party estate building manager controls auction process owners customers according predefined rules proposed auction policies consist determination rule payment rule storage allocation rule determination rule allows auctioneer determine maximum limit auction price pmax number sfcs rus actively take part sharing scheme auction process initiated payment rule enables auctioneer decide price customer needs pay owners sharing devices allows rus decide much storage space putting market share sfcs finally auctioneer allocates spaces sharing sfc following allocation rule proposed auction important note although customers owners access others private information amount shared required energy space sfc rules auction known participants joint ownership process please note life time degradation due charging discharging may true electromechanical systems system reluctance parameter refers opposite preference parameter hereinafter used refer auction price instead sharing trading price ieee trans smart grid sfcs rus network cosumers participating auction owner customer hence joint ownership would detrimental choice rus sfcs within set respectively consequently remove proposed auction process one desirable property auction mechanism participating agents auction mechanism cheat payment allocation rules established end propose determined sfcs rus engaged joint sharing process necessary condition matching total demand supply maintaining truthful auction scheme nevertheless truthful auction necessity sfc also allowed participate joint ownership auction price maximum auction price pmax vickrey price pmin owners participating auction storage amount fig determination vickrey price maximum auction price number participating rus sfcs auction process proposed scheme initially determines set sfcs rus effectively take part auction mechanism upper bound auction price pmax determined eventually payment allocation rules executed course auction plan payment rule note intersection demand supply curves demonstrates highest reservation price pmax participating rus according vickrey auction mechanism auction price sharing devices would second highest reservation price vickrey price indicated pmin hereinafter however note second highest price might considerably beneficial participating rus auction scheme contrast set pmax price could detrimental sfcs therefore make auction scheme attractive beneficial participating rus time cost effective sfcs strike balance pmax pmin propose scheme deciding auction price amount ess rus put market sharing according particular propose stackelberg game auctioneer decides auction price maximize average cost savings sfcs well satisfying desirable needs ess rus decide vector amount would like put market sharing benefits maximized please note solution proposed problem formulation also solved following distributed algorithms algorithms designed via optimization technique stackelberg game stackelberg game decision making process leader game takes first step choose strategy followers hand choose strategy response decision made leader proposed game assume auctioneer leader rus followers hence seen stackelberg game slmfsg propose auctioneer leader slmfsg take first step choose suitable min auction price range pmin 
meanwhile follower game play best strategy choosing suitable response price offered auctioneer best determination rule determination rule proposed scheme executed following steps inspired rus set owners ess declare reservation price increasing order consider without loss generality rus submit reservation price along amount interested share sfcs auctioneer sfcs bidding prices arranged decreasing order sfcs submit auctioneer along quantity require iii auctioneer receives ordered information rus sfcs generates aggregated supply reservation price rus versus amount rus interested share demand curves reservation bids verses quantity needed using respectively auctioneer determines number participating sfcs rus satisfies intersection two curves using standard numerical method soon sfc determined intersection point shown fig important aspect auction mechanism determine number sfcs rus take part joint ownership ess note number sfcs rus determined following relationship holds rest ieee trans smart grid response strategy stem utility function captures benefit gain deciding amount shared offered price whereas auctioneer chooses price view maximize average cost savings sfcs network capture interaction auctioneer rus formally define slmfsg auctioneer consists set rus participating auction scheme auctioneer utility reaps choosing suitable strategy response price announce auctioneer iii strategy set average cost savings incurred sfc strategy chosen max auctioneer strategy pmin auctioneer proposed approach iteratively responses strategy chosen auctioneer independent rus set response affected offered price reluctance parameter initial reservation price however note auctioneer control decision making process rus sets auction price view maximize cost savings respect cost initial bidding price sfcs end target auctioneer assumed maximize average cost savings choosing appropriate price offer min max average savthe range ing auction price sfcs pay rus sharing ess total amount sfcs share rus note cost savings lower however conflicted fact lower may lead choice lower rus turn affect cost sfcs hence reach desirable solution set auctioneer rus continue interact game reaches stackelberg equilibrium utility function defines benefits attain sharing amount sfcs proposed reluctant parameter reservation price set mainly consists two parts first part utility terms revenue obtains sharing portion device second part hand negative impact terms liability stemming sharing sfc mainly due fact decides share amount storage space sfc use scap amount storage use term captures restriction usage reluctance parameter introduced design parameter measure degree unwillingness take part energy sharing particular higher value refers case reluctant take part sharing thus seen even sharing attains lower net benefit thus seen net benefit sharing utility function based assumption marginal utility suitable modeling benefits power consumers explained addition proposed utility function also possesses following properties utility increases amount price paid sharing per unit increases reluctance parameter increases becomes reluctant share consequently utility decreases iii particular price shares sfcs less interested becomes share joint ownership end particular price reluctance parameter objective max definition let consider game described utility average utility per sfc described via respectively reach solution game satisfies following set conditions max pmin hence according rus sfcs achieve best possible outcomes hence neither rus auctioneer 
incentive change strategies soon game reaches however achieving equilibrium solution pure strategies always guaranteed games therefore need investigate whether proposed possesses theorem always exists unique solution proposed slmfsg auctioneer participating rus set proof firstly note strategy set max eer continuous within range pmin hence always strategy auctioneer enable rus offer part ieee trans smart grid within limits sfcs secondly price utility function strictly concave respect hence min max price unique chosen bounded range maximize therefore evident soon scheme find unique average utility per sfc attains maximum value slmfsg consequently reach unique end first note amount achieves maximum utility response price obtained algorithm algorithm slmfsg reach initialization pmin auction price pmin pmax adjusts amount share according arg replacing value simple arithmetics auction price maximizes average cost savings sfcs found max end auctioneer computes average cost savings sfcs auctioneer record desirable price maximum average cost savings end end achieved guaranteed reach proposed slmfsg exclusive therefore unique thus theorem proved proof proposed algorithm note choice strategies rus emanate choice auctioneer shown always attain nonempty single value due bounded strategy set max pmin hand algorithm designed response choose strategy bounded range order maximize utility function end due bounded strategy set continuity respect confirmed always reach fixed point given therefore proposed algorithm always guaranteed reach unique slmfsg algorithm payment attain auctioneer information needs communicate considered auctioneer knowledge private information rus regard order decide suitable auction price beneficial rus sfcs auctioneer rus interact one another capture interaction design iterative algorithm implemented auctioneer rus distributed fashion reach unique proposed slmfsg algorithm initiates auctioneer sets auction price pmin optimal average cost saving per sfc iteration information offered auction price auctioneer plays best response submits choice auctioneer auctioneer hand receives information participating rus determines average cost savings per sfc knowledge reservation bids using auctioneer compares auctioneer updates optimal auction price one recently offered sends new choice price rus next iteration however auctioneer keeps price offers another new price rus next iteration iteration process continues conditions satisfied hence slmfsg reaches show process proposed algorithm algorithm allocation rule amount decides put market sharing response auction price determined auctioneer allocates quantity jointly shared sfcs according following rule max allotment excess must endure essentially rule emphasizes requirements sfcs exceed available space rus allow sfcs share put market however available exceeds total demand sfcs share peach fraction oversupply nonethless burden distributed different ways among participating rus instance burden distributed either proportionally amount shared sfcs proportionally theorem algorithm proposed algorithm always ieee trans smart grid reservation alternatively total burden also shared equally rus auction scheme proportional allocation proportional allocation fraction total burden allocated reservation price proportion implemented follows clear participants proposed auction scheme individually rational leads following corollary corollary proposed auction technique possesses individual rationality property rational owners rational customers actively participate 
mechanism gain higher utility theorem proposed auction mechanism incentive compatible truthful auction best strategy sfc replacing burden allocation determined proportion shared equal allocation according equal allocation bears equal burden proof validate theorem first note choice strategies rus always guaranteed converge unique proven theorem theorem confirms stability selections according owners auction process rus proposed case decide stable amount commodity supply share customers auction process always converges auction allocation commodity conducted according rules described therefore neither sfc intention falsify allocation adopt sharing storage space rus amount therefore auction process incentive compatible thus theorem proved oversupply important note although proportional allocation allows distribution oversupply according properties rus equal allocation suitable make auction scheme strategy proof strategy proofness important designing auction mechanisms encourages participating players lie private information reservation price essential acceptability sustainability mechanisms energy markets therefore use equal allocation rest paper adaptation case extend proposed scheme case assume sharing scheme works fashion time slot suitable time duration based type application hour considered time slot rus sfcs take part proposed sharing scheme decide parameters auction price amount ess needs shared however case amount shares time slot may affected burden needed bear previous time slot end first note number participating rus sfcs decided particular time slot via determination rule rest procedures payment allocation rules executed following descriptions section respectively respective time slot total number rus sfcs fixed rus sfcs participate modified auction scheme time slot determined respective reservation bidding prices time slot proposed auction process may evolve across different time slots based change amount participating may want share change total amount required sfcs different time slots discussing proposed modified auction scheme extended first define properties auction process note auction process executed always possibility owners might cheat amount storage wanted put market auction context need investigate whether proposed scheme beneficial enough individually rational rus motivated cheat incentive compatible auction executed individual rationality property first note players rus auctioneer behalf sfcs take part slmfsg maximize benefits terms respected utility choice strategies choice rus determine vector benefitted maximum hand strategy auctioneer choose price maximize savings sfcs accordingly rus auctioneer reach point game neither owners customers benefitted choosing another strategy slmfsg reaches end already proven theorem proposed auction process must possesses unique therefore subsequent outcome theorem certain loads lifts water pumps large apartment buildings easy schedule shared different users buildings hence focus time variation storage sharing process rus considered system please note reservation price indicates much wants paid forpsharing sfcs thus affects determination total total burden ieee trans smart grid households equipped dedicated battery sell stored electricity grid nonetheless also affected amount burden needed bear due oversupply spaces previous time slot end amount space offer sfcs defined max otherwise following parameters index time slot total number time slot reservation price time slot reservation price vector fraction space wants shares sfcs time 
slot vector space shared sfc total considered times maximum available sharing time slot bidding price sfc time slot reservation price vector sfc required space sfc time slot auction price time slot benefit achieves time slot average cost saving per sfc time slot burden shared participating time slot number participating sfcs modified auction scheme time slot number participating rus modified auction scheme time slot end utility function average cost savings per sfc time slot defined sfc hand decides amount needs share rus based random requirement shared facilities available shared space time slot random generation renewable energy sources appropriate hence renewables facility requirement assume fraction shared available previous time slot negligible requirement assumed random time slot considering random nature renewable generation energy requirement shared facilities note assumption particularly valid sfc uses shared ess previous time slot meeting demand shared facilities use considered time slot nonetheless please note assumption imply relationship auction process across different time slots auction process one time slot still depends time slots due dependency via end modeled proposed modified auction scheme studied section adopted time slot view maximize important note reservation price vector bidding price vector sfc modeled existing pricing schemes price constitute solutions proposed modified auction scheme condition comprises solution vector spaces shared participating rus time slot auction price vector auction rules adopted time slot proposed case similar rules discussed section hence solution proposed modified auction scheme timevarying environment also possesses incentive compatibility individual rationality properties time slot pkt time slot determination rule proposed scheme determines number participating rus sfcs based reservation bidding prices time slot number participation also motivated available space requirement sfc however unlike static case environment offered space time slot influenced contribution auction process previous time slot instance receives burden time slot willingness share space time slot may reduce also affected maximum amount available simplicity assume change different time slots therefore offer share amount space sfcs time slot share amount time slot analogous example arrangement found fit scheme device ase tudy numerical case studies consider number rus different blocks smart community interested allowing sfcs community jointly share devices stress large number sfcs system reservation bidding prices vary significantly one another therefore difficult find intersection point determine highest please note time slot related similar manner related static case however unlike static case execution auction process time slot affected value parameters particular time slot shared ieee trans smart grid table change average utility achieved sfc network according algorithm due change reluctance sharing one kwh sfc reluctance parameter number iteration average utility per net benefit average utility sfc average cost savings average utility sfc put market sharing seen figure one hand reach much quicker hand interest sharing observed due fact interaction auctioneer rus continues auction price updated iteration regard auction price becomes larger reservation price put reserve market intention shared sfcs due reason put ess market much sooner iteration higher reservation prices whose interest sharing reaches auction price encouraging enough share ess iterations 
unfortunately utilities convenient enough take part auction process therefore shared fractions note demonstration convergence slmfsg unique subsequently demonstrates proofs theorem theorem theorem corollary strongly related explained previous section would like investigate reluctance parameters rus may affect average utility algorithm thus affecting decisions share end first determine average utility experienced sfc reluctance parameter considering outcome benchmark show effect different reluctance parameters achieved average benefits sfc table demonstration property necessary order better understand working principle designed technique sharing according table reluctance increases becomes uncomfortable lower utility put market jointly owned sfcs consequence also affects average utility achieved sfc shown table reduction average utilities per respectively compared average utility achieved every ten times reduction reluctance parameter similar settings reduction average utility sfcs respectively therefore proposed scheme enable rus put storage auction market related reluctance sharing small note although current investment cost batteries high compared relative short life times expected battery costs near future become popular addressing number iteration fig convergence algorithm average utility per sfc reaches maximum wants put market share reaches steady state level maximize benefits reservation price pmax according determination rule paper limit ourself around rus however rus fact cover large community aggregation discussed assumed group households household equipped battery capacity hour kwh reluctance parameter rus assumed similar taken range important note considered design parameter proposed scheme used map reluctance share sfcs reluctance sharing affected parameters like capacity condition environment applicable requirement considering different system parameters proposed scheme capture two extremes reluctant highly reluctant required electricity storage sfc assumed within range kwh nevertheless required sharing could different usage pattern users changes since type ess associated cost used different rus vary significantly choices reservation price share ess sfcs vary considerably well context consider reservation price set sfc taken range important note chosen parameter values particular study may vary according availability number rus requirements sfcs trading policy time country first show convergence algorithm slmfg fig case study assume five sfcs smart grid community taking part auction process eight rus fig first note proposed slmfg reaches interations average cost savings per sfc reaches maximum hence convergence speed seconds reasonable nonetheless interesting property observed examine choice ieee trans smart grid sfcs would put higher burden rus carry consequence relative utility auction lower nevertheless requirement sfcs higher sharing brings significant benefits rus seen fig hand higher reluctance rus tend share lower amount enables endure lower burden case lower demands sfcs consequently enhances achieved utility nonetheless requirement higher sfcs utility reduces subsequently compared rus lower reluctance parameters thus observing effects different average utility per fig understand total required smaller rus higher reluctance benefit vice versa illustrates fact even rus high unwillingness share ess beneficial sfcs system required ess small however higher requirement sfcs would benefit rus lower reluctances interested sharing achieve higher average utilities discuss 
computational complexity proposed scheme greatly reduced determination rule modified auction scheme rule determines actual number participating rus sfcs auction also note determining number participating sfcs rus auctioneer iteratively interacts rus sets auction price view increase average savings sfc therefore main computational complexity modified auction scheme stems interactions auctioneer participating rus decide auction price context computational complexity problem falls within category single leader multiple follower stackelberg game whose computational complexity approximated increase linearly number followers shown reasonable numerous studies hence computational complexity feasible adopting proposed scheme insight properties proposed auction scheme demonstrate technique benefit rus smart network compared existing allocation schemes equal distribution fit schemes essentially allocation scheme allows sfcs meet total storage requirements sharing total requirement equally participating rus assume shared amount exceeds total amount reservation storage puts market share full reservation amount fit popular scheme energy trading consumers grid assume prefers sell storage amount energy grid fit price rate instead sharing fraction storage sfc end resulting average utilities achieve sharing space sfcs adopting proposed fit schemes shown table table first note amount required sfcs increases average utility achieved per willing share average utility achieved rus less willing share supply demand supply demand required battery space sfcs kwh fig effect change required amount sfcs achieved average utility per intermittency renewables foreseen near future proposed scheme applicable gain benefit storage sharing thus motivate rus keep small according observation table said reluctance parameters rus change either different days different time slots performance system terms average utility per average cost savings per sfc change accordingly given system parameters participating rus put amount auction market distributed according allocation rule described regard investigate average utility altered total storage amount required sfcs changes network particular case considered total requirement sfcs assumed general shown fig average utility initially increases increase required sfcs eventually becomes saturated stable value due fact required amount increases share reserved put market sfcs determined auction price slmfsg hence utility increases however particular fixed amount puts market share consequently shared amount reaches maximum even increase requirement sfcs share therefore utility becomes stable without increment interestingly proposed scheme seen fig favors rus higher reluctance requirement sfcs relatively lower favors rus lower reluctance higher demands due way designed proposed allocation scheme dictated burden allocation note according lower put higher amount market share however total required amount lower ieee trans smart grid table comparison change average utility per smart grid system required total amount energy storage required sfcs varies required space sfcs average utility net benefit equal distribution scheme average utility net benefit fit scheme average utility net benefit proposed scheme percentage improvement compared scheme percentage improvement compared fit scheme shared kwh available share kwh also increases cases reason increment explained fig also studied cases proposed scheme shows considerable performance improvement compared fit schemes interesting trend performance 
improvement observed compare performance proposed scheme fit performances requirements particular performance proposed scheme higher requirement increases however improvement relatively less significant requirement switches change performance explained follows proposed scheme seen fig amount shared participating influenced reluctance parameters hence even demand sfcs could larger rus may choose share spaces reluctance limited regard rus current case study increase share requirement sfcs increases turn produces higher revenue rus furthermore rus choice ess reach saturation increase demand case affect share consequence performance improvement noticeable previous four cases nonetheless considered cases auction process performs superior scheme average performance improvement clearly shows value proposed methodology adopt joint sharing smart grid performance improvement respect fit scheme average due difference determined auction price price per unit energy fit scheme finally show decision making process system affected decision previous time slot total storage requirement sfcs total number time slots considered show performance analysis four context assume five rus system kwh respectively share sfcs total requirements sfcs considered four time slot please note numbers considered case study may different values different scenarios fig show available rus begining time slot much going share modified auction scheme adopted time slot simple analysis assume shares total available share remaining time slots number time slot number time slot fig demonstration proposed modified auction scheme extended time varying system reservation amount varies rus varies different time slots based sharing amount previous time slot total required storage sfcs chosen randomly due reasons explained section reservation prices considered change one time next based predefined time use price scheme seen fig time slot share available ess sfc whereby rus share ess due reasons explained fig since total requirement therefore neither needs carry burden time slot shares ess meet requirement sfc requirement lower supply needs carry burden kwh similarly time slot take part energy auction scheme enough share sfc however share time slot stems burden oversupply time slot scheme shown time slot available rus already shared sfcs end time slot thus proposed modified auction scheme successfully capture time variation scheme modified given section onclusion paper modeled modified auction based joint energy storage ownership scheme number residential units rus shared facility controllers sfcs smart grid designed system discussed determination payment allocation rule auction payment rule scheme facilitated stackelberg game slmfsg ieee trans smart grid auctioneer rus properties auction scheme slmfsg studied shown proposed auction possesses individual rationality incentive compatibility properties leveraged unique stackeberg equilibrium slmfsg proposed algorithm slmfsg shown guaranteed reach also facilitates auctioneer rus decide auction price well amount put market joint ownership compelling extension proposed scheme would study feasibility scheduling loads lifts water machines shared space another interesting research direction would determine large number sfcs rus different reservation bidding prices take part modified auction scheme one potential way look problem cooperative sfcs rus may cooperate decide amount reservation bidding price would like put market participate auction benefit sharing another important yet interesting extension 
work would investigate quantify reluctance participate sharing quantification reluctance convenience also enable practical deployment many energy management schemes already described literature tushar chai yuen smith wood yang poor energy management distributed energy resources smart grid ieee trans ind vol apr klemperer auction theory guide literature journal economic surveys vol july deng song han incentive mechanism demand side management smart grid using auction ieee trans smart grid vol may vickrey counterspeculation auctions competitive sealed tenders journal finance vol mar chai chen yang zhang demand response management multiple utility companies game approach ieee trans smart grid vol march zhu zhang gjessing dependable demand response management smart grid stackelberg game approach ieee trans smart grid vol march siano demand response smart survey elsevier renewable sustainable energy reviews vol feb denholm ela kirby milligan role energy storage renewable electricity generation national renewable energy laboratory nrel colorado usa technical report jan cao jiang zhang reducing electricity cost smart appliances via energy buffering framework smart grid ieee trans parallel distrib vol sep sechilariu wang locment building integrated photovoltaic system energy storage smart grid communication ieee trans ind vol april carpinelli celli mocci mottola pilo proto optimal integration distributed energy storage devices smart grids ieee trans smart grid vol june kim ren van der schaar lee bidirectional energy trading residential load scheduling electric vehicles smart grid ieee sel areas vol july roy leemput geth salenbien buscher driesen apartment building electricity system impact operational electric vehicle charging strategies ieee trans sustain energy vol jan ding zhong liu xie phev charging discharging cooperation networks coalition game approach ieee internet things vol dec lin leung optimal scheduling regulation service ieee internet things vol dec tan wang integration hybrid electric vehicles residential distribution grid based intelligent optimization ieee trans smart grid vol july igualada corchero heredia optimal energy management residential microgrid including system ieee trans smart grid vol july geth tant haesen driesen belmans integration energy storage distribution grids ieee power energy society general meeting minneapolis july nykamp bosman molderink hurink smit value storage distribution grids competition cooperation stakeholders ieee trans smart grid vol sep tushar yuen smith poor price discrimination energy trading smart grid game theoretic approach ieee trans smart grid appear yuen hassan tushar wen wood liu demand response management residential smart grid theory practice ieee section smart grids hub interdisciplinary research vol naeem shabbir hassan yuen ahmed tushar understanding customer behavior demand response management program ieee section smart grids hub interdisciplinary research vol liu yuen zhang xie energy consumption management heterogeneous residential demands smart grid ieee trans smart grid vol june doi eferences silvestre graditi sanseverino generalized framework optimal sizing distributed energy resources microgrids using swarm approach ieee trans ind vol feb llorens jurado control hybrid system integrating renewable energies hydrogen batteries ieee trans ind vol may fang misra xue yang smart grid new improved power grid survey ieee commun surveys vol oct liu yuen huang hassan wang xie ratio constrained management consumer preference residential 
smart grid ieee sel topics signal vol jun liu yuen hassan huang xie electricity cost minimization microgrid distributed energy resources different information availability ieee trans ind vol apr hassan khalid yuen tushar customer engagement plans peak load reduction residential smart grids ieee trans smart grid vol hassan khalid yuen huang pasha wood kerk framework minimum user participation rate determination achieve specific demand response management objectives residential smart grids elsevier international journal electrical power energy systems vol tushar yuen huang smith poor cost minimization charging stations photovoltaics approach classification ieee trans intell transp vol doi huang tushar yuen otto quantifying economic benefits ancillary electricity market smart appliances singapore households elsevier sustainable energy grids networks vol mar wang bale sun active demand response using shared energy storage household energy management ieee trans smart grid vol dec ieee trans smart grid wang yuen chen hassan ouyang demand scheduling delay tolerant applications elsevier journal networks computer applications vol july zhang chen data gathering optimization dynamic sensing routing rechargeable sensor network trans vol june doi zhang cheng shi chen optimal dos attack scheduling wireless networked control system ieee trans control syst vol doi wang wang grid power peak shaving valley filling using systems ieee trans power vol july tushar saad poor smith economics electric vehicle charging game theoretic approach ieee trans smart grid vol dec gkatzikis koutsopoulos salonidis role aggregators smart grid demand response markets ieee sel areas vol july tushar zhang smith poor prioritizing consumers smart grid game theoretic approach ieee trans smart grid vol may saad han poor noncooperative game double energy trading phevs distribution grids proc ieee int conf smart grid commun smartgridcomm brussels belgium bradley frank design demonstrations sustainability impact assessments hybrid electric vehicles renewable sustainable energy reviews vol jan derin ferrante scheduling energy consumption local renewable dynamic electricity prices proc workshop green smart embedded syst technol methods tools stockholm sweden apr huang sycara design double auction computational intelligence vol feb oduguwa roy optimisation using genetic algorithm proc ieee international conference artificial intelligence systems geelong australia feb samadi schober wong jatskevich optimal pricing algorithm based utility maximization smart grid proc ieee int conf smart grid commun smartgridcomm gaithersburg guojun yongsheng xiaoqin xicong qianggang niancheng study proportional allocation electric vehicles conventional fast charge methods distribution network proc china international conference electricity distribution ciced shanghai china sept tsikalakis zoulias caralis panteri carvalho tariffs promotion energy storage technologies energy policy vol mar breakthrough electricity storage new large powerful redox flow battery science daily march retrieved august online available ali advancing may published magazine online available http garun reservations tesla powerwall already sold accessed may online available http lipa proposal concerning modifications lipa tariff electric service accessed april online available http | 3 |
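The modified auction described in the preceding paper proceeds in three stages: a determination rule that truncates the sorted supply and demand curves where they cross, a payment rule in which the auctioneer (leader) sweeps candidate prices while the participating RUs (followers) reply with best responses, and an allocation rule that shares any oversupply burden equally. The Python sketch below mirrors that structure only; the best-response rule and the cost-saving objective used here are hypothetical placeholders, since the exact utility functions are not reproduced in the excerpt, and all names (RU, SFC, determination, best_response, stackelberg_price, allocate) are illustrative.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class RU:                       # residential unit (storage owner)
    reserve_price: float        # minimum acceptable price per unit of shared space
    capacity: float             # maximum space the unit will put on the market
    reluctance: float           # design parameter measuring unwillingness to share

@dataclass
class SFC:                      # shared facility controller (storage customer)
    bid_price: float            # maximum price it is willing to pay per unit
    demand: float               # storage space it needs

def determination(rus: List[RU], sfcs: List[SFC]) -> Tuple[List[RU], List[SFC]]:
    """Sort owners by ascending reservation price and customers by descending
    bid, then admit participants only while every kept owner asks no more
    than every kept customer bids (a simple stand-in for locating the
    intersection of the supply and demand curves)."""
    rus = sorted(rus, key=lambda r: r.reserve_price)
    sfcs = sorted(sfcs, key=lambda s: s.bid_price, reverse=True)
    part_rus: List[RU] = []
    part_sfcs: List[SFC] = []
    supply = demand = 0.0
    i = j = 0
    while i < len(rus) and j < len(sfcs) and rus[i].reserve_price <= sfcs[j].bid_price:
        if supply <= demand:                  # admit the side that is currently short
            supply += rus[i].capacity
            part_rus.append(rus[i]); i += 1
        else:
            demand += sfcs[j].demand
            part_sfcs.append(sfcs[j]); j += 1
    return part_rus, part_sfcs

def best_response(ru: RU, price: float) -> float:
    """Hypothetical follower strategy: share nothing below the reservation
    price, otherwise an amount growing with the price premium and damped by
    the reluctance parameter (placeholder, not the paper's exact form)."""
    if price <= ru.reserve_price:
        return 0.0
    frac = (price - ru.reserve_price) / (price + ru.reluctance)
    return min(ru.capacity, ru.capacity * frac)

def stackelberg_price(part_rus: List[RU], part_sfcs: List[SFC],
                      p_min: float, p_max: float, steps: int = 200):
    """Leader iteration: sweep candidate auction prices, collect the followers'
    responses, and keep the price giving the largest proxy cost saving per
    customer (average bid minus auction price, times the space obtained)."""
    best_p, best_saving, best_shares = p_min, float("-inf"), []
    for k in range(steps + 1):
        p = p_min + (p_max - p_min) * k / steps
        shares = [best_response(r, p) for r in part_rus]
        avg_bid = sum(s.bid_price for s in part_sfcs) / max(len(part_sfcs), 1)
        saving = (avg_bid - p) * sum(shares) / max(len(part_sfcs), 1)
        if saving > best_saving:
            best_p, best_saving, best_shares = p, saving, shares
    return best_p, best_shares

def allocate(shares: List[float], part_sfcs: List[SFC]):
    """Allocation rule with equal burden sharing: customers never receive more
    than the total demand, and any oversupply is carried equally by the
    owners (simplified; negative allocations are clamped at zero)."""
    offered = sum(shares)
    needed = sum(s.demand for s in part_sfcs)
    if offered <= needed:
        return shares, [0.0] * len(shares)
    burden = (offered - needed) / max(len(shares), 1)
    return [max(s - burden, 0.0) for s in shares], [burden] * len(shares)
```

A toy run would construct a few RU and SFC records, call determination, feed the admitted participants to stackelberg_price, and finally allocate the resulting shares; the real scheme replaces the placeholder response and objective with the utility and average cost-saving functions defined in the paper.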
jan direct sum decomposability polynomials factorization associated forms maksym fedorchuk abstract homogeneous polynomial discriminant interpret direct sum decomposability polynomial terms factorization properties macaulay inverse system milnor algebra leads criterion direct sum decomposability polynomial algorithm computing direct sum decompositions either characteristic large positive characteristic polynomial factorization algorithms exist also give simple necessary criteria direct sum decomposability arbitrary homogeneous polynomials arbitrary apply prove many interesting classes homogeneous polynomials direct sums introduction homogeneous polynomial called direct sum linear change variables written sum two polynomials disjoint sets variables homogeneous polynomial isolated hypersurface singularity geometric decomposition stems classical theorem describes monodromy operator singularity tensor product monodromy operators singularities direct sums also subject symmetric strassen additivity conjecture postulating waring rank sum waring ranks see example paper give new criterion recognizing smooth direct sum either characteristic large positive characteristic problem criterion arbitrary smooth singular form successfully addressed earlier kleppe arbitrary algebraically closed works interpret direct sum decomposability refer homogeneous polynomial form call form variables smooth smooth hypersurface see terminology notational conventions maksym fedorchuk form terms apolar ideal see details particular algebraically closed gives criterion recognizing direct sum terms graded betti numbers however none works seem give method computing direct sum decomposition exists criterion used see example although criterion works smooth forms arbitrary either characteristic large characteristic leads algorithm direct sum decompositions polynomial factorization algorithms exist algorithm given section recall smooth form degree variables one assign degree form dual variables called associated form associated form macaulay inverse system milnor algebra simply means apolar ideal coincides jacobian ideal leads observation smooth form written sum two forms disjoint sets variables associated form decomposes product two forms disjoint sets dual variables lemma example scalar main purpose paper prove converse statement thus establish criterion direct sum decomposability smooth form terms factorization properties associated form see theorem lemma give simple necessary condition valid arbitrary direct sum decomposability arbitrary form terms gradient point applied section prove wide class homogeneous forms contains direct sums theorem show simple necessary condition fact form git stable algebraically closed notation conventions let span subset space denoted representation multiplicative group every denote action weight let vector space dimk set sym sym homogeneous elements called forms action also known apolar pairing decomposability polynomials associated forms namely basis dual basis pairing given given homogeneous nonzero apolar ideal space essential variables char char pairing perfect every symd furthermore graded gorenstein artin local ring socle degree theorem macaulay establishes bijection graded gorenstein artin quotients socle degree elements see lemma exercise let basis gradient point jacobian ideal milnor algebra remark even though allow positive characteristic take divided power algebra appendix reader might anticipated reason several places avoid impose condition char large enough zero case divided power 
algebra isomorphic needed degree homogeneous ideal denote closed subscheme say form smooth hypersurface smooth course equivalent algebraic closure locus smooth forms denoted direct sums products recall called direct sum form type direct sum decomposition nonzero words direct sum choice basis maksym fedorchuk recall also called degenerate exists analogy direct sums call nonzero form direct product direct sum decomposition sym sym words nonzero homogeneous sym direct product choice basis furthermore call direct product decomposition balanced deg deg note factorization direct product decomposition remark note roles interchangeable apolar ideal space essential variables notation char char direct sum write furthermore char char dual dimk dimk say grass symd balanced direct sum direct sum decomposition elements grass dimk symd grass dimk symd symd symd symd associated forms recall theory associated forms developed let grass res open subset grass parameterizing linear spaces form regular sequence note char char smooth grass res every grass res ideal complete intersection ideal graded gorenstein artin local ring socle degree suppose char char macaulay theorem exists unique scaling form form called associated form alper isaev systematically studied section particular although given proof applies whenever char char decomposability polynomials associated forms assignment gives rise associated form morphism grass res pdn smooth form set following eastwood isaev call associated form property means macaulay inverse system milnor algebra summarizing char char max following commutative diagram morphisms grass res remark alper isaev associated form element achieve choosing canonical generator socle given jacobian determinant purposes consider scalar main results theorem let suppose either char char max let smooth form following equivalent direct sum balanced direct sum balanced direct product direct product admits defined admits defined moreover basis factors decomposes dual basis maksym fedorchuk recall form decomposition called maximally fine direct sum decomposition direct sum nondegenerate forms degree kleppe established maximally direct sum decomposition unique theorem use theorem give alternate proof result smooth forms deducing fact polynomial ring ufd proposition let suppose either char char let smooth form unique maximally fine direct sum decomposition theorem let suppose algebraically closed field char following equivalent git stable direct sum morphism grass positive fiber dimension strictly semistable dimk consequently locus direct sums closed stable locus prior works kleppe teitler prove form algebraically closed apolar ideal minimal generator degree either direct sum limit direct sums case contains element form degree forms variables respectively since form given equation visibly particular singular translates computable criterion recognizing whether smooth form direct sum algebraically closed kleppe uses quadratic part apolar ideal associative algebra dimension base milnor algebra proves arbitrary direct sum decompositions bijection complete sets orthogonal idempotents decomposability polynomials associated forms key step proof direct sum criterion jordan normal form decomposition certain linear operator general requires solving characteristic equation similarly complete set orthogonal idempotents requires solving system quadratic equations makes challenging turn algorithm direct sum decompositions exist case linear factor theorem proved proposition using criterion smith stong 
indecomposability gorenstein artin algebras connected sums proof linear factor case statement higher degree factors appear new corollaries generalizations corollary whose proof relies theorem saying apolar ideals generic determinant permanent generated degree approach independent results proofs decomposability criteria implications statements theorems easy observations main separated lemma others found recent papers remaining key ingredient completes main circle implications separated proposition lemma restrictions suppose direct sum following hold subgroup fixes following decompositions dimk grass balanced direct sum dimk family pairwise nonproportional forms proof obvious namely suppose basis subgroup acting weight weight clearly suppose dimk see dimk dimk thus balanced direct sum proves taking proves maksym fedorchuk proof theorem implications lemma next prove suppose decomposes balanced direct sum basis follows every using assumption char conclude direct sum basis equivalence proved proposition concludes proof equivalence three conditions next prove suppose direct product decomposition basis let dual basis suppose xdnn smallest respect graded reverse lexicographic order monomial degree lie since zndn must appear nonzero deg hand lemma follows deg symmetry also deg conclude inequalities must equalities balanced direct product decomposition alternatively consider diagonal action acts follows homogeneous respect action weight deg deg however relevant parts proof theorem show numerical criterion semistability forces deg deg turn last two conditions first morphism equivariant locally closed immersion stabilizer preserving proves equivalence implication follows proof theorem shows smooth gradient point direct sum note even though stated relevant parts proof theorem use lemma remains valid char char fact smooth form must satisfy numerical criterion stability decomposability polynomials associated forms proof theorem theorem every git stable gradient point polystable furthermore admits direct sum moreover proposition shows morphism git quotients grass injective proves every stable dimension equals dimension stabilizer concludes proof equivalences fact locus direct sums closed follows upper semicontinuity domain dimensions proposition let suppose field char char element grass symd res balanced direct sum balanced direct product moreover basis factors balanced direct product decomposes balanced direct sum dual basis proof forward implication easy observation consider balanced direct sum grass res nonzero scalar see lemma also follows fact level algebras suppose balanced direct product basis deg deg let dual basis let complete intersection ideal spanned elements evident apolar ideal also following observation maksym fedorchuk claim dimk dimk proof symmetry prove second statement since spanned length regular sequence degree forms dimk suppose strict inequality let generated degree least minimal generators degree follows top degree strictly less lemma using gives thus every monomial appears contradicts point apply prop conclude contains regular sequence length contains regular sequence length shows decomposes balanced direct sum basis however sake proceed give direct argument claim exists regular sequence regular sequence let grass res let ideal generated going prove conclude proof proposition since char char macaulay theorem applies prove need show ideals coincide degree prove decomposability polynomials associated forms since regular sequence similarly together gives set remains show end consider 
since working modulo assume similarly assume construction using conclude proof proposition proof proposition char case vacuous since smooth binary cubic direct sum cases char implies char max suppose two maximally direct sum decompositions pdn suppose shares irreducible factors one uniqueness factorization must factorization direct product must direct sum theorem contradicting maximality assumption therefore shares irreducible factor one symmetry shares irreducible factor one maksym fedorchuk follows reordering thus conclude using forces necessary conditions direct sum decomposability next two results give easily necessary conditions arbitrary form direct sum hold arbitrary restriction characteristic keep notation theorem suppose form let dimk factor dimk direct sum repeated factor direct sum corollary suppose form dimk linear factor direct sum proof theorem apply lemma suppose basis dimk dimk let subgroup acting weight weight decomposition since dimk dimk dimk follows dimension considerations nonzero multiple belongs one two thus homogeneous respect follows either forces either respectively contradiction suppose direct sum repeated factor let since nonzero multiple belongs obtain contradiction next result needs following definition given basis nonzero state set xdnn decomposability polynomials associated forms words state set monomials appearing nonzero set theorem suppose let suppose basis following conditions hold char xdnn graph vertices edges given connected direct sum remark words says two partials share common monomial says monomial whose nonzero partials appear partials must appear immediate corollary theorem show generic determinant permanent polynomials generic polynomials well polynomial state direct sums corollary ppolynomials direct sums let suppose direct sum corollary polynomials direct sums let suppose set direct sum proof corollaries easy see conditions orem proof theorem char set divides set monomials whose gradient point trivial maksym fedorchuk suppose direct sum note condition implies dimk lemma exists form since condition must since condition implies fact comparing second partials using condition conclude obtain contradiction finding balanced direct product decomposition algorithmically section show theorem reduces problem direct sum decomposition given smooth form polynomial factorization problem begin suppose given smooth form basis associated form computed dual basis form apolar apply theorem need determine decomposes balanced direct product basis following simple lemma explains notation lemma suppose char char max smooth associated form balanced direct product factorization equivalently moreover case decomposes balanced direct product basis dual basis compatible direct sum decomposition equation proof equivalence two conditions follows fact dual claim follows theorem observing factorization algorithm direct sum decompositions suppose either char char max exists polynomial factorization algorithm let step compute degree smooth stop otherwise continue decomposability polynomials associated forms step compute dual step compute irreducible factorization check existence balanced direct product factorizations using lemma exist direct sum otherwise direct sum step every balanced direct product factorization lemma gives basis decomposes direct sum algorithm implemented macaulay package written justin kim zihao fang source code available upon request follows give examples algorithm action remark jaroslaw pointed already step algorithm computationally highly expensive large 
however reasonably fast small example taking seconds example binary quartics suppose algebraically closed characteristic every smooth binary quartic standard form scalar associated form clearly singular values fact balanced direct product direct sum namely scalars note associated form balanced direct product hence direct sum theorem since apolar ideal example illustrates direct sum decomposability criterion fails example consider following element maksym fedorchuk associated form one checks balanced direct product factorization follows direct sum fact projectively equivalent acknowledgments author grateful jarod alper introduction subject alexander isaev numerous stimulating discussions inspired work zach teitler questions motivated results section author partially supported nsa young investigator grant alfred sloan research fellowship justin kim zihao fang wrote macaulay package computing associated forms supported boston college undergraduate research fellowship grant direction author references jarod alper alexander isaev associated forms classical invariant theory applications hypersurface singularities math jarod alper alexander isaev associated forms hypersurface singularities binary case reine angew appear doi weronika jaroslaw johannes kleppe zach teitler apolarity direct sum decomposability polynomials michigan math michael eastwood alexander isaev extracting invariants isolated hypersurface singularities moduli algebras math david eisenbud commutative algebra view toward algebraic geometry volume graduate texts mathematics new york maksym fedorchuk git semistability hilbert points milnor algebras math maksym fedorchuk alexander isaev stability associated forms preprint decomposability polynomials associated forms anthony iarrobino vassil kanev power sums gorenstein algebras determinantal loci volume lecture notes mathematics berlin johannes kleppe additive splittings homogeneous polynomials thesis marcos sebastiani thom sur monodromie invent sepideh masoumeh apolarity determinants permanents generic matrices commut algebra larry smith stong projective bundle ideals duality algebras pure appl algebra zach teitler conditions strassen additivity conjecture illinois fedorchuk department mathematics boston college commonwealth ave chestnut hill usa address | 0 |
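The row above quotes a connectivity criterion: build a graph whose vertices are the variables and join x_i, x_j whenever the partials dF/dx_i and dF/dx_j share a monomial; connectivity of this graph, together with the theorem's other hypotheses, obstructs a direct-sum decomposition in the given coordinates. The sketch below (SymPy + NetworkX, not the Macaulay2 package mentioned in the text) only constructs that graph and reports whether it is connected; function names and the two toy forms are my own, and connectivity alone decides nothing without the remaining hypotheses.

```python
# Minimal sketch of the "partials share a monomial" graph from the criterion quoted above.
# Assumptions: SymPy/NetworkX available; the edge rule is my reading of the (garbled) theorem.
import sympy as sp
import networkx as nx

def partials_graph(F, variables):
    """Vertices = variables; edge (x_i, x_j) iff dF/dx_i and dF/dx_j share a monomial."""
    polys = [sp.Poly(sp.diff(F, v), *variables) for v in variables]
    G = nx.Graph()
    G.add_nodes_from(variables)
    for i, p in enumerate(polys):
        for j in range(i + 1, len(polys)):
            if set(p.monoms()) & set(polys[j].monoms()):   # a common monomial
                G.add_edge(variables[i], variables[j])
    return G

x, y, z = sp.symbols('x y z')
# x^3 + y^3 + z^3 is an obvious direct sum: no two partials share a monomial, graph disconnected.
print(nx.is_connected(partials_graph(x**3 + y**3 + z**3, (x, y, z))))      # False
# Here every pair of partials shares a monomial, so the graph is connected.
print(nx.is_connected(partials_graph(x**2*y + x**2*z + x*y*z, (x, y, z))))  # True
```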
graphvae towards generation small graphs using variational autoencoders martin simonovsky nikos komodakis feb abstract deep learning graphs become popular research topic many applications however past work concentrated learning graph embedding tasks contrast advances generative models images text possible transfer progress domain graphs propose sidestep hurdles associated linearization discrete structures decoder output probabilistic fullyconnected graph predefined maximum size directly method formulated variational autoencoder evaluate challenging task molecule generation introduction deep learning graphs recently become popular research topic bronstein useful applications across fields chemistry gilmer medicine ktena computer vision simonovsky komodakis past work concentrated learning graph embedding tasks far encoding input graph vector representation stark contrast advances generative models images text seen massive rise quality generated samples hence intriguing question one transfer progress domain graphs decoding vector representation moreover desire method mentioned past however learning generate graphs difficult problem methods based gradient optimization graphs discrete structures unlike sequence text generation graphs arbitrary connectivity clear best way linearize construction sequence steps hand learning order paris est des ponts paristech champs sur marne france correspondence martin simonovsky tal construction involves discrete decisions differentiable work propose sidestep hurdles decoder output probabilistic graph predefined maximum size directly probabilistic graph existence nodes edges well attributes modeled independent random variables method formulated framework variational autoencoders vae kingma welling demonstrate method coined graphvae cheminformatics task molecule generation molecular datasets challenging convenient testbed generative model easily allow qualitative quantitative tests decoded samples method applicable generating smaller graphs performance leaves space improvement believe work important initial step towards powerful efficient graph decoders related work graph decoders graph generation largely unexplored deep learning closest work johnson incrementally constructs probabilistic multi graph world representation according sequence input sentences answer query model also outputs probabilistic graph assume prescribed order construction transformations available formulate learning problem autoencoder learns produce scene graph input image construct graph set object proposals provide initial embeddings node edge use message passing obtain consistent prediction contrast method generative model produces probabilistic graph single opaque vector without specifying number nodes structure explicitly related work deep learning includes random graphs erdos albert stochastic blockmodels snijders nowicki state transition matrix learning gong xiang graphvae towards generation small graphs using variational autoencoders figure illustration proposed variational graph autoencoder starting discrete attributed graph nodes representation propylene oxide stochastic graph encoder embeds graph continuous representation given predefined point latent space novel graph decoder outputs probabilistic graph nodes discrete samples may drawn process conditioned label controlled sampling test time reconstruction ability autoencoder facilitated approximate graph matching aligning discrete data decoders text common discrete representation generative models usually trained maximum 
likelihood fashion teacher forcing williams zipser avoids need backpropagate output discretization feeding ground truth instead past sample step bengio argued may lead expose bias possibly reduced ability recover mistakes recently efforts made overcome problem notably computing differentiable approximation using gumbel distribution kusner bypassing problem learning stochastic policy reinforcement learning work also circumvents problem namely formulating loss probabilistic graph molecule decoders generative models may become promising novo design molecules fulfilling certain criteria able search continuous embedding space olivecrona mind propose conditional version model molecules intuitive representation graphs field resort textual representations fixed syntax smiles strings exploit recent progress made text generation rnns olivecrona segler syntax brittle many invalid strings tend generated recently addressed kusner incorporating grammar rules decoding encouraging approach guarantee semantic chemical validity similarly method method approach task graph generation devising neural network able translate vectors continuous code space graphs main idea output probabilistic graph use standard graph matching algorithm align ground truth proposed method formulated framework variational autoencoders vae kingma welling although forms regularized autoencoders would equally suitable makhzani briefly recapitulate vae continue introducing novel graph decoder together appropriate objective variational autoencoder let graph specified adjacency matrix edge attribute tensor node attribute matrix wish learn encoder decoder map space graphs continuous embedding see figure probabilistic setting vae encoder defined variational posterior decoder generative distribution learned parameters furthermore prior distribution imposed latent code representation regularization use simplistic isotropic gaussian prior whole model trained minimizing upper bound negative log kingma welling log graphvae towards generation small graphs using variational autoencoders first term reconstruction loss enforces high similarity sampled generated graphs input graph second term regularizes code space allow sampling directly instead later dimensionality usually fairly small autoencoder encouraged learn compression input instead learning simply copy given input regularization independent input space reconstruction loss must specifically designed input modality following introduce graph decoder together appropriate reconstruction loss probabilistic graph decoder graphs discrete objects ultimately pose challenge encoding demonstrated recent developments graph convolution networks gilmer graph generation open problem far related task text sequence generation currently dominant approach prediction bowman however graphs arbitrary connectivity clear way linearize construction sequence hand iterative construction discrete structures training without supervision involves discrete decisions differentiable therefore problematic fortunately task become much simpler restrict domain set graphs maximum nodes fairly small practice order tens assumption handling dense graph representations still computationally tractable propose make decoder output probabilistic nodes effectively graph sidesteps problems mentioned probabilistic graphs existence nodes edges modeled bernoulli variables whereas node edge attributes multinomial variables discussed work continuous attributes could easily modeled gaussian variables represented mean variance assume variables 
independent thus probaeach tensor representation bilistic interpretation specifically predicted adjacency contains node probabilities matrix edge probabilities nodes edge ate indicates class probabilities tribute tensor edges similarly node attribute matrix contains class probabilities nodes decoder deterministic architecture simple perceptron mlp three outputs last layer sigmoid activation function used compute whereas softmax applied obtain respectively test time often interested obtained discrete point estimate note taking argmax result discrete graph less nodes reconstruction loss given particular discrete input graph nodes nodes probabilistic reconstruction evaluation equation requires computation likelihood since particular ordering nodes imposed either matrix representation graphs invariant permutations nodes comparison two graphs hard however approximate graph matching described subsection obtain binary assignment matrix node assigned otherwise knowledge allows map information graphs specifically input adjacency matrix mapped predicted graph xax whereas predicted node attribute matrix slices edge attribute matrix transferred input graph maximum likelihood estimates crossentropy respective variables follows log log log log log log log log log assumed encoded notation formulation considers existence matched unmatched nodes edges attributes matched ones furthermore averaging nodes edges separately shown beneficial training otherwise edges dominate likelihood overall reconstruction loss weighed sum previous terms algorithms canonical graph orderings available mckay piperno vinyals empirically found linearization order matters learning sets log log log log graphvae towards generation small graphs using variational autoencoders graph matching computing differentiable loss goal graph matching find correspondences nodes graphs based similarities node pairs expressed integer quadratic programming problem similarity maximization typically approximated relaxation continuous domain cho use case similarity function defined follows first term evaluates similarity edge pairs second term node pairs iverson bracket note scores consider feature compate existential compatibility ibility empirically led stable assignments training summarize motivation behind equations method aims find best graph matching improve gradient descent loss given stochastic way training deep network argue solving matching step approximately sufficient conceptually similar approach learning output unordered sets vinyals closest ordering training data sought practice looking graph matching algorithm robust noisy correspondences easily implemented gpu batch mode matching mpm cho simple effective algorithm following iterative scheme power methods see appendix details used batch mode similarity tensors amount iterations fixed matching outputs continuous assignment matrix unfortunately attempts directly use instead equation performed badly experiments direct maximization soft discretization softmax gumbel softmax jang therefore discretize using hungarian algorithm obtain strict operation gradient still flow decoder directly loss function training convergence proceeds without problems note approach often taken works object detection stewart set detections need matched set ground truth bounding boxes treated fixed predicted nodes assigned current implementation performs step cpu although gpu version published date nagi details encoder feed forward network graph convolutions ecc simonovsky komodakis used encoder although graph 
embedding method applicable edge attributes categorical single linear layer filter generating network ecc sufficient due smaller graph sizes pooling used encoder except global one employ gated pooling usual vae formulate encoder probabilistic enforce gaussian distribution last encoder layer outputs features interpreted mean variance allowing sample using trick kingma welling disentangled embedding practice rather random drawing graphs one often desires control properties generated graphs case follow sohn condition encoder decoder label vector associated input graph decoder fed concatenation encoder concatenated every node features graph pooling layer size latent space small decoder encouraged exploit information label limitations proposed model expected useful generating small graphs due growth gpu memory requirements number parameters well matching complexity small decrease quality high values section demonstrate results nevertheless many applications even generation small graphs still useful evaluation demonstrate method task molecule generation evaluating two large public datasets organic molecules zinc application cheminformatics quantitative evaluation generative models images texts troublesome theis difficult measure realness generated samples automated objective way thus researchers frequently resort qualitative evaluation embedding plots however qualitative evaluation graphs unintuitive humans judge unless graphs planar fairly simple graphvae towards generation small graphs using variational autoencoders fortunately found graph representation molecules undirected graphs atoms nodes bonds edges convenient testbed generative models one hand generated graphs easily visualized standardized structural diagrams hand chemical validity graphs well many properties molecule fulfill checked using software packages sanitizemol rdkit simulations makes qualitative quantitative tests possible chemical constraints compatible types bonds atom valences make space valid graphs complicated molecule generation challenging fact single addition removal edge change atom bond type make molecule chemically invalid comparably flipping single pixel number generation problem issue help network application introduce three remedies first make decoder output symmetric predicting upper triangular parts undirected graphs sufficient representation molecules second use prior knowledge molecules connected test time construct maximum spanning tree set probable nodes order include edges discrete pointwise originally third estimate graph even generate hydrogen explicitly let added padding chemical validity check dataset dataset ramakrishnan contains organic molecules heavy non hydrogen atoms distinct atomic numbers bond types set set aside samples testing validation model selection compare unconditional model characterbased generator cvae generator kusner gvae used code architecture kusner baselines adapting maximum input length smallest possible addition demonstrate conditional generative model artificial task generating molecules given histogram heavy atoms label success easily validated setup encoder two graph convolutional layers channels identity connection batchnorm relu followed output formulation equation auxiliary networks single fully connected layer fcl output channels finalized fcl outputting decoder fcls channels batchnorm relu followed parallel triplet fcls output graph tensors set batch size mpm iterations train epochs adam learning rate embedding visualization visually judge quality smoothness learned 
embedding model may traverse two ways along slice along line former randomly choose two orthonormal vectors sample regular grid pattern induced plane latter randomly choose two molecules label test set interpolate embeddings also evaluates encoder therefore benefits low reconstruction error plot two planes figure frequent label left less frequent label right images show varied fairly smooth mix molecules left image many valid samples broadly distributed across plane presumably autoencoder fit large portion database space right exhibits stronger effect regularization valid molecules tend around center example several interpolations shown figure find meaningful row less meaningful transitions though many samples lines form chemically valid compounds decoder quality metrics quality conditional decoder evaluated validity variety generated graphs given label draw samples compute discrete point estimate decodings arg max let list chemically valid molecules list chemically valid molecules atom histograms equal interested ratios valid accurate furthermore let unique fraction unique correct graphs novel fraction novel graphs define unique novel finally introduced metrics aggregated frequencies labels valid valid freq unconditional decoders evaluated assuming single label therefore valid accurate table see average generated molecules chemically valid case conditional models correct label decoder conditioned larger embedding sizes less regularized demonstrated higher number unique samples lower accuracy conditional graphvae towards generation small graphs using variational autoencoders figure decodings latent space points conditional model sampled random plane within units center coordinates left samples conditioned carbon nitrogen oxygen right samples conditioned carbon nitrogen oxygen color legend figure model decoder forced less rely actual labels ratio valid samples shows less clear behavior likely discrete performance directly optimized models remarkable generated molecules dataset network never seen training looking baselines cvae output valid samples expected gvae generates highest number valid samples low variance less additionally investigate importance graph matching using identity assignment instead thus learning reproduce particular node permutations training set correspond canonical ordering smiles strings rdkit ablated model denoted nogm table produces many valid samples lower variety surprisingly outperforms gvae regard comparison model achieve good performance metrics time likelihood besides metric introduced also report evidence lower bound elbo commonly used vae literature corresponds notation table state mean bounds test set using single sample per graph observe reconstruction loss decrease due larger providing freedom however seems strong correlation elbo valid makes model selection somewhat difficult implicit node probabilities decoder assumes independence node edge probabilities allows isolated nodes edges making use fact molecules connected graphs investigate effect making node probabilities function edge probabilities specifically consider probability node maxb probable edge evaluation table shows clear improvement valid accurate novel metrics conditional unconditional setting however paid lower variability higher reconstruction loss indicates new constraint useful model fully cope zinc dataset zinc dataset irwin contains druglike organic molecules heavy atoms distinct atomic numbers bond types set use split strategy investigate degree scalability unconditional generative 
model setup setup equivalent wider encoder channels graphvae towards generation small graphs using variational autoencoders figure linear interpolation pairs randomly chosen molecules conditional model color legend encoder inputs green chemically invalid graphs red valid graphs wrong label blue valid correct white decoder quality metrics best model archived valid clearly worse using implicit node probabilities brought improvement comparison cvae failed generated valid sample gvae achieved valid models provided kusner attribute low performance generally much higher chance producing inconsistency number possible edges growing quadratically confirm relationship performance graph size kept graphs larger nodes corresponding zinc obtained valid valid nodes zinc verify problem likely caused proposed graph matching loss synthetically evaluate following matching robustness robust behavior graph matching using similarity function important good performance graphvae study graph matching isolation investigate scalability end add gaussian noise tensor input graph truncating renormalizing keep probabilistic interpretation create noisy version interested quality matching self using noisy assignment matrix advantage naive checking identity invariance permutation equivalent nodes table vary tensor separately report mean accuracies computed fashion losses equation random samples zinc size nodes observe expected fall accuracy stronger noise behavior fairly robust respect increasing fixed noise level sensitive adjacency matrix note accuracies comparable across tables due different dimensionalities random variables may conclude quality matching process major hurdle scalability conclusion work addressed problem generating graphs continuous embedding context variational autoencoders evaluated method two molecular datasets different maximum graph size achieved learn embedding reasonable quality small molecules decoder hard time capturing complex chemical interactions larger molecules nevertheless believe method important initial step towards powerful decoders spark interesting community many avenues follow future work besides obvious desire improve current method example incorporating powerful prior distribution adding recurrent mechanism correcting mistakes graphvae towards generation small graphs using variational autoencoders log elbo valid accurate unique novel cond unconditional table performance conditional unconditional models evaluated mean reconstruction log mean evidence lower bound elbo decoding quality metrics section baselines cvae gvae kusner listed embedding size highest valid nogm cvae gvae log elbo valid accurate unique novel cond uncond table performance conditional unconditional models implicit node probabilities improvement respect table emphasized italics table mean accuracy matching zinc graphs noisy counterparts synthetic benchmark function maximum graph size noise would like extend beyond proof concept applying real problems chemistry optimization certain properties predicting chemical reactions advantage decoder compared smilesbased decoder possibility predict detailed attributes atoms bonds addition base structure might useful tasks autoencoder might also used graph encoders small datasets goh acknowledgments thank shell discussions variational methods shinjae yoo project motivation anonymous reviewers comments references albert emergence scaling random networks science bengio samy vinyals oriol jaitly navdeep shazeer noam scheduled sampling sequence prediction recurrent neural networks 
nips bowman samuel vilnis luke vinyals oriol dai andrew rafal bengio samy generating sentences continuous space conll bronstein michael bruna joan lecun yann szlam arthur vandergheynst pierre geometric deep graphvae towards generation small graphs using variational autoencoders ing going beyond euclidean data ieee signal processing magazine cho minsu sun jian duchenne olivier ponce jean finding matches haystack strategy graph matching presence outliers cvpr date ketan nagi rakesh hungarian algorithms linear assignment problem parallel computing erdos paul evolution random graphs publ math inst hung acad sci gilmer justin schoenholz samuel riley patrick vinyals oriol dahl george neural message passing quantum chemistry icml goh garrett siegel charles vishnu abhinav hodas nathan chemnet transferable generalizable deep neural network property prediction arxiv preprint rafael duvenaud david miguel jorge hirzel timothy adams ryan automatic chemical design using continuous representation molecules corr gong shaogang xiang tao recognition group activities using dynamic probabilistic networks iccv irwin john sterling teague mysinger michael bolstad erin coleman ryan zinc free tool discover chemistry biology journal chemical information modeling jang eric shixiang poole ben categorical reparameterization corr johnson daniel learning graphical state transitions iclr kingma diederik welling max variational bayes corr ktena sofia ira parisot sarah ferrante enzo rajchl martin lee matthew glocker ben rueckert daniel distance metric learning using graph convolutional networks application functional brain networks miccai kusner matt miguel gans sequences discrete elements gumbelsoftmax distribution corr kusner matt paige brooks miguel grammar variational autoencoder icml landrum greg rdkit cheminformatics url http yujia swersky kevin zemel richard generative moment matching networks icml yujia tarlow daniel brockschmidt marc zemel richard gated graph sequence neural networks corr makhzani alireza shlens jonathon jaitly navdeep goodfellow ian adversarial autoencoders corr mckay brendan piperno adolfo practical graph isomorphism journal symbolic computation issn olivecrona marcus blaschke thomas engkvist ola chen hongming molecular novo design deep reinforcement learning corr ramakrishnan raghunathan dral pavlo rupp matthias von lilienfeld anatole quantum chemistry structures properties kilo molecules scientific data segler marwin kogej thierry tyrchan christian waller mark generating focussed molecule libraries drug discovery recurrent neural networks corr simonovsky martin komodakis nikos dynamic edgeconditioned filters convolutional neural networks graphs cvpr snijders tom nowicki krzysztof estimation prediction stochastic blockmodels graphs latent block structure journal classification jan sohn kihyuk lee honglak yan xinchen learning structured output representation using deep conditional generative models nips stewart russell andriluka mykhaylo andrew people detection crowded scenes cvpr graphvae towards generation small graphs using variational autoencoders theis lucas van den oord bethge matthias note evaluation generative models corr architecture train unregularized section deterministic encoder without term equation vinyals oriol bengio samy kudlur manjunath order matters sequence sequence sets arxiv preprint unconditional models achieve mean test loglikelihood log roughly implicit node probability model significantly higher tables architecture achieve perfect reconstruction inputs successful 
increase training zero fixed small training sets hundreds examples network could overfit indicates network problems finding generally valid rules assembly output tensors williams ronald zipser david learning algorithm continually running fully recurrent neural networks neural computation danfei zhu yuke choy christopher bongsoo scene graph generation iterative message passing cvpr lantao zhang weinan wang jun yong seqgan sequence generative adversarial nets policy gradient aaai appendix matching section briefly review matching algorithm cho relaxed form continuous correspondence matrix nodes determined based similarities node graphs represented matrix elements pairs sia let denote replica relaxed graph matching problem expressed quadratic programming task arg maxx xia xia optimization strategy choice derived equivalent power method iterative update rule starting correspondences initialized uniform rule iterated convergence use case run fixed amount iterations context graph matching product interpreted match candidates xia xia sia xjb sia denote set neighbors node authors argue formulation strongly influenced uninformative irrelevant elements propose robust version considers best pairwise similarity neighbor xia xia sia xjb sia unregularized autoencoder regularization vae works achieving perfect reconstruction training data especially small embedding sizes understand reconstruction ability | 9 |
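The row above describes the GraphVAE decoder as a plain MLP whose last layer emits, for a fixed maximum of k nodes, a node/edge-existence matrix through a sigmoid and edge- and node-attribute class probabilities through softmaxes. The PyTorch sketch below illustrates that output head only; the latent size, hidden width, k, and the numbers of edge/node classes are my own illustrative choices (not the authors' released configuration), and where the paper predicts just the upper-triangular part of the adjacency this sketch simply symmetrises the full matrix.

```python
# Illustrative sketch of a fully-connected probabilistic graph decoder in the spirit of the
# text above.  All sizes are assumptions chosen for a small-molecule setting, not the paper's.
import torch
import torch.nn as nn

class GraphDecoder(nn.Module):
    def __init__(self, latent_dim=40, k=9, edge_classes=4, node_classes=5, hidden=256):
        super().__init__()
        self.k, self.ec, self.nc = k, edge_classes, node_classes
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
        )
        self.adj_head  = nn.Linear(hidden, k * k)                 # existence logits
        self.edge_head = nn.Linear(hidden, k * k * edge_classes)  # edge-attribute logits
        self.node_head = nn.Linear(hidden, k * node_classes)      # node-attribute logits

    def forward(self, z):
        h = self.mlp(z)
        A = torch.sigmoid(self.adj_head(h)).view(-1, self.k, self.k)
        A = 0.5 * (A + A.transpose(1, 2))        # undirected graphs: symmetrise instead of
                                                 # predicting only the upper triangle
        E = self.edge_head(h).view(-1, self.k, self.k, self.ec).softmax(dim=-1)
        F = self.node_head(h).view(-1, self.k, self.nc).softmax(dim=-1)
        return A, E, F

# one latent sample per row -> one probabilistic graph with at most k nodes
A, E, F = GraphDecoder()(torch.randn(2, 40))
```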
universal quantum computing feb michel raymond marcelo klee abstract single qubit may represented bloch sphere similarly goal dress correspondence converting language universal quantum computing uqc magic state pauli group acting define model uqc povm one recognizes povms defined subgroups finite index modular group correspond coverings trefoil knot paper one also investigates quantum information universal knots links knot whitehead link borromean rings making use catalog platonic manifolds available snappy connections povms based uqc obtained dehn fillings explored pacs msc codes keywords quantum computation knot theory branch coverings dehn surgeries manifolds around many guises observers world familiar twomanifolds surface ball doughnut pretzel surface house tree volleyball net may harder understand first actors movers world learn imagine alternate universes introduction mathematical concepts pave way improvements technology far topological quantum computation concerned non abelian anyons proposed attractive fault tolerant alternative standard quantum computing based universal set quantum gates anyons quasiparticles world lines forming braids whether non abelian anyons exist real world would easy create artificially still open discussion topological quantum computing beyond anyons still well developed although shown essay straightforward consequence set ideas belonging standard universal quantum computation uqc simultaneously topology quantum computation mind concepts magic states related valued measures povms michel raymond marcelo klee investigated detail topology starting point consists thurston conjectures theorems topological quantum computing would federate foundations quantum mechanics cosmology recurrent dream many physicists topology already investigated several groups context quantum information high energy physics biology consciousness studies conjecture uqc conjecture elementary deep statement every simply connected closed homeomorphic mind correspondence bloch sphere houses qubits one would desire quantum translation statement one may use picture riemann sphere parallel bloch sphere follow klein lectures icosahedron perceive platonic solids within landscape picture fits well hopf fibrations entanglements described quasicrystals ambitious dress alternative way reproduces historic thread proof conjecture thurston geometrization conjecture conjecture follows dresses homeomorphic wardrobe huge almost every dress hyperbolic thurston found recipes every dress identified thanks signature terms invariants purpose fundamental group job space surrounding knot knot complement example especially interested trefoil knot underlies work first author well knot whitehead link borromean rings universal sense described hyperbolic allows build platonic manifolds manifolds carry quantum geometry corresponding quantum computing possibly informationally complete povms identified earlier work according knot fundamental group universal every closed oriented homeomorphic quotient hyperbolic subgroup finite index knot whitehead link universal catalog finite index subgroups fundamental group corresponding defined coverings easily established degree using software snappy paper first author found may built finite index subgroups modular group associated subgroup index fundamental domain plane signature terms genus elliptic points cusps summarized fig exists relationship modular group trefoil knot since fundamental group knot complement braid group central extension trefoil knot corresponding universal 
quantum computing figure knot whitehead link borromean rings braid group universal forbids relation finite index subgroups known two coverings manifold fundamental group equivalent exists homeomorphism besides covering uniquely determined subgroup index group inequivalent coverings correspond conjugacy classes subgroups paper fuse concepts attached subgroup index povm possibly informationally complete found thanks appropriate magic state related pauli group factory figure trefoil knot link associated hesse sic link associated minimal informationally complete povms uqc approach minimal informationally complete povms derived michel raymond marcelo klee appropriate fiducial states action generalized pauli group fiducial states also allow perform universal quantum computation povm collection positive operators sum identity measurement state outcome obtained probability given born rule minimal one needs projectors dei rank gram matrix elements precisely means symmetric obeys allows explicit recovery density matrix new minimal whose rank gram matrix hermitian angles discovered states sic equiangular countered considered live cyclotomic field exp gcd greatest common divisor hermitian angle defined deg means field norm pair deg degree extension rational field fiducial states quite difficult derive seem follow algebraic number theory except icpovms derived permutation groups symmetric recovered thanks subgroups index modular group table instance action pauli group state type exp results whose geometry triple products projectors arising congruence subgroup turns correspond commutation graph pauli operators five congruence subgroups point geometry borromean rings see table serves motivation investigating trefoil knot manifold relation uqc corresponding ics important put uqc problem wider frame conjecture thurston geometrization conjecture related ics may also follow hyperbolic seifert shown tables paper organization paper paper runs follows sec deals relationship quantum information seen modular group trefoil knot sec deals platonic related coverings knot whitehead link borromean rings relate known sec describes important role played dehn fillings describing many types may relate topological quantum computing universal quantum computing quantum information modular group related trefoil knot section describe results established terms corresponding coverings trefoil knot complement hom gens link type cyc irr hesse sic cyc irr irr cyc cyc irr reg cyc irr irr irr irr irr irr cyc irr irr irr cyc cyc cyc table coverings degree trefoil knot found snappy related subgroup modular group corresponding applicable right column covering characterized type homology group hom means number cusps number generators gens fundamental group invariant type link represents identified snappy case cyclic coverings corresponds brieskorn explained text spherical groups manifolds given right hand side column let introduce group representation knot complement wirtinger representation finite representation relations form wgi word generators trefoil knot sown fig michel raymond marcelo klee wirtinger representation xyxi equivalently rest paper number coverings manifold corresponding knot displayed ordered list details corresponding coverings table expected coverings correspond subgroups index fundamental group associated trefoil knot cyclic branched coverings trefoil knot let three positive integers brieskorn intersection surface equation shown cyclic covering branched along torus knot link type brieskorn see also sec spherical 
case group associated brieskorn manifold either dihedral group triples tetrahedral octahedral icosahedral euclidean case corresponds remaining cases hyperbolic cyclic branched coverings spherical groups trefoil knot type identified right hand side column table irregular branched coverings trefoil knot right hand side column table shows subgroups identified table corresponding particular hesse sic already found associated congruence subgroup corresponds link already found associated congruence subgroup corresponds crossing link trefoil knot former two links pictured fig five coverings degree allow construction whose geometry contain picture borromean rings fig corresponding congruence subgroups identified table first two viz define whose fundamental group one link alias borromean rings surgeries slope two cusps see sect topic three coverings leading congruence subgroups quantum information universal knots links pertaining knot fundamental group knot number coverings list universal quantum computing table establishes list corresponding subgroups index universal group manifolds labeled otetnn oriented built tetrahedra index table identification finite index subgroups first obtained comparing cardinality list corresponding subgroup fundamental group tetrahedral manifold snappy table course straightforward way perform task identifying subgroup degree covering full list coverings figure eight knot degree available snappy extra invariants corresponding may found addition lattice branched coverings investigated cyc cyc irr cyc cyc irr irr cyc irr irr irr irr cyc irr irr comment table table found subgroups finite index fundamental group alias coverings terminology column snappy identified made tetrahedra cusps rank povm gram matrix corresponding shows distinct values pairwise products shown let give details results summarized table using magma conjugacy class subgroups index fundamental group represented subgroup three generators two relations follows yxz sequence subgroups finite index found manifold corresponding sequence found snappy alias conjugacy class subgroups index represented michel raymond marcelo klee corresponding manifold alias shown table two conjugacy classes subgroups index corresponding tetrahedral manifolds permutation group organizing cosets permutation group organizing cosets alternating group latter fundamental group figure two platonic leading construction details given tables cardinality sequences subgroups associated follows action pauli group state type exp unity index three types corresponding subgroups tetrahedral manifold sequence associated equianguler table index coverings define six classes two related construction ics index one finds three classes two alias related ics finally index types exist two relying construction index exists distinct shown none leading tetrahedral manifold tetrahedral remarkable sense corresponds subgroup index allows construction corresponding hyperbolic polyhedron taken snappy shown fig universal quantum computing cyc cyc cyc cyc irr irr irr irr irr irr cyc irr irr comment qutrit hesse sic table found subgroups fundamental group associated whitehead link leading listed figure link associated qutrit hesse sic octahedral manifold associated orientable tetrahedral manifolds tetrahedra cusps tetrahedra identified table belong tetrahedral manifold one one cusp table pertaining whitehead link one could also identify substructure another universal object viz whitehead link michel raymond marcelo klee cardinality list corresponding whitehead link 
group table shows identified index subgroups aggregates octahedra particular one finds qutrit hesse sic built may buid hyperbolic polyhedron latter octahedral manifold taken snappy shown fig former octahedral manifold follows link shown fig corresponding polyhedron taken snappy shown fig hom cyc cyc irr comment hesse sic hesse sic hesse sic table coverings degrees branched along borromean rings identification corresponding hyperbolic column seen right hand side column three types allow build hesse sic pertaining borromean rings corresponding coverings degree branched along borromean rings link hyperbolic link see fig given table identified manifolds hyperbolic octahedral manifolds volume degree degree dehn fillings povms summarize findings previous section started building block knot viz trefoil knot link viz knot whose complement covering used build povm possibly apply kind phase surgery knot link transforms related coverings preserving povms way determined start friend arrive standard historic importance homology sphere alias brieskorn sphere brieskorn sphere seifert fibered toroidal manifold introduce resulting knot later section show use coxeter lattice surgery arrive hyperbolic maximal symmetry whose several coverings related povms close ones trefoil knot let start lens space obtained gluing boundaries two solid tori together meridian first solid torus universal quantum computing name trefoil table surgeries column name column cardinality list alias conjugacy classes subgroups plain characters used point possible construction least one corresponding see sec ics corresponding goes second solid torus wraps around longitude times around meridian times generalize concept knot exterior complement open solid torus knotted like knot one glues solid torus meridian curve goes torus boundary knot exterior operation called dehn surgery according lickorish theorem every closed orientable connected obtained performing dehn surgery link surgeries trefoil knot homology sphere dodecahedral space alias homology sphere first example obtained surgery trefoil knot let three positive integers mutually coprime brieskorn sphere intersection surface equation homology brieskorn sphere sphere brieskorn sphere homeomorphic diffeomorphic sphere may identified homology sphere sphere may obtained surgery table provides sequences corresponding surgeries plain digits sequences point possibility building ics corresponding degree corresponds considerable filtering ics coming instance smallest dimension five precisely one coming congruence subgroup table built non modular fundamental group whose permutation representation cosets alternating group compare sec smallest dimensional derived twovalued one arising congruence subgroup given table arises non modular fundamental group permutation representation cosets seifert fibered toroidal manifold hyperbolic knot link one whose complement endowed complete riemannian metric constant negative curvature hyperbolic geometry finite volume dehn surgery hyperbolic knot exceptional reducible toroidal seifert fibered comprising closed together michel raymond marcelo klee decomposition disjoint union circles called fibers surgeries hyperbolic categories exclusive hyperbolic knot contrast non hyperbolic knot trefoil knot admits toroidal seifert fiber surgery obtained dehn filling smallest dimensional ics built hesse sic obtained congruence subgroup trefoil knot comes non modular fundamental group cosets organized alternating group akbulut manifold exceptional dehn surgery slope knot 
leads remarkable manifold found context integral homology spheres smoothly bounding integral homology balls apart topological importance find coverings associated already discovered ics coverings fundamental group smallest covering degree occurs integral homology congruence subgroup also found trefoil knot see table next covering degree homology leads type also found trefoil knot next case corresponds hyperbolic manifold hyperbolic manifold closest trefoil knot manifold known found goal search fundamental groups two dimensions maximal symmetry groups called hurwitz groups arise quotients groups three dimensions quotients minimal lattice hyperbolic isometries orientation preserving subgroup min play role hurwitz groups let coxeter group split extension min one index two subgroups presentation min xyz xzyz according corollary subgroups finite index min index divisible two index called obtained fundamental groups surgeries subgroups index min given table remarkable groups fundamental groups oriented built single icosahedron except manifold subgroup table index torsion free subgroups min relation single isosahedron icosahedral symmetry broken see text details universal quantum computing also special sense many small dimensional ics may built contrast groups table smallest ics may build hesse sic coming congruence subgroup coming congruence subgroup ics coming congruence subgroups see sec table higher dimensional ics found come congruence subgroups conclusion relationship universality quantum computing explored work earlier work first author already pointed importance hyperbolic geometry modular group deriving basic small dimensional sec move trefoil knot braid group non hyperbolic could investigated making use coverings correspond povms sec passed universal links knot whitehead link borromean rings related hyperbolic platonic manifolds new models quantum computing based povms finally sec dehn fillings used explore connection quantum computing important exotic toroidal seifert fibered akbulut manifold maximum symmetry hyperbolic manifold slightly breaking icosahedral symmetry expected work importance new ways implementing quantum computing understanding link quantum information theory cosmology funding first author acknowledges support french investissements avenir program project contract ressources came quantum gravity research references thurston geometry topology vol princeton university press princeton planat informationally complete povms entropy hilden lozano montesinos whitten universal groups inventiones mathematicae fominikh garoufalidis goerner tarkaev vesnin census tethahedral hyperbolic manifolds exp math kitaev quantum computation anyons annals phys nayak simon stern freedman das sarma anyons topological quantum computation rev mod phys wang topological quantum computation american mathematical rhode island number pachos introduction topological quantum computation cambridge university press cambridge vijay generalization anyons three dimensions arxiv bravyi kitaev universal quantum computation ideal clifford gates noisy ancillas phys rev planat haq magic universal quantum computing permutations advances mathematical physics michel raymond marcelo klee planat gedik magic informationally complete povms permutations soc open sci kauffman baadhio quantum topology series knots everything world scientific kauffman knot logic topological quantum computing majorana fermions linear algebraic structures quantum computing chubb eskandarian harizanov eds lecture notes logic cambridge univ 
press seiberg senthil wang witten duality web dimensions condensed matter physics ann phys gang tachikawa yonekura smallest hyperbolic manifolds via simple theories phys rev lim jackson molecular knots biology chemistry phys condens matter irwin toward unification physics number theory https toward unification physics number theory milnor conjecture years later progress report clay mathematics institute annual report http retrieved planat geometry invariants qubits quartits octits int geom methods mod phys manton connections discrete fiber bundles commun math phys mosseri dandoloff geometry entangled states bloch spheres hopf fibrations phys math nieto correspondence mod phys scientific research sen aschheim irwin emergence aperiodic dirichlet space tetrahedral units icosahedral internal space mathematics fang hammock irwin methods calculating empires quasicrystals crystals adams knot book elementary introduction mathematical theory knots freeman new york mednykh new method counting coverings manifold fintely generated fundamental group dokl math culler dunfield goerner weeks snappy computer program studying geometry topology http hilden lozano montesinoos knots universal topology chris fuchs quantumness hibert space quant inf comp appleby chien flammia waldron constructing exact symmetric informationally complete measurements numerical solutions preprint rolfsen knots links mathematics lecture series houston gabai whitehead manifold union two euclidean spaces topol milnor brieskorn manifolds knots groups neuwirth annals math study princeton univ press princeton hempel lattice branched covers knot topol appl haraway determining hyperbolicity compact orientable torus boundary arxiv universal quantum computing ballas danciger lee convex projective structures nonhyperbolic arxiv conder martin torstensson maximal symmetry groups hyperbolic new zealand math gordon dehn filling survey knot theory banach center publ polish acad warsaw kirby scharlemann eight faces homology geometric topology acad press new york seifert fibered surgery montesinos knots arxiv appear comm anal geom akbulut larson brieskorn spheres bounding rational balls arxiv chan zainuddin atan siddig computing quantum bound states triply punctured surface chin phys lett aurich steiner numerical computation maass waveforms application cosmology hyperbolic geometry applications quantum chaos cosmology jens bolte frank steiner eds cambridge univ press preprint smooth quantum gravity exotic smoothness quantum gravity frontier spacetime theory bells inequality machs principle exotic smoothness fundamental theories physics book series ftp institut cnrs umr avenue des montboucons france address quantum gravity research los angeles usa address raymond address klee address marcelo | 4 |
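The row above repeatedly relies on SnapPy to enumerate the finite-degree coverings of a knot or link complement and to tabulate their cusps, homology and volume. The sketch below shows that kind of enumeration in the SnapPy Python package; the choice of the figure-eight knot complement '4_1', the Whitehead-link name 'L5a1', and the degrees are illustrative assumptions, and the further step of matching a particular cover to a POVM or magic state, as in the tables above, is not attempted here.

```python
# Rough sketch of a covering enumeration with SnapPy (the tool cited in the text above).
# Manifold names and degrees are illustrative; invariants printed are those tabulated above.
import snappy

def list_covers(name, degree):
    M = snappy.Manifold(name)
    for C in M.covers(degree):               # all covers of the given degree
        print(C.name(), C.num_cusps(), C.homology(), float(C.volume()))

list_covers('4_1', 3)    # degree-3 covers of the figure-eight knot complement
list_covers('L5a1', 2)   # degree-2 covers of the Whitehead link complement (assumed name)
```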
may accurate efficient numerical framework adaptive numerical weather prediction giovanni tumolo luca bonaventura january earth system physics section abdus salam international center theoretical physics strada costiera trieste italy gtumolo mox modelling scientific computing dipartimento matematica brioschi politecnico milano via bonardi milano italy keywords discontinuous galerkin methods adaptive finite elements semiimplicit discretizations discretizations shallow water equations euler equations ams subject classification abstract present accurate efficient discretization approach adaptive discretization typical model equations employed numerical weather prediction approach combined time discretization method spatial discretization based adaptive discontinuous finite elements resulting method full second order accuracy time employ polynomial bases arbitrarily high degree space unconditionally stable effectively adapt number degrees freedom employed element order balance accuracy computational cost approach employed require remeshing therefore especially suitable applications numerical weather prediction large number physical quantities associated given mesh furthermore although proposed method implemented arbitrary unstructured nonconforming meshes even application simple cartesian meshes spherical coordinates cure effectively pole problem reducing polynomial degree used polar elements numerical simulations classical benchmarks shallow water fully compressible euler equations validate method demonstrate capability achieve accurate results also large courant numbers time steps times larger typical explicit discretizations problems reducing computational cost thanks adaptivity algorithm introduction discontinuous galerkin spatial discretization approach currently employed increasing number environmental fluid dynamics models see complete overview motivated many attractive features discretizations high order accuracy local mass conservation ease massively parallel implementation hand methods imply severe stability restrictions coupled explicit time discretizations one traditional approach overcome stability restrictions low mach number problems combination semi implicit semi lagrangian techniques series papers shown computational gains traditionally achieved finite difference models application sisl discretization methods also attainable framework approaches particular introduced dynamically discretization approach low mach number problems quite effective achieving high order spatial accuracy reducing substantially computational cost paper apply technique shallow water equations spherical geometry fully compressible euler equations order show effectiveness model problems typical global regional weather forecasting advective form equations motion employed time discretization based method see combination two robust ode solvers yields second order accurate method see effective damping selectively high frequency modes time achieves full second order accuracy trapezoidal rule typically necessary realistic applications nonlinear problems see limits accuracy time first order numerical results presented paper show total computational cost one step analogous one step trapezoidal rule well structure linear problems solved time step thus allowing extend naturally accurate method implementation based trapezoidal rule numerical simulations shallow water benchmarks proposed benchmarks proposed employed validate method demonstrate capabilities particular shown present approach enables use time steps even 
times larger allowed models standard explicit schemes see results method presented paper previous version applied principle arbitrarily unstructured even nonconforming meshes example model based method could run non conforming mesh rectangular elements built around nodes reduced gaussian grid simplicity however implementation developed far simple cartesian mesh used degree adaptivity employed results high courant numbers polar regions result special stability problems present sisl discretization approach shown numerical results reported hand even implementation based simple cartesian mesh spherical coordinates flexibility space discretization allows reduce degree basis test functions employed close poles thus making effective model resolution uniform solving efficiency issues related pole problem static especially advantageous conditioning linear system solved time step greatly improved consequence number iterations necessary linear solver reduced approximately time spurious reflections artificial error increases observed beyond computational advantages believe present approach based especially suitable applications numerical weather prediction contrast approaches local mesh coarsening refinement size elements changes time indeed numerical weather prediction information necessary carry realistic simulations orography profiles data land use soil type masks needs reconstructed computational mesh time mesh changed furthermore many physical parameterizations highly sensitive mesh size although devising better parameterizations require less tuning important research goal conventional parameterizations still use quite time consequence useful improve accuracy locally adding supplementary degrees freedom necessary done framework without change underlying computational mesh conclusion resulting modeling framework seems able combine efficiency high order accuracy traditional sisl methods locality flexibility standard approaches section two examples governing equations introduced section method reviewed section approach employed advection vector fields spherical geometry described detail section introduce discretization approach shallow water equations spherical geometry section outline extension fully compressible euler equations vertical plane numerical results presented section section try draw conclusions outline path towards application concepts introduced context non hydrostatic dynamical core governing equations consider basic model problem shallow water equations rotating sphere see equations standard test bed numerical methods applied full equations motion atmospheric oceanic circulation models see among possible solutions admit rossby inertial gravity waves well response waves orographic forcing use advective vector form shallow water equations represents fluid depth bathymetry elevation coriolis parameter unit vector locally normal earth surface gravity force per unit mass earth surface assuming orthogonal curvilinear coordinates sphere portion denote components diagonal metric tensor furthermore set contravariant components velocity vector coordinate direction respectively multiplied corresponding metric tensor components also denote lagrangian derivative particular paper standard spherical coordinates employed example complete model also consider fully compressible non hydrostatic equations motion following written powhere reference pressure value tential temperature exner pressure constant pressure constant volume specific heats gas constant dry air respectively coriolis force omitted 
simplicity notice also slight abuse notation case denotes three dimensional operators also velocity field assume description two dimensional vertical slice models customary rewrite equations terms perturbations respect steady hydrostatic reference profile assuming one obtains vertical plane observed equations isomorphic equations allow extend almost automatically discretization approach proposed former general model review method review properties called method first introduced given cauchy problem considering time discretization employing constant time step method defined two following implicit stages implicitness parameter immediate first stage simply application trapezoidal rule method interval could also substituted centered cranknicolson step without reducing overall accuracy method outcome stage used turn two step method single step two stages method combination two robust stiff solvers yields method several interesting accuracy stability properties analyzed detail shown paper analysis easily carried rewriting method formulation method clearly singly diagonal implicit runge kutta sdirk method one rely theory class methods derive stability accuracy results see notice method rediscovered analyzed applied also treat implicit terms framework additive runge kutta approach see shown method second order accurate value written method also proven constitute embedded pair companion coefficients given provided centering employed first stage equips method extremely time discretization error furthermore also lstable therefore coefficient value safely applied problems eigenvalues whose imaginary part large typically arise discretization hyperbolic problems case standard trapezoidal rule implicit method whose linear stability region exactly bounded imaginary axis consequence common apply trapezoidal rule centering see well results first order time discretization appears therefore interesting one step alternative maintain full second order accuracy especially considering formulated equivalent performing two steps slightly modified coefficients order highlight advantages proposed method terms accuracy respect common robust stiff solvers plot figure contour levels absolute value linear stability function method without centering first stage compared analogous contours centered method averaging parameter figures respectively method figure immediate see introduces less damping around imaginary axis moderate values time step hand selective damping large eigenvalues clearly displayed figure absolute values linear stability functions methods exception explicit representation stability function available plotted along imaginary axis figure contour levels absolute value stability function method without centering first stage contour spacing figure contour levels absolute value stability function centered method averaging parameter equivalent centering parameter valued contour spacing figure contour levels absolute value stability function centered method averaging parameter equivalent centering parameter valued contour spacing figure contour levels absolute value stability function method contour spacing figure graph absolute value stability functions several methods along imaginary axis review evolution operators vector fields sphere method described introducing concept evolution operator along lines indeed let denote generic function space time solution approximate solution time interval numerical evolution operator introduced approximates exact evolution operator associated frozen velocity field may 
coincide velocity field time level extrapolation derived previous time levels precisely denotes solution initial datum time expression denotes numerical approximation notation used since nothing position time fluid parcel reaching location time according standard terminology called departure point associated arrival point different methods employed approximate paper simplicity method proposed employed spherical geometry furthermore guarantee accuracy compatible extrapolation velocity field intermediate time level used hand application cartesian geometry vertical slice discretization simple first order euler method employed see case advection vector field momentum equation extension approach take account curvature spherical manifold specifically unit basis vectors departure point general aligned arrival point represent unit vector triad general deal issue two approaches available first intrinsically eulerian consists introduction christoffel symbols covariant derivatives definition giving rise well known metric terms sisl discretization approximation along trajectories metric terms approach shown source instabilities semilagrangian frame see therefore adopted work second approach suitable discretizations takes account curvature manifold discrete level sisl discretization performed many variations idea proposed see derived unified way introduction proper rotation matrix transforms vector components unit vector triad vector components unit vector triad see rotation matrix comes play sufficient consider action evolution operator given vector valued function space time defined approximation write equation componentwise known components departure point unit vector triad gxn gyn gzn therefore via components unit vector triad point given projection along gxn gyn gzn gxn gyn gzn gxn gyn gzn matrix notation gzn shallow atmosphere approximation reduced rotation matrix shown therefore following evolution operator vector fields defined componentwise gxn gyn novel sisl time integration approach shallow water equations sphere sisl discretization equations based obtained performing two stages reinterpretation intermediate values fashion furthermore order avoid solution nonlinear system dependency linearized time common discretizations based trapezoidal rule see numerical experiments reported following show prevent achieve second order accuracy regimes interest numerical weather prediction stage sisl time equations vector form given stage followed stage two stages spatial discretization performed along lines described allowing variable polynomial order locally represent solution element spatial discretization approach considered independent nature mesh could also implemented fully unstructured even non conforming meshes simplicity however paper implementation structured mesh coordinates developed principle either lagrangian hierarchical legendre bases could employed work almost exclusively hierarchical bases provide natural environment implementation algorithm see example central issue finite element formulations fluid problems choice appropriate approximation spaces velocity pressure variables context swe role pressure played free surface elevation inconsistent choice two approximation spaces indeed may result solution polluted spurious modes specific case swe see example well recent comprehensive analysis investigated issue depth model implementation allows approximations higher polynomial degree velocity fields height field even though systematic study performed significant differences noticed results 
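Because the formulas for the trajectory computation are lost in extraction, the snippet below sketches the common fixed-point iteration for departure points, using the velocity evaluated at the trajectory midpoint and at the intermediate time level as described above. It is written for a one-dimensional periodic domain to stay self-contained; on the sphere the same iteration is applied to Cartesian position vectors with reprojection, and the rotation of vector components from the departure-point triad to the arrival-point triad is a further step not shown. Function and variable names are illustrative assumptions.

```python
import numpy as np

def departure_points(x_arr, velocity, t, dt, iterations=3):
    """Fixed-point iteration for the departure points of arrival points x_arr.

    velocity(x, t) is any callable giving the advecting velocity field; the
    iteration evaluates it at the trajectory midpoint and at time t + dt/2.
    """
    x_dep = x_arr - dt * velocity(x_arr, t)              # first guess (Euler)
    for _ in range(iterations):
        x_mid = 0.5 * (x_arr + x_dep)
        x_dep = x_arr - dt * velocity(x_mid, t + 0.5 * dt)
    return x_dep

# example: uniform advection on the periodic interval [0, 1)
u = lambda x, t: np.full_like(x, 0.3)
x_a = np.linspace(0.0, 1.0, 8, endpoint=False)
print(departure_points(x_a, u, t=0.0, dt=0.1) % 1.0)     # points shifted back by 0.03
```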
obtained equal unequal degrees following results unequal degrees reported exception empirical convergence test steady geostrophic flow integrals appearing elemental equations evaluated means gaussian numerical quadrature formulae number quadrature nodes consistent local polynomial degree used particular notice integrals terms image evolution operator functions evaluated departure points trajectories arriving quadrature nodes computed exactly see since functions polynomials therefore sufficiently accurate approximation integrals needed may entail need employ numerical quadrature formulae nodes minimal requirement implied local polynomial degree overhead actually compensated fact gauss node computation departure point executed quantities interpolated spatial discretization performed discrete degrees freedom representing velocity unknowns replaced respective discrete height equations yielding case linear system whose structure entirely analogous obtained linear systems obtained stages solved implementation gmres method classical stopping criterion based relative error tolerance employed see gmres solver far block diagonal preconditioning employed shown section condition number systems solved greatly reduced lower degree elements employed close poles case total computational cost one step entirely analogous one step standard centered trapezoidal rule employed since structure systems stage fraction time step computed computed solving linear system recovered back substituting momentum equation extension time integration approach euler equations section show previously proposed method extended seamlessly fully compressible euler equations formulated equations simplicity application two dimensional vertical slice case presented extension three dimensions straightforward order avoid solution nonlinear system dependency dependency linearized time common discretizations based trapezoidal rule see counterpart substep first applied obtain following time energy equation inserted time vertical momentum equation order decouple momentum energy equations follows equations set three equations three unknowns namely compared equations cartesian geometry comparison clear two formulations isomorphic correspondence consider counterpart substep applied obtain following time energy equation inserted time vertical momentum equation order decouple momentum energy equations equations set three equations three unknowns namely compared equations cartesian geometry easy see also case exactly structure results equations correspondence approach code proposed shallow water equations extended fully compressible euler equation straightforward way numerical experiments numerical method introduced section implemented tested number relevant test cases using different initial conditions bathymetry profiles order assess accuracy stability properties analyze impact strategy whenever reference solution available relative errors computed norms final time simulation according href href href max href max href denotes reference solution model variable discrete approximation global integral computed appropriate numerical quadrature rule consistent numerical approximation tested maximum computed nodal values test cases considered shallow water equations spherical geometry geostrophic flow particular analyzed results test case configuration least favorable methods employing meshes unsteady flow exact analytical solution described polar rotating introduced aimed showing problems arise even case strong cross polar flows zonal flow isolated 
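The error metrics referred to above (relative l2 and maximum-norm errors of each model variable against a reference solution at the final time, with the global integral evaluated by a quadrature rule consistent with the discretization) come through garbled ("href ... max href ..."). A hedged reconstruction as a small utility, with assumed names:

```python
import numpy as np

def relative_errors(q_num, q_ref, weights=None):
    """Relative l2 and l-infinity errors of a discrete field against a reference.

    weights: quadrature weights for the global integrals (uniform if omitted).
    """
    ref = np.asarray(q_ref, dtype=float)
    num = np.asarray(q_num, dtype=float)
    w = np.ones_like(ref) if weights is None else np.asarray(weights, dtype=float)
    err_l2 = np.sqrt(np.sum(w * (num - ref) ** 2) / np.sum(w * ref ** 2))
    err_inf = np.max(np.abs(num - ref)) / np.max(np.abs(ref))
    return err_l2, err_inf
```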
mountain wave wavenumber corresponding respectively test cases first two tests analytic solutions available empirical convergence tests performed test cases considered discretization equations inertia gravity waves involving evolution potential temperature perturbation channel periodic boundary conditions uniformly stratified environment constant frequency described rising thermal bubble given evolution warm bubble constant potential temperature environment described numerical experiments performed paper neither spectral filtering explicit diffusion kind employed numerical diffusion implicit time discretization approach yet investigated extent quality solutions affected choice taken account comparing quantitatively results present method reference models one described explicit numerical diffusion added sensitivity comparison results amount numerical diffusion highlighted several model validation exercises see since methods efficient low froude number flows typical velocity much smaller fastest propagating waves tests considered fall hydrodynamical regime therefore order assess method efficiency distinction made maximum courant number based velocity one hand hand maximum courant number based celerity maximum courant number based sound speed defined respectively cvel max csnd max ccel max interpreted generic value meshsize either coordinate direction tests employed pni denotes local polynomial degree used timestep represent model variable inside element mesh pmax maximum local polynomial degree considered efficiency method reducing computational effort measured monitoring evolution quantities itnnadapt iter pmax itnnmax total number elements itnnadapt denotes total number gmres iterations time step adapted local degrees configuration itnnmax total number gmres iterations time step configuration maximum degree elements respectively average values indicators simulations performed reported following denoted respectively error adaptive iter dof solution corresponding one obtained uniform maximum polynomial degree everywhere measured terms finally cases conservation global invariants monitored evaluating time step following global integral quantities defined density associated global invariant according choice following invariants considered mass qmass total energy qenerg potential enstrophy qenstr geostrophic flow first consider test case solution steady state flow velocity field corresponding zonal solid body rotation field obtained velocity ones geostrophic balance parameter values taken flow orientation parameter chosen making test challenging mesh error norms associated solution obtained mesh elements different polynomial degrees shown tables respectively results computed days fixed maximum courant numbers ccel cvel different values employed different polynomial order remark resulting time steps significantly larger allowed typical explicit time discretizations analogous space discretizations see results spectral decay error norms clearly observed time error becomes dominant better comparison results consider configuration elements corresponds resolution space grid used used giving proposed sisldg formulation run case average number iterations required linear solver substep substep table relative errors different polynomial degrees swe test case time days table relative errors different polynomial degrees swe test case time days another convergence test performed increasing number elements correspondingly decreasing value time step case maximum courant numbers vary mesh inhomogeneity ccel 
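The three Courant numbers used to report the results distinguish the advective velocity, the gravity-wave celerity and the sound speed, with the effective mesh size apparently taken as the element size divided by the local polynomial degree. One plausible reading of the definitions is given below; whether the advective speed is added to the wave speeds in the original cannot be recovered from the extracted text.

```latex
C_{\mathrm{vel}} = \max \frac{|\mathbf{u}|\,\Delta t}{\Delta x / p},\qquad
C_{\mathrm{cel}} = \max \frac{\bigl(|\mathbf{u}|+\sqrt{g\,h}\bigr)\,\Delta t}{\Delta x / p},\qquad
C_{\mathrm{snd}} = \max \frac{\bigl(|\mathbf{u}|+c_s\bigr)\,\Delta t}{\Delta x / p}
```

where Delta x is the mesh size in either coordinate direction, p the local polynomial degree and c_s the sound speed.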
cvel results reported tables respectively empirical convergence order based norm errors also estimated showing stationary test convergence rates second order time discretization achieved table relative errors different polynomial degrees swe test case time days table relative errors different number elements swe test case time days table relative errors different number elements swe test case time days table relative errors different number elements swe test case time days unsteady flow analytic solution second time dependent test analytic solution derived employed assess performance proposed discretization specifically analytic solution defined formula used since exact solution periodic initial profiles also correspond exact solution integer number days later proposed sisldg scheme integrated days meshes increasing number elements time step decreased accordingly case maximum courant numbers vary mesh dishomogeneity ccel cvel error norms integrations computed days displayed tables empirical order estimation shows full second order accuracy time attained table relative errors different resolutions test case table relative errors different resolutions test case comparison analogous errors computed discretization parameters employing centered crank nicolson method resulting improvement errors scheme crank nicolson achieved essentially equivalent computational cost terms total cpu time employed table relative errors different resolutions test case table relative errors different resolutions test case centered crank nicolson table relative errors different resolutions test case centered crank nicolson table relative errors different resolutions test case centered crank nicolson zonal flow isolated mountain performed numerical simulations reproducing test case given zonal flow impinging isolated mountain conical shape geostrophic balance broken orographic forcing results development planetary wave propagating around globe plots fluid depth well velocity components days shown figures resolution used corresponds mesh elements giving courant number ccel elements close poles observed main features flow correctly reproduced particular significant gibbs phenomena detected vicinity mountain even initial stages simulation figure field days isolated mountain wave test case ccel contour lines spacing evolution time global invariants simulation shown figures respectively error norms different resolutions corresponding ccel computed days displayed tables respect reference solution given national center atmospheric research ncar spectral model resolution apparent second order proposed sisldg scheme time since observed national center atmospheric research ncar spectral model incorporates diffusion terms figure field days isolated mountain wave test case ccel contour lines spacing governing equations proposed sisldg scheme employ diffusion terms filtering smoothing topography test seemed appropriate compute relative errors respect ncar spectral model solution earlier time days assumed effects diffusion less impact error norms computed days different resolutions corresponding ccel displayed tables min table relative errors different resolutions isolated mountain wave test case days finally mountain wave test case run mesh elements either static static plus dynamic adaptivity tolerance dynamic adaptivity set results reported terms error norms respect nonadaptive solution maximum uniform figure field days isolated mountain wave test case ccel contour lines spacing min table relative errors different resolutions 
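The empirical convergence orders quoted alongside the tables are the usual ratios of errors at successive resolutions; a minimal helper, with illustrative sample numbers:

```python
import numpy as np

def empirical_order(errors, resolutions):
    """Empirical convergence rates from errors measured at successive resolutions.

    resolutions: characteristic mesh sizes (or time steps); the rate between two
    consecutive runs is log(e1/e2) / log(h1/h2).
    """
    e, h = np.asarray(errors, dtype=float), np.asarray(resolutions, dtype=float)
    return np.log(e[:-1] / e[1:]) / np.log(h[:-1] / h[1:])

# example: halving the time step with a second-order scheme
print(empirical_order([4.0e-3, 1.0e-3, 2.5e-4], [0.4, 0.2, 0.1]))  # ~[2, 2]
```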
isolated mountain wave test case days resolution terms efficiency gain measured saving number linear solver iterations per well iter saving number degrees freedom actually used per timestep results summarized tables dof use static adaptivity resulted iter average use static dynamic adaptivity led distribution iter dof statically dynamically adapted local polynomial degree used represent solution days shown figure noticed even days higher polynomial degrees still automatically concentrated around location mountain table relative errors different resolutions isolated mountain wave test case days table relative errors different resolutions isolated mountain wave test case days adaptivity static static dynamic table relative errors statically statically plus dynamically adaptive nonadaptive solution isolated mountain wave test case field adaptivity static static dynamic table relative errors statically statically plus dynamically adaptive nonadaptive solution isolated mountain wave test case field qmass qenerg days days qentsr days figure integral invariants evolution mass energy potential enstrophy isolated mountain wave test case ccel adaptivity static static dynamic table relative errors statically statically plus dynamically adaptive nonadaptive solution isolated mountain wave test case field figure statically dynamically adapted local distribution days isolated mountain wave test case wave considered test case initial datum consists wave wave number case actually concerns solution nondivergent barotropic vorticity equation exact solution system discussion stability profile solution see plots fluid depth well velocity components days shown figures resolution used corresponds mesh elements giving courant number ccel elements close poles observed main features flow correctly reproduced figure field days wave test case ccel contour lines spacing evolution time global invariants simulation shown figures respectively error norms different resolutions corresponding ccel computed days displayed tables respect reference solution given national center atmospheric research ncar spectral model resolution apparent second order proposed sisldg scheme time unlike ncar spectral model proposed sisldg scheme employ explicit numerical diffusion finally wave test case run figure field days wave test case ccel contour lines spacing min table relative errors different resolutions wave test case mesh elements either static static plus dynamic adaptivity tolerance dynamic adaptivity set results reported terms error norms respect nonadaptive solution maximum uniform resolution terms efficiency gain measured saving number linear solver iterations per iter well saving number degrees freedom actually used per timestep results summarized tables dof use static adaptivity resulted iter use static dynamic adaptivity dof average average distribution led statically dynamically adapted local polynomial degree used represent solution days shown figure noticed even days even maximum allowed figure field days wave test case ccel contour lines spacing min table relative errors different resolutions wave test case use adaptivity criterion leads use cubic polynomials local representation adaptivity static static dynamic table relative errors statically statically plus dynamically adaptive nonadaptive solution wave test case field adaptivity static static dynamic table relative errors statically statically plus dynamically adaptive nonadaptive solution wave test case field adaptivity static static dynamic table relative errors 
statically statically plus dynamically adaptive nonadaptive solution wave test case field qmass qenerg days days qentsr days figure integral invariants evolution mass energy potential enstrophy wave test case ccel figure statically dynamically adapted local distribution days test case nonhydrostatic inertia gravity waves section consider test case proposed consists set waves propagating channel uniformly stratified reference atmosphere characterized constant frequency domain initial boundary conditions identical initial perturbation potential temperature radiates symmetrically left right superimposed mean horizontal flow remain centered around initial position contours potential temperature perturbation horizontal velocity vertical velocity time shown figures respectively computed results compare well structure displayed analytical solution linearized equations proposed numerical results obtained numerical methods see remarked experiment elements timestep used corresponding courant number csnd figure contours perturbation potential temperature internal gravity wave test figure contours horizontal velocity internal gravity wave test figure contours vertical velocity internal gravity wave test rising thermal bubble nonlinear nonhydrostatic experiment consider section test case proposed consists evolution warm bubble placed isentropic atmosphere rest data contours potential temperature perturbation different times shown figure results obtained using elements timestep corresponding courant number csnd figure contours every zero contour omitted perturbation potential temperature rising thermal bubble test time min min min min respectively clockwise sense conclusions future perspectives introduced accurate efficient discretization approach typical model equations atmospheric flows extended spherical geometry techniques proposed combining approach time discretization method spatial discretization based adaptive discontinuous finite elements resulting method unconditionally stable full second order accuracy time thus improving standard trapezoidal rule discretizations without major increase computational cost loss stability allowing use time steps times larger required stability explicit methods applied corresponding discretizations method also arbitrarily high order accuracy space effectively adapt number degrees freedom employed element order balance accuracy computational cost approach employed require remeshing especially suitable applications numerical weather prediction large number physical quantities associated given mesh furthermore although proposed method implemented arbitrary unstructured nonconforming meshes like reduced gaussian grids employed spectral transform models even applications simple cartesian meshes spherical coordinates approach cure effectively pole problem reducing polynomial degree polar elements yielding reduction computational cost comparable achieved reduced grids numerical simulations classical shallow water nonhydrostatic benchmarks employed validate method demonstrate capability achieve accurate results even large courant numbers reducing computational cost thanks adaptivity approach proposed numerical framework thus provide basis accurate efficient adaptive weather prediction system acknowledgements research work supported financially abdus salam international center theoretical physics earth system physics section extremely grateful filippo giorgi ictp strong interest work continuous support financial support also provided project sviluppi teorici applicativi 
dei metodi politecnico milano would also like acknowledge useful conversations topics paper erath giraldo restelli wood references baldauf brdar analytic solution linear gravity waves channel test numerical models using nonhydrostatic compressible euler equations quarterly journal royal meteorological society bank coughran fichtner grosse rose smith transient simulation silicon devices circuits ieee transactions electron bates semazzi higgins barros integration shallow water equations sphere using vector scheme multigrid solver monthly weather review bonaventura scheme using height coordinate nonhydrostatic fully elastic model atmospheric flows journal computational physics bonaventura redler budich earth system modelling algorithms code infrastructure optimisation springer verlag new york butcher chen new type rungekutta method applied numerical mathematics casulli cattani stability accuracy efficiency method shallow water flow computational mathematics applications lagrange multiplier approach metric terms models sphere quarterly journal royal meteorological society staniforth scheme spectral models monthly weather review cullen test integration technique fully compressible model quarterly journal royal meteorological society davies cullen malcolm mawson staniforth white wood new dynamical core met office global regional modelling atmosphere quarterly journal royal meteorological society dawson westerink feyen pothina continuous discontinuous coupled galerkin finite element methods shallow water equations international journal numerical methods fluids desharnais robert errors near poles generated integration scheme global spectral model dumbser casulli staggered spectral discontinuous galerkin scheme shallow water equations applied mathematics computation gill dynamics academic press giraldo trajectory computations spherical geodesic grids cartesian space monthly weather review giraldo hesthaven warburton discontinuous galerkin methods spherical shallow water equations journal computational physics giraldo kelly constantinescu implicitexplicit formulations nonhydrostatic unified model atmosphere numa siam journal scientific computing giraldo restelli timeintegrators triangular discontinuous galerkin oceanic shallow water model international journal numerical methods fluids hortal simmons use reduced gaussian grids spectral models monthly weather review hosea shampine analysis implementation applied numerical mathematics hack williamson spectral transform solutions shallow water test set journal computational physics carpenter droegemeier woodward hane application piecewise parabolic method ppm meteorological modeling monthly weather review kelley iterative methods linear nonlinear equations siam philadelphia kelly giraldo continuous discontinuous galerkin methods scalable nonhydrostatic atmospheric model mode journal computational physics kennedy carpenter additive schemes equations applied numerical mathematics lambert numerical methods ordinary differential systems wiley giraldo handorf dethloff discontinuous galerkin method shallow water equations spherical triangular coordinates journal computational physics handorf dethloff unsteady analytical solutions spherical shallow water equations journal computational physics roux spurious inertial oscillations models journal computational physics roux carey analysis discontinuous galerkin linearized system international journal numerical methods fluids leveque finite difference methods ordinary partial differential equations problems society 
industrial applied mathematics mcdonald bates integration gridpoint shallow water model sphere monthly weather review mcgregor economical determination departure points models monthly weather review morton analysis finite volume methods evolutionary problems siam journal numerical analysis morton priestley stability scheme inexact integration rairo modellisation matemathique analyse numerique morton methods supraconvergence numerische mathematik nair thomas loft discontinuous galerkin global shallow water model monthly weather review nair thomas loft discontinuous galerkin transport scheme cubed sphere monthly weather review priestley exact projections method realistic alternative quadrature journal computational physics restelli bonaventura sacco discontinuous galerkin method scalar advection incompressible flows journal computational physics restelli giraldo conservative discontinuous galerkin formulation equations nonhydrostatic mesoscale modeling siam journal scientific computing ripodas gassmann majewski giorgetta korn kornblueh wan bonaventura heinze icosahedral shallow water model icoswm results shallow water test cases sensitivity model parameters geoscientific model development ritchie application method spectral model shallow water equations monthly weather review rosatti bonaventura cesari semilagrangian environmental modelling cartesian grids cut cells journal computational physics saad schultz gmres generalized minimal residual algorithm solving nonsymmetric linear systems siam journal scientific statistical computing skamarock klemp efficiency accuracy technique monthly weather review staniforth white wood treatment vector equations momentum equation quarterly journal royal meteorological society temperton hortal simmons global spectral model quarterly journal royal meteorological society thuburn numerical simulations waves tellus thuburn white geometrical view approximation application semilagrangian departure point calculation quarterly journal royal meteorological society tumolo bonaventura restelli discontinuous galerkin method shallow water equations journal computational physics january walters numerically induced oscillations approximations equations international journal numerical methods fluids walters carey analysis spurious oscillation modes equations computers fluids williamson drake hack jacob swarztrauber standard test set numerical approximations shallow water equations spherical geometry journal computational physics zienkiewicz kelly hierarchical concept finite element analysis computers structures | 5 |
fundamental diagram rail transit application dynamic assignment aug toru kentaro daisuke august abstract urban rail transit often operates high service frequencies serve heavy passenger demand rush hours operations delayed train congestion passenger congestion interaction two delays problematic many transit systems become amplified interactive feedback however tractable models describe transit systems dynamical delays making difficult analyze management strategies congested transit systems general solvable ways fill gap article proposes simple yet physical dynamic models urban rail transit first fundamental diagram transit system relation among analytically derived considering physical interactions delays congestion based microscopic operation principles macroscopic model transit system demand supply developed continuous approximation based fundamental diagram finally accuracy macroscopic model investigated using microscopic simulation applicable range model confirmed keywords public transport rush hour fundamental diagram kinematic wave theory mfd dynamic traffic assignment introduction urban rail transit metro systems plays significant role handling transportation needs metropolitan areas vuchic notable usage morning commute heavy passenger demand focused short time period obtain general policy implications management strategies transit systems pricing gating scheduling many studies theoretically analyzed situations certain simplifications static travel time transit operations cea tabuchi kraus yoshida tian gonzales daganzo trozzi palma known urban mass transit often suffers delays caused congestion even serious incidents accidents occur kato tirachini kariyazaki means dynamical aspect transit systems important periods congestion corresponding author tokyo institute technology meguro tokyo japan institute industrial science university tokyo komaba meguro tokyo japan tokyo institute technology meguro tokyo japan instance tokyo metropolitan area tma one populated regions world rail transit systems essential operated high service frequency trains per hour per line headway two minutes serve heavy passenger demand peak hours kariyazaki unfortunately even accidents chronic delays occur almost daily passengers experience longer unreliable travel times due congestion example mean delay one major transit lines tokyo rush hour eight minutes whereas standard deviation delay two minutes iwakura kariyazaki estimated typical weekday three million commuters across entire tma experience delays social cost caused delay corresponds billion japanese yen approximately billion usd per year appropriate management strategies solve issue therefore desirable general following types congestion observed urban rail transit congestion involving consecutive trains using tracks also known delay carey congestion passengers station platforms namely bottleneck congestion doors train stopped station wada kariyazaki two types congestion interact cause delay newell potts wada kato tirachini kariyazaki cuniasse example prolong time train spends station extended dwell time interrupts operation subsequent trains causes times high service frequency passenger throughput deteriorates occurs passenger congestion stations vicious cycle extreme case known bunching newell potts cuniasse reported production loss phenomena occur almost daily railway system due congestion moreover long term chronic delays could affect passengers departure time choice kato observed phenomenon tma kim also reported route choice metro passengers affected 
congestion crowding delay therefore congestion dynamics affect dynamics transit systems reason would preferable consider dynamical aspects transit systems order obtain general policy implications transit management heavy demand rush hours similar road traffic congestion problems dynamic traffic assignment szeto iryo however authors knowledge study investigated problems transit systems aforementioned theoretical studies transit commuting cea others travel time transit system assumed constant determined static models meaning dynamical aspect neglected one reason might tractable models transit systems consider dynamics delay fill gap article proposes tractable models dynamics urban rail transit considering physical interaction differs results discomfort due standing crowding necessarily cause delay directly operation models proposed considering detailed mechanism delay congestion see vuchic koutsopoulos wang parbo cats alonso references therein used develop efficient operation schemes however purposes optimization evaluation would difficult use obtain general policy implications management strategies essentially complex intractable remainder article organized follows section simple tractable operation model rail transit formulated considering interaction model describes theoretical relation among ideal fundamental diagram section macroscopic loading model transit system demand supply change dynamically developed based proposed model based continuous approximation approach widely used automobile traffic model model called macroscopic describes aggregated behavior trains passengers certain spatial domain section approximation accuracy properties proposed macroscopic model investigated comparison results microscopic simulation section concludes article fundamental diagram rail transit system section analytically derive rail transit system based microscopic operation principles defined relation among assumptions rail transit system assume two principles rail transit operation namely train dwell behavior station passenger boarding cruising behavior railroad note equivalent employed wada passenger boarding time modeled using bottleneck model passenger boarding dwelling train assumed constant buffer time time required door dwell time dwell time train station expressed number passengers waiting board train passengers waiting train station assumed board first train arrives means passenger storage capacity train assumed unlimited cruising behavior train modeled using newell simplified model newell special case road traffic flow model richards model lighthill whitham richards model vehicle travels fast possible maintaining minimum safety clearance specifically let position train time described min indicates preceding train physical minimum headway time similar reaction time vehicle speed maximum speed minimum spacing parameter also interpreted total number passengers getting train fact general definition preferable sense however complicate following discussions thus neglect passengers getting train note carefully distinguishing two types passengers following discussions valid affect final results first term min operation indicates traffic regime train travel maximum speed second term indicates traffic congested regime train catches preceding one required decrease speed maintain safety headway distance simultaneously critical regime train speed train catches preceding one without loss generality introduce variable buffer headway time describe traffic regime steady state rail transit system consider 
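The two operation principles described above, a dwell time equal to a fixed door buffer plus the boarding time of the waiting passengers, and Newell-type car following between stations, already determine a microscopic simulation. The sketch below is one minimal way of putting them together on a single corridor; every numerical value and variable name is an assumption made for illustration, and alighting passengers, finite train capacity and terminal turnaround are ignored, as in the text.

```python
import numpy as np

# --- illustrative parameters (assumed, not taken from the paper) ---
TAU, DELTA, VF = 1.0, 200.0, 20.0   # Newell time shift [s], min spacing [m], free speed [m/s]
MU, TB         = 2.0, 10.0          # boarding capacity [pax/s], door buffer [s]
L_STA, N_STA   = 2000.0, 10         # station spacing [m], number of stations
A_PAX          = 0.5                # passenger arrival rate per station [pax/s]
H_DISPATCH     = 120.0              # dispatch headway at the upstream terminal [s]
N_TRAINS       = 15

STATIONS = np.arange(1, N_STA + 1) * L_STA

def simulate(t_end=6000.0):
    pos   = np.full(N_TRAINS, -np.inf)       # -inf means "not yet dispatched"
    dwell = np.zeros(N_TRAINS)               # remaining dwell time of each train
    nxt   = np.zeros(N_TRAINS, dtype=int)    # index of the next station to serve
    served_at = np.zeros(N_STA)              # time each station was last served
    t, log = 0.0, []
    while t < t_end:
        for i in range(N_TRAINS):
            if pos[i] == -np.inf:                    # dispatch from the terminal
                if t >= i * H_DISPATCH:
                    pos[i] = 0.0
                continue
            if dwell[i] > 0.0:                       # still boarding passengers
                dwell[i] -= TAU
                continue
            leader = pos[i - 1] if i > 0 else np.inf
            target = min(pos[i] + VF * TAU, leader - DELTA)   # Newell update
            if nxt[i] < N_STA and target >= STATIONS[nxt[i]]:
                pos[i] = STATIONS[nxt[i]]            # stop and serve the station
                waiting = A_PAX * (t - served_at[nxt[i]])
                dwell[i] = TB + waiting / MU         # buffer + boarding time
                served_at[nxt[i]] = t
                nxt[i] += 1
            else:
                pos[i] = max(target, pos[i])         # never move backwards
        log.append((t, pos.copy()))
        t += TAU
    return log

trajectories = simulate()   # list of (time, positions) suitable for a time-space plot
```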
steady state rail transit operation assumptions stated section steady state idealized traffic state change time traffic state variables typically combination flow density speed characterized certain relation called traffic flow daganzo case rail transit operation steady state defined state satisfies following conditions model parameters namely constant distance adjacent stations constant headway time successive trains constant cruising speed trains arriving station platform additionally assume trains stop every station order operate transit system assumed otherwise passenger boarding never end steady state dwell time train station represented transformed equal note control strategy transit operation need specified reasonable control strategies follow steady operation disturbance otherwise train bunching occur wada transit systems different steady states illustrated diagrams fig horizontal axis indicates time day vertical axis indicates space curves indicate train trajectories train arrives departs station travels station cruising speed finally arrives station different conditions fig speed equal speed greater zero therefore state classified regime fig speed equal equal zero therefore state classified critical regime fig speed less therefore state classified congested regime fundamental diagram general following considered traffic state variables rail transit system space train train station station time regime space train train station station time critical regime space train station train station time congested regime figure diagrams rail transit system steady states among three independent variables example combination identities continuum flow namely identity suppose relation among independent variables traffic state every steady state expressed using function function regarded rail transit system rail transit operation principle follows eqs function specified represent respectively critical state derivation eqs see appendix discussions features fundamental diagram following features derived analytically note easily found numerical example fig interpreted function determines given supply demand given technical parameters transit system although equation looks complicated represents simple relation namely piecewise linear triangular relation fixed mentioned traffic state transit system categorized three regimes critical congested standard traffic flow theory therefore critical given train traffic regime critical regime congested regime otherwise congested regime considered inefficient compared regime congested regime takes time transport amount passengers critical regime efficient terms travel time well number passengers per train however critical regime requires trains higher density regime therefore may efficient operation cost taken account mean speed differs cruising speed former takes dwelling time station cruising stations account whereas latter considers cruising time even critical regime mean speed inversely proportional passenger demand means travel time increases passenger demand increases addition size feasible area narrows increases thus operational flexibility transit system declines passenger demand increases flow density critical regime satisfy following relations assumed therefore critical regime represented straight line whose slope either positive negative plane implies qualitative difference transit systems specifically slope positive transit operation constant would transition regime congested regime passenger demand increases fig contrary slope negative operation 
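The traffic state variables introduced above are, as noted shortly below, consistent with Edie's generalized definitions taken over a time-space region A of the train trajectory diagram; in the usual notation,

```latex
q = \frac{d(A)}{|A|}, \qquad k = \frac{t(A)}{|A|}, \qquad \bar{v} = \frac{q}{k} = \frac{d(A)}{t(A)}
```

where d(A) and t(A) are the total distance traveled and the total time spent by trains inside A, and |A| is the area of A.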
would transition congested passenger demand decreases seems paradoxical actually reasonable operational efficiency degraded number trains excessive compared passenger demand note eqs consistent edie generalized definition edie traffic states therefore consistent fundamental definition traffic transit operation edie traffic state derived one easily confirm eqs satisfy equation numerical example ease understanding numerical example shown fig parameter values presented table figure horizontal axis represents vertical axis represents plot color represents slope straight line traffic state origin represents mean speed state features described section easily confirmed example figure read follows suppose passenger demand per station number trains transit system given resulting train traffic mean speed traffic state regime congested state corresponding state aforementioned state corresponding congested state critical state notice state fastest average speed given relations derived applying edie definition minimum component diagram steady state area fig whose vertexes points train departs station train arrives station iii train arrives station train departs station figure numerical example table parameters numerical example parameter value passenger demand triangular relation mentioned clearly shown figure left edge triangle corresponds regime top vertex corresponds critical regime right edge corresponds congested regime validity assumptions discuss relation operation principles proposed actual transit system first worth mentioning parameters proposed model explicit physical meaning therefore parameter calibration required approximate actual transit system relatively simple train cruising model train assumed maintain headway greater given minimum headway reasonable similar models minimum headways used existing studies carey higgins kozan huisman analyze effect train congestion delay additionally model considered moving block control one standard operation schemes trains wada presence adaptive control strategies control daganzo wada steady state likely realized aim adaptive control usually eliminate words control makes operation steady passenger boarding model namely bottleneck model coarse approximation actual phenomena would fairly reasonable capacity bottleneck ordinary pedestrian flow often considered constant lam hoogendoorn daamen however observational studies reported heavily crowded conditions boarding time could increase nonlinearly passenger numbers increase probably due interference passengers lack space carriages harris tirachini moreover stock capacity passengers proposed model therefore states excessively large small might correspond unrealistic situations limitation current model nevertheless scale derived model represents number passengers per train relation macroscopic fundamental diagram proposed resembles macroscopic fundamental diagram mfd geroliminis daganzo daganzo extensions geroliminis chiabaut similar following sense first consider dynamic traffic second describe relations among macroscopic traffic state variables traffic necessarily steady homogeneous local scale use aggregations based edie definition third unimodal relations meaning congested regimes former higher performance latter addition critical regime throughput maximized therefore expected existing approaches mfd applications modeling control optimization transport systems daganzo geroliminis levinson geroliminis fosgerau also suitable proposed transit however substantial differences proposed existing concepts 
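The free-flowing branch of the fundamental diagram discussed above follows from the steady-state identities spelled out in the appendix: all n trains pass a given station during one cycle of the loop, and the dwell time equals a fixed door buffer plus the time needed to board the passengers accumulated over one headway. The symbolic computation below reproduces that branch under one plausible reading of the symbols; the symbol names are assumptions, and the congested branch (which additionally invokes the Newell minimum-headway condition) is not reproduced.

```python
import sympy as sp

# n trains, m stations, station spacing l, free speed v_f, boarding capacity mu
# [pax/time], door buffer t_b, passenger demand a [pax/time per station], headway h
n, m, l, vf, mu, tb, a, h = sp.symbols('n m l v_f mu t_b a h', positive=True)

t_d   = tb + a * h / mu                     # dwell = buffer + boarding of a*h passengers
cycle = sp.Eq(n * h, m * (l / vf + t_d))    # n headways fit in one cycle of the loop

h_free = sp.solve(cycle, h)[0]              # steady headway in the free-flowing regime
q = sp.simplify(1 / h_free)                 # train flow past a point = 1/headway
print(q)                                    # equivalent to (n/m - a/mu) / (l/v_f + t_b)

k = sp.symbols('k', positive=True)          # train density; on a loop of length m*l, n = k*m*l
print(sp.simplify(q.subs(n, k * m * l)))    # same branch in terms of k:
                                            #   equivalent to (k*l - a/mu) / (l/v_f + t_b)
```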
comparison original mfd geroliminis daganzo daganzo railway variant cuniasse proposed additional dimension comparison mfd geroliminis describes relations among total traffic flow car density bus density traffic network proposed explicitly models physical interaction among three variables comparison passenger mfd chiabaut describes relation passenger flow passenger density passengers choose travel car bus proposed passenger demand degrade performance speed vehicles inclusion boarding time dynamic model based fundamental diagram recall proposed describes relationship among traffic variables steady state therefore behavior dynamical system demand supply change time described feature road traffic mfds section formulate model urban rail transit operation demand supply change dynamically proposed model individual train passenger trajectories explicitly described therefore model called macroscopic proposed model based model merchant nemhauser carey mccartney proposed employed function words transit system considered system illustrated fig modeling approach often employed traffic approximations analysis using mfds optimal control avoid congestion daganzo analyses user equilibrium social optimum morning commute problems geroliminis levinson fosgerau advantage approach may possible conduct mathematically tractable analysis dynamic complex transportation systems detailed traffic dynamics difficult model tractable case transit operations train cumulative passenger cumulative railway system internal average dynamics internal average travel time train cumulative passenger determined model cumulative figure railway system system formulation time let inflow trains transit system inflow passengers outflow trains transit system outflow passengers set initial time therefore let cumulative values respectively let travel time train entered system time let initial value given travel time simplify formulation trip length passengers assumed equal means travel time trains passengers functions interpreted follows trains departure rate origin station time passengers arrival rate platform origin station time trains arrival rate final destination station time passengers arrival rate destination station time travel time train passengers origin departs time destination note arrival time destination therefore reality determined transit operation plan passenger departure time choice respectively endogenously determined operational dynamics accordance modeling train traffic modeled follows first exit flow assumed assumption reasonable average trip length shared trains passengers different modification ratio average trip length passengers trains would useful function considered means dynamics transit system modeled taking conservation trains account follows dlk represents length transit route model employed several studies represent macroscopic behavior transportation system merchant nemhauser carey mccartney daganzo note average defined consistent based functions equations eqs sequentially words train traffic computed using initial boundary conditions model based passenger traffic derived follows definition travel time trains holds already obtained travel time holds computed computed definition travel time passengers also discussion proposed macroscopic model computes train passenger based function initial boundary conditions notable feature model highly tractable based model therefore expect proposed model useful analyzing various management strategies transit systems dynamic pricing morning commute proposed model 
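The exit-flow ("reservoir") dynamics described above reduce to a single ordinary differential equation for the train accumulation, closed in the paper by the fundamental diagram. A minimal forward-Euler sketch follows; the demand profile, the exit-flow callable and all names are placeholders rather than the paper's actual closure.

```python
import numpy as np

def reservoir_loading(inflow, exit_flow, t_end, dt):
    """Forward-Euler loading of the reservoir model sketched above.

    inflow(t): train inflow rate at the origin boundary.
    exit_flow(n, t): train outflow rate as a function of the accumulation n;
        in the paper this role is played by the fundamental diagram, here it is
        any callable supplied by the user.
    Returns the time axis and cumulative in/out curves; under FIFO the travel
    time of the vehicle entering at time t is read off horizontally between them.
    """
    steps = int(round(t_end / dt))
    t = np.arange(steps + 1) * dt
    n = np.zeros(steps + 1)      # accumulation (trains currently in the system)
    N_in = np.zeros(steps + 1)   # cumulative inflow
    N_out = np.zeros(steps + 1)  # cumulative outflow
    for i in range(steps):
        d = exit_flow(n[i], t[i])
        n[i + 1] = max(n[i] + dt * (inflow(t[i]) - d), 0.0)
        N_in[i + 1] = N_in[i] + dt * inflow(t[i])
        N_out[i + 1] = N_out[i] + dt * d
    return t, N_in, N_out

# illustrative run with a triangular demand peak and a concave exit-flow function
peak = lambda t: 0.02 + 0.03 * max(0.0, 1.0 - abs(t - 3600.0) / 1800.0)
exit_fn = lambda n, t: 0.05 * n / (1.0 + 0.2 * n)
t, N_in, N_out = reservoir_loading(peak, exit_fn, t_end=7200.0, dt=10.0)
```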
accurately approximate macroscopic behavior transit operation operation small headway time moderate changes demand supply models reasonable changes inflow moderate compared relaxation time dynamical system situations often occur busy metropolitan subway systems suffer congestion delay rush hours heavy demand model may useful investigating congestion problems however accuracy model expected decrease operation low steadiness event train bunching next section quantitative accuracy model verified using numerical experiments model also derive social cost benefit transit system example generalized travel cost passengers travel time schedule delay crowding disutility calculated addition operation cost transit system calculated parameters considered sum number passengers boarding alighting mentioned note simply define equal model also computable using similar procedure verification macroscopic model section verify quantitative accuracy macroscopic model comparing results microscopic model eqs validity macroscopic model investigated comparing solutions microscopic models using initial boundary conditions model parameters simulation setting parameter values transit operation listed table microscopic macroscopic models railroad considered corridor stations equally spaced intervals total stations trains enter railroad flow microscopic model discrete train enters railroad upstream boundary station integer part incremented microscopic model trains leave railroad downstream boundary station without restrictions passenger boarding minimum headway clearance passengers arrive station flow functions exogenously determined mimic morning rush hours peak flow peak time increases monotonically whereas flow peak time decreases words considered functions specified values given scenario parameters simulation duration set baseline scenario section sensitivity analysis section reason explained later microscopic model without control asymptotically unstable proven wada means demand supply always cause train bunching making experiment unrealistic useless therefore control scheme proposed wada implemented microscopic model prevent bunching stabilize operation scheme two control measures holding extending dwell time increase speed similar daganzo former activated train following train delayed represented increase microscopic model latter activated train delayed represented increase maximum allowable speed vmax experiment vmax set control scheme considered realistic reasonable similar operations executed practice see appendix details control scheme results first examine well proposed model reproduces behavior transit system conditions results baseline scenario presented section figure result microscopic model baseline scenario train passenger figure result macroscopic model baseline scenario sensitivity analysis conditions conduced applicable ranges proposed model investigated section baseline scenario baseline scenario parameter values investigated first solution microscopic model shown fig diagram colored curves represent trajectories train traveling upward direction stopping every station around peak time period train congestion occurs namely trains stop occasionally stations order maintain safety interval congestion caused heavy passenger demand therefore situation rush hour reproduced result given macroscopic model shown fig cumulative plots fig shows cumulative curves trains blue curve represents inflow red curve represents outflow fig shows passengers manner congestion delay observed around peak period remarkable 
passenger traffic example peak time period less time means throughput transit system reduced heavy passenger demand consequently greater peak hours periods meaning delays occur due congestion macroscopic microscopic models compared terms cumulative number figure comparison macroscopic microscopic models baseline scenario trains fig figure solid curves denote macroscopic model dots denote microscopic model clear macroscopic model follows microscopic model fairly precisely example congestion delay peak time period captured well however slight bias macroscopic model gives slightly shorter travel time mainly due unsteady state train bunching generated microscopic model delay caused bunching recovered microscopic model implemented control scheme details see appendix means control bias could reduced sensitivity analysis conditions accuracy macroscopic model regarding dynamic patterns examined worth investigating quantitatively qualitatively clear model valid speed changes sufficiently small discussed section specifically sensitivity peak passenger demand train supply evaluated assigning various values parameters simulation duration set take residual delay scenarios account parameters baseline scenario results summarized fig fig shows relative difference total travel time ttt trains microscopic macroscopic models various peak passenger flows negative values indicate ttt macroscopic model smaller relative difference considered error index macroscopic model fig compares absolute value ttt model note missing values relative error due macroscopic model derive solution given conditions exceeds jam density corresponds gridlock transportation system according results fig accuracy macroscopic model high peak passenger demand low expected results speed demand change slow cases ttt given macroscopic model almost always less microscopic model might due aforementioned inconsistency steady state assumption macroscopic model control microscopic model peak passenger demand increases relative error increases gradually demand low increases suddenly demand exceeds certain value sudden increase relative error regarding passenger demand absolute values ttt figure comparison microscopic macroscopic models different conditions error extraordinary train bunching microscopic model confirmed fig absolute value ttt microscopic model also exhibits sudden increase demand exceeds certain value bunching often occurs cases excessive passenger demand demand considered unrealistically excessive dwell time train station longer cruising time adjacent stations situations usually occur even rush hours sensitivity train supply weak tendency faster variations supply cause larger errors also expected result results conclude proposed model fairly accurate ordinary passenger demand although able reproduce extraordinary unrealistic situations daily travel excessive train bunching might acceptable representing transit systems usual rush hours conclusion paper following three models urban rail transit system analyzed microscopic model model describing trajectories individual trains passengers based newell model passenger boarding model represented eqs solved using simulations fundamental diagram exact relationship among microscopic model steady state represented eqs equation macroscopic model model describing train passenger traffic using model whose function represented eqs solved using simple simulations macroscopic model original contributions study whereas microscopic model proposed wada microscopic model considered approximation actual 
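For the comparison of the two models that follows, the total travel time can be read from the cumulative curves as the area between inflow and outflow counts, and the models compared through a signed relative difference; a small helper with assumed names:

```python
import numpy as np

def total_travel_time(t, N_in, N_out):
    """Total travel time = area between the cumulative in/out curves."""
    return np.trapz(N_in - N_out, t)

def relative_difference(ttt_macro, ttt_micro):
    """Signed relative difference of the macroscopic model against the microscopic one."""
    return (ttt_macro - ttt_micro) / ttt_micro
```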
transit system represents exact relation among steady state traffic variables microscopic model macroscopic model considered macroscopic approximation behavior microscopic model implies several insights transit system addition according results numerical experiment macroscopic model reproduce behavior microscopic model accurately except cases unrealistically excessive demands simplicity mathematical tractability good approximation accuracy proposed macroscopic model ordinary situations expect contribute obtaining general policy implications management strategies rail transit systems pricing control morning commute problems improvements proposed model considerable first model ignores could solved using nonlinear passenger boarding model instead introducing disutility crowding term departure time choice problem macroscopic model tian palma second variability reliability transit system robustness unpredictable disturbances travel time reliability issues considered proposed model stochastic extension model might useful problem third extension cases heterogeneity spatially heterogeneous station distributions passenger demand would make model considerably realistic application proposed model following morning commute problems investigated authors user equilibrium departure time choice problem find equilibrium given desired arrival time passengers optimal demand control problem find total travel cost minimized given optimal demand supply control problem find total travel cost minimized given solutions problems would provide general insights demand supply management strategies transit systems dynamic pricing operation planning furthermore multimodal commuting problems combined travel mode choices trains modeled proposed cars buses modeled mfds also considerable acknowledgements part research financially supported kakenhi scientific research derivation appendix describes derivation expressed eqs consider looped rail transit system steady state operation let length railroad number stations number trains headway time operation dwelling time train station cruising time train adjacent stations passenger demand flow rate per station note distance adjacent stations number passengers boarding train station headway time operation derived follows round trip time train looped railroad trains pass station time identities hold moreover definition headway newell rule headway time must satisfy reduces relation regime derived follows definition transformed critical state derived follows substituting using identity obtain minimum zero namely relation congested regime derived follows first relation congested regime easily derived relation identity consider identical derived constant negative therefore relation linear congested regime recalling linear curve passes point slope relation congested regime derived eqs constructed based eqs adaptive control scheme microscopic model appendix briefly explains adaptive control scheme preventing train bunching proposed wada scheme consists two control measures holding station increasing maximum speed cruising first scheme modifies buffer time dwelling originally defined train station max represents delay represents time train arrives station represents scheduled time without delay train arrive station weighting parameter scheme represents typical holding control strategy similar bunching prevention method daganzo extends dwelling time vehicle headway preceding vehicle small vice versa second scheme modifies cruising speed interstation travel time reduced min max means event 
delay train tries catch increasing cruising speed maximum allowable speed vmax implies speed buffered maximum speed meanwhile proposed train operation model study operation therefore study scheduled headway scheme approximated planned frequency thus set substitute stationary state operational dynamics original scheme identical steady state defined section case scheme makes train operation asymptotically stable meaning operation schedule robust small disturbances case scheme prevents propagation amplification delay recover original schedule small shift found fig due note control measures interrupt passenger boarding violate safety clearance trains meaning fundamental assumptions proposed satisfied references alonso munoz ibeas moura congested dwell time dependent transit corridor assignment model journal advanced transportation carey stochastic approximation effects headways delays trains transportation research part methodological carey mccartney model used dynamic traffic assignment computers operations research cats west eliasson dynamic stochastic model evaluating congestion crowding effects transit systems transportation research part methodological chiabaut evaluation multimodal urban arterial passenger macroscopic fundamental diagram transportation research part methodological cuniasse buisson rodriguez teboul almeida analyzing railroad congestion dense urban network use road traffic network fundamental diagram concept public transport daganzo fundamentals transportation traffic operations pergamon oxford daganzo urban gridlock macroscopic modeling mitigation approaches transportation research part methodological daganzo approach eliminate bus bunching systematic analysis comparisons transportation research part methodological cea transit assignment congested public transport systems equilibrium model transportation science palma kilani proost discomfort mass transit implication scheduling pricing transportation research part methodological palma lindsey monchambert economics crowding public transport working paper edie discussion traffic stream measurements definitions almond editor proceedings international symposium theory traffic flow pages fosgerau congestion bathtub economics transportation geroliminis daganzo macroscopic modeling traffic cities transportation research board annual meeting geroliminis levinson cordon pricing consistent physics overcrowding lam wong editors transportation traffic theory pages springer geroliminis haddad ramezani optimal perimeter control two urban regions macroscopic fundamental diagrams model predictive approach ieee transactions intelligent transportation systems geroliminis zheng ampountolas macroscopic fundamental diagram mixed urban networks transportation research part emerging technologies gonzales daganzo morning commute competing modes distributed demand user equilibrium system optimum pricing transportation research part methodological harris train boarding alighting rates high passenger loads journal advanced transportation higgins kozan modeling train delays urban networks transportation science hoogendoorn daamen pedestrian behavior bottlenecks transportation science huisman kroon lentink vromans operations research passenger railway transportation statistica neerlandica iryo properties dynamic user equilibrium solution existence uniqueness stability robust solution methodology transportmetrica transport dynamics iwakura takahashi morichi multi agent simulation model estimating train delays urban rail operation transport policy 
studies review volume institute transport policy studies japanese kariyazaki investigation train delay recovery mechanism delay prevention schemes urban railway phd thesis national graduate institute policy studies japanese kariyazaki hibino morichi simulation analysis train operation recover delay intervals case studies transport policy kato kaneko soyama choices urban rail passengers facing unreliable service evidence tokyo proceedings international conference advanced systems public transport kim hong kim crowding affect path choice metro passengers transportation research part policy practice koutsopoulos wang simulation urban rail operations application framework transportation research record journal transportation research board kraus yoshida commuter decision optimal pricing service urban mass transit journal urban economics lam cheung poon study train dwelling time hong kong mass transit railway system journal advanced transportation dessouky yang gao joint optimal train regulation passenger flow control strategy metro lines transportation research part methodological lighthill whitham kinematic waves theory traffic flow long crowded roads proceedings royal society london series mathematical physical sciences merchant nemhauser model algorithm dynamic traffic assignment problems transportation science newell simplified theory lower order model transportation research part methodological newell potts maintaining bus schedule proceedings australian road research board volume parbo nielsen prato passenger perspectives railway timetabling literature review transport reviews richards shock waves highway operations research szeto dynamic traffic assignment properties extensions transportmetrica tabuchi bottleneck congestion modal split journal urban economics tian huang yang equilibrium properties morning commuting mass transit system transportation research part methodological tirachini hensher rose crowding public transport systems effects users operation implications estimation demand transportation research part policy practice trozzi gentile bell kaparias dynamic user equilibrium public transport networks passenger congestion hyperpaths transportation research part methodological vuchic urban transit operations planning economics john wiley sons wada kil akamatsu osawa control strategy prevent delay propagation railway systems journal japan society civil engineers ser infrastructure planning management japanese extended abstract english presented european symposium quantitative methods transportation systems available https | 3 |
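The appendix above states the steady-state identities of the looped rail line in words: one round trip consists of a cruise segment plus a dwell per station, and in steady operation trains pass each station once per headway, so headway equals round-trip time divided by the number of trains. A minimal numerical sketch of those identities follows, assuming in addition a simple linear boarding law (dwell = buffer + per-passenger boarding time × passengers accumulated over one headway); all function names, parameter names and numbers here are illustrative placeholders, not the paper's.

```python
# Illustrative sketch (not the paper's code): steady-state relations of a looped
# rail line. Identities used, as stated in the appendix: a round trip is one
# cruise segment and one dwell per station, and headway = round trip / trains.
def steady_state_headway(n_stations, n_trains, t_cruise, t_dwell):
    """Round-trip time of one train and the resulting headway (seconds)."""
    round_trip = n_stations * (t_cruise + t_dwell)
    headway = round_trip / n_trains
    return round_trip, headway


def solve_steady_state(n_stations, n_trains, t_cruise, demand_per_station,
                       t_buffer, boarding_time_per_passenger, tol=1e-9):
    """Fixed-point iteration for (headway, dwell time).

    Assumed boarding law (hedged, not the paper's exact model): passengers who
    arrive at a station during one headway board the next train, so
    boardings = demand_per_station * headway and
    dwell = t_buffer + boarding_time_per_passenger * boardings.
    If demand is too high the iteration does not settle, which loosely
    corresponds to the congested regime discussed above.
    """
    t_dwell = t_buffer
    for _ in range(10_000):
        _, headway = steady_state_headway(n_stations, n_trains, t_cruise, t_dwell)
        boardings = demand_per_station * headway
        new_dwell = t_buffer + boarding_time_per_passenger * boardings
        if abs(new_dwell - t_dwell) < tol:
            t_dwell = new_dwell
            break
        t_dwell = new_dwell
    return headway, t_dwell


if __name__ == "__main__":
    # Made-up numbers: 20 stations, 10 trains, 120 s cruise between stations,
    # 0.2 pax/s arriving per station, 15 s dwell buffer, 1 s boarding per pax.
    h, d = solve_steady_state(n_stations=20, n_trains=10, t_cruise=120.0,
                              demand_per_station=0.2, t_buffer=15.0,
                              boarding_time_per_passenger=1.0)
    print(f"steady-state headway = {h:.1f} s, dwell time = {d:.1f} s")
```

With these invented numbers the fixed point is a dwell of 105 s and a headway of 450 s, illustrating how dwell time and headway must be solved jointly: longer dwells lengthen the round trip, which lengthens the headway, which in turn increases boardings and dwells.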
jul waring problem unipotent algebraic groups michael larsen dong quan ngoc nguyen abstract paper formulate analogue waring problem algebraic group field level consider morphism varieties ask whether every element product bounded number elements give affirmative answer unipotent characteristic zero field formally real idea integral level except one must work schemes question whether every element finite index subgroup written product bounded number elements prove case unipotent ring integers totally imaginary number field introduction original version waring problem asks whether every positive integer exists every integer form anm minimum value since hilbert proved bound exists enormous literature developed largely devoted determining also substantial literature devoted variants waring problem kamke proved generalization theorem nth powers replaced general polynomials series papers wooley solved waring problem polynomials siegel treated case rings integers number fields since many papers analyzed waring problem wide variety rings instance also flurry recent activity waring problem groups typical problem prove every element product small number nth powers elements see instance lst agks references therein paper explores view algebraic groups natural setting waring problem extent resembles work waring problem groups lie type work variants waring problem also fit naturally framework consider morphisms varieties resp schemes defined field resp number ring look bounded generation groups generated images partially supported nsf grant michael larsen dong quan ngoc nguyen strategy developed unipotent algebraic groups fields characteristic formally real justification concentrating unipotent case given lemma following remarks solve unipotent version waring problem totally imaginary number rings work general characteristic fields general number rings consider easier waring problem one allowed use inverses methods throughout elementary input analytic number theory siegel solution waring problem number rings unfortunately original situation waring problem namely ring additive group morphism given results fall short hilbert theorem prove easier waring problem case rather statement every positive integer represented bounded sum nth powers difficulty course ordering seems natural ask whether unipotent groups general number rings one characterize set ought expressible bounded product images proving easier waring problem simply avoid issue generating subvarieties throughout paper always field characteristic algebraic group variety reduced separated scheme finite type particular need connected subvariety closed subgroup always understood defined definition let algebraic group field subvariety generating exists every generic point lies image product map finite collectionsfi morphisms generating union zariski closures generating following necessary sufficient condition subvariety generating proposition let algebraic group closed subvariety generating satisfies following two properties contained proper closed subgroup every proper closed normal subgroup image positive dimension first prove following technical lemma lemma let algebraically closed let irreducible closed subvarieties assume dim dim waring problem unipotent algebraic groups dim dim exists closed subgroup following statements true proof irreducible irreducible generic point closure image therefore closure image thus irreducible closed subvarieties dimension dim dim thus follows defining see depend choice moreover every written thus follows subgroup 
since algebraically closed implies closed subgroup implies connected contains xyx dimension dim dim thus xyx hxyxh connected dimension dim follows double coset hxyxh consists single left coset xyx also normalizes follows normalizes finally hyh using prove proposition proof clearly true image finite true proves necessity conditions sufficiency may assume without loss generality algebraically closed dim bounded sequence integers therefore stabilizes let denote irreducible component dimension dim irreducible component dim dim dim dim dim dim conditions lemma satisfied let closed subgroup satisfying translate irreducible connected component means union components whenever applying condition subgroups containing follows every generic point lies michael larsen dong quan ngoc nguyen generated claimed dim dim normal image finite contrary condition normalizer contained since depend choice component contrary condition henceforth assume connected interested generating collections morphisms theorem chevalley barsotti rosenlicht every connected algebraic group closed normal subgroup linear algebraic group abelian variety every map rational curve abelian variety trivial thus unless linear algebraic group impossible collection morphisms generating let denote respectively radical unipotent radical lemma exist generating set morphisms proof suffices prove generating set morphisms connected reductive group thus may assume without loss generality connected reductive radical inclusion map induces isogeny tori suffices prove generating set morphisms torus without loss generality may assume thus may replace quotient isomorphic multiplicative group suffices prove morphism curves level coordinate rings obvious statement every maps element equivalently fact need consider case extension semisimple group unipotent group connected semisimple case perhaps even interesting know least always expect bounded generation since example bounded generation elementary matrices since characteristic unipotent necessarily connected prop derived group likewise unipotent prop therefore connected quotient unipotent prop commutative therefore vector group prop galois cohomology group vanishes iii prop cohomology sequence short exact sequence prop implies identify groups waring problem unipotent algebraic groups distinguish closed vector subgroups level algebraic groups corresponding vector space subspace denote inverse image regarded algebraic group lemma let connected unipotent algebraic group let proper closed subgroup normalizer strictly larger proof use induction dim case dim trivial since implies commutative normalizer every subgroup general unipotent fact lower central series goes implies center positive dimension contained strictly larger otherwise replacing respectively see normal strictly larger normal proposition unipotent group every proper closed subgroup contained normal subgroup codimension contains derived group proof characteristic zero connected proposition asserts contains codimension normal subgroup containing proper subgroup proposition amounts obvious statement every vector group contains normal subgroup codimension general case applying previous lemma replace strictly larger group unless normal operation repeated finitely many times since strictly larger must strictly higher dimension since every closed subgroup unipotent group unipotent therefore connected thus may assume normal unipotent replacing respectively done proposition deduce unipotent groups following simple criterion lemma let unipotent group 
subvariety resp set morphisms generating proper subspace projection positive dimension resp composition projection note question whether set morphisms generating depends set compositions quotient map also invariant left right translation element michael larsen dong quan ngoc nguyen lemma generating positive integers proof image image therefore finite subgroup infinite group record following lemma needed later lemma let unipotent group derived group derived group proper subspace exists dense open subvariety dense open subvariety form lie proof without loss generality assume algebraically closed characteristic connected connected composition commutator map quotient map property image generates therefore contained follows inverse image complement dense open chevalley theorem projection onto first factor constructible set containing generic point therefore contains open dense fiber point condition linear image satisfied least one defined condition satisfies properties claimed unipotent waring problem nonreal fields definition say field nonreal characteristic zero formally real sum squares main theorem section following theorem unipotent algebraic group nonreal field generating set positive integer proof occupies rest section depends following two propositions proposition theorem holds vector group proposition hypotheses theorem exists integer sequence elements sequence positive integers sequence integers waring problem unipotent algebraic groups defined map morphisms generating assuming propositions hold prove theorem induction dimension commutative proposition applies otherwise apply proposition construct letting denote composition proposition asserts every element represented bounded product elements exists bounded product elements lies defining suffices prove every element bounded product elements generating true theorem follows induction thus need prove propositions prove proposition begin special case proposition characteristic zero field formally real positive integer exists integer every vector sum elements proof integer let xkd thus xid projection map onto last coordinate particular xkd positive integers theorem theorem positive integer exists every element sum dth powers elements proceed induction theorem trivial assume every holds choose large enough element sum powers particular denote limit xid clearly taking unions semiring let xjd xid xjd xij denote projection map onto first coordinates choosing exists chosen ment michael larsen dong quan ngoc nguyen either either way exists element done induction hypothesis may therefore assume implies moreover fails injective argument applies may assume isomorphism semirings therefore isomorphism rings since target ring thus regard ring homomorphism maps idempotent since exists follows factors projection onto ith coordinate thus exists ring endomorphism absurd prove proposition proof let pmj maximum degrees pij let chosen proposition write pij aijk given goal find satisfy system equations aijk proposition choosing suitably choose values yjk independently definition thus rewrite system equations aijk yjk always solvable unless relation among linear forms left hand side system sequence waring problem unipotent algebraic groups aijk true pij words defining constant contrary assumption finally prove proposition proof suppose already constructed let denote composition projection let denote vector space spanned set suppose proper subspaces exist represent different classes follows composition projection therefore generating set morphisms thus may 
assume proper subspace apply lemma deduce existence proper closed subspace commutator let denote composition quotient map generating exists finitely many values without loss generality assume proposition exists bounded product fsm write proposition exist zero aki thus means goes constant coset without loss generality may assume let element realized product values elements fbm michael larsen dong quan ngoc nguyen belongs proposition exists choose either fsm either way commutator fsm lies finitely many least one mod induction codimension proposition follows unipotent waring problem totally imaginary number rings section denotes totally imaginary number field ring integers closed group scheme unitriangular matrices thus generic fiber closed subgroup therefore unipotent moreover filtration normal subgroups successive quotients finitely generated free abelian groups particular definition finitely generated nilpotent group whose hirsch number sum ranks successive quotients set said generating main theorem section following integral version theorem theorem generating set positive integer subgroup finite index begin proving results allow establish power subset group gives finite index subgroup lemma let group finite index subgroup subset positive integer exists finite index subgroup proof without loss generality assume normal consider finite set element choose pair representing choose greater values appearing pairs let multiple greater positive integers union cosets depend image therefore subset finite group closed multiplication therefore subgroup lemma follows waring problem unipotent algebraic groups lemma let finitely generated nilpotent group normal subgroup every finite index subgroup contains finite index subgroup normal proof prove exists function depending normal subgroup every subgroup index contains normal subgroup index replacing kernel left action may assume without loss generality normal prove claim induction total number prime factors prime suffices prove upper bound independent number normal subgroups index true intersecting fixed central series gives central series every index normal subgroup inverse image index subgroup finitely generated abelian group prime factors prime factor normal subgroup index normal subgroup index induction hypothesis contains normal subgroup index index divides applying induction hypothesis deduce existence normal subgroup index proposition let finitely generated nilpotent group normal subgroup subset positive integers contains finite index subgroup image contains finite index subgroup exists finite index subgroup proof let denote subgroup generated intersection finite index image finite index finite index subgroup finitely generated nilpotent group also finitely generated nilpotent replacing respectively assume without loss generality generates replacing may assume contains finite index subgroup lemma may assume normal subgroup let denote image finite index subgroup inverse image subgroup respectively proposition holds replacing reduce case finite need show contains finite index subgroup finite index subgroup replacing may assume meets every fiber particular contains element michael larsen dong quan ngoc nguyen replacing may assume contains identity positive integer let denote maximum fibers cardinality intersection fiber thus intersection every fiber least since fiber size bounded sequence must eventually stabilize replacing suitable power thus closed multiplication meets every fiber number points implies implies thus subgroup bounded index next prove 
criterion subgroup finite index proposition let hirsch number satisfies dim equality holds finite index proof hirsch number additive short exact sequences let let central series decreasing filtration quotient free abelian subgroup dim every free abelian subgroup rank equality commensurable implies applying argument get dim equality holds subgroup commensurable implies index prove theorem showing contains subset group hirsch number dim first treat commutative case proposition theorem holds commutative proof first claim exist integers lod xdm waring problem unipotent algebraic groups since finite index replacing larger integer also denoted guarantee every element group generated written sum elements prove claim use proposition show basis vector sum elements replacing representation sufficiently divisible positive integer follows written sum elements suitable positive integers see difference see theorem thus every element subring generated ith powers elements theorem siegel see theorem implies exist every element sum ith powers elements thus every element sum ith powers elements therefore every element oei sum elements letting denote positive integer divisible replacing write every element lod sum elements restricting generic fiber write vector polynomials pij given solve system equations whenever solve yjk system always solvable solvable whenever sufficiently divisible thus exists integer divisible sufficiently large sum terms belongs let dom finite filtration whose quotients finitely generated free abelian groups must contain subgroup finite index defining follows contains every represented element therefore sequence stabilizes subgroup rank finite index prove theorem proof first observe proposition remains true precisely assuming morphisms defined elements taken morphisms defined michael larsen dong quan ngoc nguyen instead using proposition use proposition image element guaranteed lemma may lie lattice positive integer multiple property respect unchanged replaced power elements guaranteed proposition may lie clear denominators multiplying suitable positive integer element exist long guaranteed replacing suitable positive integral multiple induction dim proceed proof theorem usings induction hypothesis exists contains subgroup hirsch number dim hand proposition exists bounded power contains subgroup hirsch number dim denotes composition quotient map theorem follows proposition additivity hirsch numbers easier unipotent waring problem recall classical easier waring problem prove every positive integer exists every integer written form anm determine minimum value section prove unipotent analogues easier waring problem arbitrary fields characteristic zero rings integers arbitrary number fields theorem unipotent algebraic group field characteristic zero generating set positive integer theorem let number field ring integers closed group scheme unitriangular matrices generating set positive integer subgroup bounded index proof theorem depends variants propositions waring problem unipotent algebraic groups proposition field characteristic zero positive integer exists integer represented proof theorem proposition theorem holds vector group proof let pmj maximum degrees pij let chosen proposition writing pij aijk given goal find suitable light proposition one let let thus system equivalent system equations aijk proposition choosing suitably choose values yjk independently definition thus rewrite system equations aijk yjk arguing proof proposition see system equations always solvable unless constant 
modulo proper subspace constant michael larsen dong quan ngoc nguyen canonical projection impossible since set morphisms generating proposition hypotheses theorem exists integer sequence elements sequence positive integers sequence integers sequence integers sequences elements defined map morphisms generating proof using proposition arguments proposition proposition follows immediately proof theorem proof theorem theorem using propositions proceed proof theorem using induction dim theorem follows immediately next prove integral variant proposition greater generality need theorem proposition let integral domain whose quotient field characteristic zero positive integers exist proof integer set choose proposition basis vector written form yjd replacing representation follows exists waring problem unipotent algebraic groups apply prove oem let replacing deduce next result variant proposition proposition theorem holds commutative proof let chosen proposition restricting generic fiber write vector polynomials pij given solve system equations whenever solve system yjk system always solvable solvable whenever sufficiently divisible thus exists integer divisible sufficiently large sum terms belongs let dom set define copies using arguments proof proposition subgroup finite index therefore sequence stabilizes subgroup rank finite index prove theorem proof theorem first observe proposition remains true precisely assuming morphisms defined elements taken morphisms defined proceed proof theorem using induction dim induction hypothesis exists integer subgroup hirsch number dims hand proposition exists bounded power subgroup hirsch number dim denotes composition quotient map michael larsen dong quan ngoc nguyen theorem follows proposition additivity hirsch numbers references agks avni nir gelander tsachik kassabov martin shalev aner word values adelic groups bull lond math soc birch waring problem number fields acta arith car mireille waring pour les corps fonctions luminy carter david keller gordon bounded elementary generation sln amer math carter david keller gordon elementary expressions unimodular matrices comm algebra chinburg ted infinite easier waring constants commutative rings topology appl demazure michel gabriel pierre groupes tome groupes commutatifs avec appendice corps classes local par michiel hazewinkel masson cie paris publishing amsterdam ellison william waring problem fields acta arith gallardo luis vaserstein leonid strict waring problem polynomial rings number theory grunewald fritz schwermer joachim free nonabelian quotients orders imaginary quadratic numberfields algebra guralnick robert tiep pham huu effective results waring problem finite simple groups amer math larsen michael waring problem rational functions one variable preprint kamke verallgemeinerungen des satzes math ann lst larsen michael shalev aner tiep pham huu waring problem finite simple groups annals math liu wooley trevor waring problem function fields reine angew math rosenlicht maxwell basic theorems algebraic groups amer math serre cohomologie galoisienne contribution verdier lecture notes mathematics springerverlag york shalev aner word maps conjugacy classes noncommutative waringtype theorem annals math siegel carl ludwig generalization waring problem algebraic number fields amer math siegel carl ludwig sums powers algebraic integers ann math tavgen bounded generation normal twisted chevalley groups rings proceedings international conference algebra part novosibirsk contemp part amer math providence waring 
problem unipotent algebraic groups voloch felipe waring problem acta arith wooley trevor simultaneous additive equations proc london math soc wooley trevor simultaneous additive equations reine angew math wooley trevor simultaneous additive equations iii mathematika wright edward maitland easier waring problem london math soc department mathematics indiana university bloomington indiana usa address mjlarsen department applied computational mathematics statistics university notre dame notre dame indiana usa address | 4 |
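For reference, the classical statements alluded to in the introduction of the paper above, restated in standard notation (these are textbook formulations, not notation taken from the paper itself); the group-scheme results above are analogues in which one asks for bounded products of values f(x) of a generating morphism, with inverses also allowed in the "easier" version.

```latex
% Waring's problem (solved by Hilbert): bounded sums of k-th powers suffice.
\[
  \forall k \ge 2 \;\exists\, g(k) \;\forall n \in \mathbb{N}:\qquad
  n = x_1^{k} + x_2^{k} + \cdots + x_{g(k)}^{k},
  \qquad x_i \in \mathbb{Z}_{\ge 0}.
\]
% "Easier" Waring problem (Wright): signs are allowed.
\[
  \forall k \ge 2 \;\exists\, v(k) \;\forall n \in \mathbb{Z}:\qquad
  n = \pm x_1^{k} \pm x_2^{k} \pm \cdots \pm x_{v(k)}^{k},
  \qquad x_i \in \mathbb{Z}_{\ge 0}.
\]
```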
nov fair till piotr mervin kai institut softwaretechnik und theoretische informatik berlin germany wilker abstract study following multiagent variant knapsack problem given set items set voters value budget item endowed cost voter assigns item certain value goal select subset items total cost exceeding budget way consistent voters preferences since preferences voters items vary significantly need way aggregating preferences order select socially preferred valid knapsack study three approaches aggregating voters preferences motivated literature multiwinner elections fair allocation way introduce concepts individually best diverse fair knapsack study computational complexity including parameterized complexity complexity restricted domains computing aforementioned concepts multiagent knapsacks introduction classic knapsack problem given set items cost value budget goal find subset items maximal sum values subject constraint total cost selected items must exceed budget paper studying following variant knapsack problem instead single objective value item assume set agents also referred voters potentially different valuations items choosing subset items want take account possibly conflicting preferences voters respect items selected paper discuss three different approaches voters valuations aggregated multiagent knapsack forms abstract model number scenarios first observe natural generalization model multiwinner elections research initiated within student project research teams organized research group algorithmics computational complexity berlin berlin germany supported dfg project damm case items come different costs literature multiwinner elections items often called candidates multiwinner voting rules applicable broad class scenarios ranging selecting representative committee experts recommendation systems resource allocation facility location problems settings quite natural consider different incur different costs algorithms multiagent knapsack viewed tools participatory budgeting problem authorities aggregate citizens preferences order decide potential local projects obtain funding perhaps straightforward way aggregate voters preferences select subset knapsack maximizes sum utilities voters selected items call selecting individually best knapsack subject differences methods used elicitating voters preferences taken benabbou perny context participatory budgeting goel benade however selecting individually best knapsack discriminate even large minorities voters illustrated following simple example assume set items divided two subsets items unit cost voters like items assigning utility utility items remaining voters like items individually best knapsack would contain items voters would effectively disregarded paper introduce two approaches aggregating voters preferences selecting collective knapsack one call selecting diverse knapsack inspired rule literature multiwinner voting informally speaking approach aim maximizing number voters least one preferred item selected knapsack second main focus paper call selecting fair knapsack use concept nash welfare literature fair allocation nash welfare solution concept implements tradeoff objectively efficient resource allocation knapsack case allocation acceptable large population agents indeed properties nash welfare recently extensively studied literature fair allocation solution concept considered context public decision making online resource allocation transmission congestion control referred proportional fairness thus work introduces new 
application goal select set shared concept nash welfare particular side note explain approach leads new class multiwinner rules viewed generalizations proportional approval voting rule beyond approval setting apart introducing new class multiagent knapsack problems contribution following example often described literature enterprise considers set products pushed natural view problem instance multiwinner elections products corresponding potential customers corresponding voters table overview results herein abbreviate preferences respectively voters refers parameterized number voters unary general knapsack diverse knapsack fair knapsack voters thm fpt thm prop thm thm thm thm study complexity computing optimal individually best diverse fair knapsack problem general hard except case individually best knapsack utilities voters represented unary encoding study parameterized complexity problem focusing number voters considering parameter relevant case set voters fact relatively small group experts acting behalf larger population agents redelegating task evaluating items committee experts reasonable several reasons instance coming accurate valuations items may require specialized knowledge significant cognitive effort would often impossible evaluate items efficiently accurately among large group common people show utilities voters computing diverse knapsack fpt parameterized number voters hand problem computing fair knapsack parameter study complexity considered problems singlecrossing preferences show unary encoding voters utilities diverse knapsack computed efficiently preferences interestingly computing fair knapsack stays even preferences results summarized table additionally show three problems case theorems prove intractability parameter budget proposition corollary model pair natural numbers denote set denote set let set voters set items voters preferences items represented utility profile use denote utility assigns utility quantifies extent enjoys assume utilities nonnegative integers item comes cost given global budget call knapsack subset items whose total cost exceed goal select knapsack would sense preferred voters describe three representative rules extend preferences individual voters individual items aggregated preferences knapsacks rule induces corresponding method selecting best knapsack rules motivated concepts literature fair division multiwinner elections individually best knapsack knapsack maximizes total utility voters selected items uib defines perhaps straightforward way select knapsack call individually best formula uib treats items separately take account fairnessrelated issues indeed knapsack unfair illustrated following example example let integer consider set voters items unit cost let rename items consider following utility profile otherwise large case individually best knapsack sib consists items liked single voter time exists much fair knapsack sfair voter contains item liked diverse knapsack knapsack maximizes utility udiv defined udiv words definition udiv assume voter cares preferred item knapsack approach inspired rule literature multiwinner elections classic models literature facility location call knapsack diverse following convention multiwinner literature intuitively knapsack represents diversity opinions among population voters particular preferences voters diverse knapsack tries incorporate preferences many groups voters possible cost containing one representative item similar group fair knapsack use nash welfare solution concept formally call knapsack 
fair maximizes product ufair alternatively logarithm ufair represent fair knapsack one pby taking thep maximizing log section referred reader literature supporting use nash welfare various settings let complement arguments one additional observation utilities voters come binary set costs items equal one multiagent knapsack framework boils standard multiwinner elections model approval preferences case appealing rule proportional approval voting expressed finding knapsack maximizing harmonic number almost equivalent finding fair knapsack maximizing nash welfare since harmonic function viewed discrete version logarithm thus fair knapsack considered generalization pav model cardinal utilities costs particular side note observe notion fair knapsack combined positional scoring rules induces rules viewed adaptations pav ordinal model related work work extends literature knapsack problem variant classic knapsack problem multiple independent functions valuating items typically knapsack problem goal find set pareto optimal solution according multiple objectives defined given functions valuating items approach different since consider specific forms aggregating objectives particular concepts individually best diverse fair pareto optimal solution overview literature knapsack problem focus analysis heuristic algorithms refer reader survey lust teghem multidimensional knapsack yet another generalization original knapsack problem knapsack multiple cost constraints item comes different costs different constraints goal maximize single objective respecting constraints approximation algorithms problem submodular objective functions considered kulik sviridenko lee puchinger provide overview heuristic algorithms problem finally florios consider algorithms multidimensional variant knapsack problem boutilier studied variant rule includes knapsack constraints similar diverse knapsack problem typically nash welfare would defined definition add one sum order avoid pathological situations sum equal zero voters also allows represent expression optimize sum logarithms thus expose close relation fair knapsack proportional approval voting rule difference consider utilities extracted voters preference rankings thus utilities specific structure model items shared instead selected items copied distributed among voters boutilier consider model additional costs related copying selected item sending voter consequently general model complex diverse knapsack also considered specific variant model equivalent winner determination rule computational complexity winner determination rule variant diverse knapsack costs items equal one extensively studied computational social choice comsoc literature procaccia showed problem parameterized complexity problem investigated betzler computational complexity restricted domains betzler elkind lackner skowron peters lackner boutilier skowron investigated approximation algorithms problem superpolynomial fpt approximation algorithms considered skowron faliszewski variant diverse knapsack problem utilities satisfying form triangle inequality known name knapsack median problem see work byrka discussion approximability problem method multiwinner election rule short multiagent knapsack model extends multiwinner model allowing items different costs broad class multiwinner rules aggregating voter preferences various ways particular exists number spectra rules individually best objectives overview multiwinner rules adapted setting see discussed introduction multiagent variant knapsack problem often 
considered context participatory budgeting yet best knowledge literature focused simplest aggregation rule corresponding individually best knapsack approach another avenue explored fain studied rules determine level funding provided different projects items nomenclature rather rules selecting subsets projects predefined funding requirements mentioned nash welfare established solution concept used literature fair allocation nguyen provided thorough survey complexity computing nash welfare context allocating indivisible goods multiagent setting best knowledge paper first work studying fairness solution concepts problem selecting collective knapsack computing collective knapsacks section investigate computational complexity finding individually best diverse fair knapsack formally define computational problem individually best knapsack individually best knapsack input instance budget task compute knapsack maximum uib define computational problems diverse knapsack fair knapsack difference expression maximize two problems udiv ufair respectively use names referring decision variants problems cases assume one additional integer given input decision question whether exists value uib respectively udiv ufair greater equal observe functions uib udiv ufair represented sum logarithms submodular thus use algorithm sviridenko following guarantees theorem exists algorithm individually best knapsack diverse knapsack fair knapsack remaining part paper focus computing exact solution three problems particular study complexity following two restricted domains preferences let topi denote preferred item let order items say utility profile respect topi topi preferences let order voters say utility profile respect two items set forms consecutive block according say profile exists order items voters respect note order witnessing computed polynomial time see also study parameterized complexity three problems given parameter say algorithm fpt respect solves instance problem poly time computable function parameterized complexity theory fpt algorithms considered efficient whole hierarchy complexity classes informally speaking problem assumed fpt hence hard parameterized point view see details parameterized complexity individually best knapsack first look simplest case individually best knapsack theorem individually best knapsack solvable polynomial time utilities voters proof consider instance let apply dynamic programming table denotes minimal cost value uib least equal initialize min max precomputing get running time note utilities encoded unary problem even one voter see theorem diverse knapsack turn attention problem computing diverse knapsack straightforward reduction standard knapsack problem get problem computationally hard even profiles unless utilities provided unary encoding theorem diverse knapsack even utility profiles proof present reduction knapsack let instance knapsack comes value weight question whether exists set set items add voters immediate max proves correctness immediate check utility profile note computing diverse knapsack also unary encoding generalizes rule computationally hard singlepeaked profiles rule computable polynomial time known algorithms extended considering dynamic programs induction running dimensions case diverse knapsack theorem diverse knapsack solvable polynomial time utility profile encoded unary proof consider input instance enumerated order note ordering computed polynomial time let apply dynamic programming table denotes minimal cost subset containing value least equal udiv 
define helper function otherwise initialize set min max let derive value best diverse knapsack max inductively argue clearly best diverse knapsack item set cost consider let set items value least minimal cost containing either first case consider second case clearly let let clearly hence value greater value max provide analogous result let define set useful tools also use tools later analyzing parameterized complexity problem subset items define given tuple voters assignment surjection assignment called connected every holds first tool introduce following auxiliary problem ordered diverse knapsack ordered budget input instance task compute knapsack uord maxconnected maximum solution diverse knapsack let arg consider ordering voters arbitrarily ordered difficult see assignment arg connected hence obtain following connection diverse knapsack ordered diverse knapsack voters observation ordering forms solution ordered diverse knapsack diverse knapsack next give dynamic program computing knapsacks qualitatively lie optimal knapsacks ordered diverse knapsack diverse knapsack specify mean lying later voters let input ordering pfix set give dynamic program table denotes cost knapsack value assigned voters least equal set min otherwise define helper function otherwise set min min max observation utilities compute entries polynomial time lemma let solution diverse knapsack let udiv proof suppose case construct knapsack follows let item minimizes make contradicting fact otherwise max proceed towards contradiction let item minimizes make continue reasoning next give relation ordered diverse knapsack lemma let solution ordered diverse knapsack ordered let uord proof assume enumerated let connected assignment let definition moreover let moreover follows inductively ingredients hand prove main results proposition diverse knapsack solvable polynomial time utility profiles encoded unary ordering voters forms proof solution ordered diverse knapsack diverse knapsack lemmas guaranteed algorithm find use tools obtain fpt algorithm respect number voters unrestricted domains theorem diverse knapsack fpt parameterized number voters utilities voters proof observation know ordering forms solution ordered diverse knapsack dynamic diverse knapsack together lemmas obtain program find hence ordering voters compute take minimum observed values note largest value ordering voters altogether yields running time poly log poly finally complement theorem proving lower bound running time assuming eth proposition diverse knapsack binary utilities unary costs parameterized budget unless eth breaks poly algorithm proof give reduction dominating set instance dominating set consists graph integer question whether exists subset vertices vertex denotes closed neighborhood vertex introduce voter item cost one set uvw uvw otherwise furthermore set budget difficult see diverse knapsack udiv lower bounds follow fair knapsack let turn problem computing fair knapsack first prove problem even restricted cases study parameterized complexity theorem fair knapsack even one voter two voters costs equal one utilities costs equal one proof provide reduction partition problem given set nppositive integers question decide whether exists subset given instance partition integers divisible two construct instance fair knapsack follows let introduce item cost introduce one voter utility set budget ask exists knapsack nash welfare least let let solution subset items forms apfair knapsack nash welfare least conversely let constructed instance fair 
knapsack let fair knapsack denote subset integers corresponding moreover items holds together inequalities yield hence forms solution provide reduction exact partition problem given set positive integers decide whether pand integer subset given instance exact partition integers divisible two byp construct instance fair knapsack follows similarly set introduce item cost introduce two voters utility functions respectively set budget ask knapsack nash welfare least equal let let solution claim subset ofpitems forms appropriate fair knapsack holds nash welfare least conversely let constructed instance fair knapsack let corresponding fair knapsack let denote subset integers corresponding items holds moreover item holds hence product maximal together follows leading hence forms solution provide reduction exact regular set packing ersp problem parameterized reduction exact regular independent set given set set subsets integer decide whether exists subset distinct holds let instance ersp construct instance fair knapsack follows let set items cost equal one introduce voters otherwise set desired nash welfare finishes construction assume admits solution claim fair knapsack desired value nash welfare note construction item contributes one exactly voters moreover distinct contribute disjoint sets voters hence conversely let fair knapsack let claim forms solution first observe let set elements covered note second inequality observe function increasing interval every hence thus set exactly pairwise disjoint sets given exact regular set packing ersp respect size solution proof theorem implies following corollary fair knapsack parameterized budget even utilities costs equal one using clever construction show combination two number voters still get intractability theorem fair knapsack parameterized number voters budget even utilities budget represented unary encoding costs items equal one proof provide parameterized reduction clique problem known respect number colors let instance clique given graph set vertices set edges natural number coloring function assigns one colors vertex ask contains pairwise connected vertices different color without loss generality assume construct instance fair knapsack follows refer figure illustration let set set items associate one item vertex edge construct set voters follows unless specified otherwise default assume voter assigns utility zero item color introduce one voter assigns utility vertex color clearly voters pair two different colors introduce voters assigning utility edge connects two vertices two colors voters figure illustration instance obtained proof theorem herein ncb denotes vertex color class color class contains vertices presented example vertices adjacent blocks containing zero indicate corresponding entries zero ordered pair colors introduce two vertices call following utilities consider set vertices color rename arbitrary way put sequence voter assigns utility vertex utility edge connects vertex color voter assigns utility utility edge connects vertex color voters set cost item one total budget simple calculation one check total number voters equal completes construction first observe total item assigned utility voters indeed item corresponding vertex gets utility exactly one voter first group total utility voters third group similarly item corresponding edge gets utility voters second group total utility four voters third group thus independently select items sum utilities assigned voters always bkt thus clearly nash welfare would maximized total 
utility assigned selected items voter equal case nash welfare would equal show however voter assigns set items utility items vertices different colors remaining items edges selected edge connects two selected vertices indeed easy see selected set items structure described voter assigns set utility prove implication assume set items voter assigns total utility looking first group voters infer items correspond vertices vertices different colors looking second group voters infer pair two different colors contains exactly one edge connecting vertices colors finally looking third group voters infer edge connects colors adjacent vertices colors completes proof hand instance fair knapsack utilities represented unary encoding solvable time parameterized number voters computable function depending theorem utilities represented unary encoding fair knapsack parameterized number voters proof provide algorithm based dynamic programing construct table sequence integers entry represents lowest possible value budget exists knapsack following properties total cost items knapsack equal last index item knapsack maxaj iii voter table constructed recursively min handle corner cases setting whenever clearly fixed utilities represented unary encoding table filled polynomial time sufficient traverse table find entry maximizes positive side stronger requirements voters utilities number different values utility functions small strengthen theorem prove membership fpt theorem fair knapsack fpt parameterized combination number voters number different values utility function take proof use classic result lenstra says integer linear program ilp solved fpt time respect number integer variables also use recent result bredereck proved one apply transformations certain variables ilp modified program still solved fpt time construct ilp follows let set values utility function take vector define set items voter intuitively describes subcollection items type items indistinguishable look utilities assigned voters may vary costs set introduce integer variable intuitively denotes number items optimal solution belong construct function cost cheapest items clearly convex formulate following program maximize log subject program uses concave transformations logarithms maximized expression convex transformations functions sides constraints use result bredereck claim program solved fpt time respect number integer variables completes proof fair knapsack restricted domains contrast individually best knapsack diverse knapsack solvable polynomial time restricted domains computing fair knapsack remains utility profiles even theorem fair knapsack even domains costs items equal one utilities voter come set proof give reduction problem given universe elements set subsets question decide whether exist exactly subsets cover without loss generality additionally assume element appears exactly three sets given instance note figure visualization utilities voters used proof theorem solid lines interpreted plots depicting utilities voters different items instance agent assigns utility items utility items note agents depicted figure correspond element compute instance problem computing fair knapsack follows utilities voters depicted figure first introduce two items correspond set cost one introduce three different types voters add two voters add two voters add two voters set budget required nash welfare apparent order profile single crossingness note utilities agents increasing decreasing hence order voters witnesses prove constructed instance fair 
knapsack let fbk exact cover claim abi fair knapsack first observe consider welfare three typesp voters separately next consider voters type consider abj abj abj symmetry abj finally consider voters type consider voter let index recall exact cover abj abj symmetry fbj hence get total nash welfare equal let fair knapsack nash welfare least equal show total utility voters assign item equal indeed two voters assign total utility similarly pair voters assigns utility finally observe whenever voters assign utility otherwise assign utility since set contains exactly elements get gets total utility voters hence items contribute total utility nash welfare equal total utility must distributed equally possible among voters specifically voters need get total utility voters must get total utility claim suppose case let smallest index either consider first case let holds follows voter case works analogously hence claim follows infer uxi thus voter must case finally prove fbk forms cover towards contradiction suppose element covered consider voter observe since fbj abj thus reached contradiction consequently get every element covered completes proof discussed section voters utilities come binary set costs items equal one problem computing fair knapsack equivalent computing winners according proportional approval voting case preferences peters showed problem formulated integer linear program total unimodular constraints thus solvable polynomial time makes result interesting shows allowing slightly general utilities coming set instead problem becomes already even additionally assume preferences draws quite accurate line separating instances computationally easy intractable conclusion paper study three variants knapsack problem multiagent settings one variants selecting individually best knapsack considered literature work introduces two concepts diverse fair knapsack paper establishes relation knapsack problem broad literature including literature multiwinner voting fair allocation way expose variety ways preferences voters aggregated number scenarios captured abstract model multiagent knapsack computational results outlined table summary results show problem computing diverse knapsack handled efficiently simplifying assumptions hand give multiple evidences computing fair knapsack hard problem thus research provides theoretical foundations motivating calls studying approximation heuristic algorithms problem computing fair knapsack references arrow social choice individual values john wiley sons revised editon ausiello atri protasi structure preserving reductions among convex optimization problems journal computer system sciences benabbou perny solving knapsack problems using incremental approval voting proceedings european conference artificial intelligence pages benade nath procaccia shah preference elicitation participatory budgeting proceedings aaai conference artificial intelligence pages betzler slinko uhlmann computation fully proportional representation journal artificial intelligence research black rationale group journal political economy bredereck faliszewski niedermeier skowron talmon mixed integer programming constraints tractability applications multicovering voting technical report byrka pensyl rybicki spoerhase srinivasan trinh improved approximation algorithm knapsack median using sparsification proceedings annual european symposium algorithms pages cabannes participatory budgeting significant contribution participatory democracy environment urbanization caragiannis kurokawa moulin 
procaccia shah wang unreasonable fairness maximum nash welfare proceedings acm conference economics computation pages chamberlin courant representative deliberations representative decisions proportional representation borda rule american political science review conitzer freeman shah fair public decision making proceedings acm conference economics computation pages cygan fomin kowalik lokshtanov marx pilipczuk pilipczuk saurabh parameterized algorithms springer darmann schauer maximizing nash product social welfare allocating indivisible goods european journal operational research downey fellows fundamentals parameterized complexity texts computer science springer elkind lackner structure dichotomous preferences proceedings international joint conference artificial intelligence pages elkind lackner peters structured preferences endriss editor trends computational social choice access fain goel munagala core participatory budgeting problem proceedings conference web internet economics pages faliszewski skowron slinko talmon multiwinner voting new challenge social choice theory endriss editor trends computational social choice access faliszewski skowron slinko talmon multiwinner rules paths pages zanjirani farahani hekmatfar editors facility location concepts models case studies springer fellows hermelin rosamond vialette parameterized complexity graph problems theoretical computer science florios mavrotas diakoulaki solving multiobjective multiconstraint knapsack problems using mathematical programming evolutionary algorithms european journal operational research flum grohe parameterized complexity theory freeman zahedi conitzer fair efficient social choice dynamic settings pages multidimensional knapsack problem overview european journal operational research garey johnson computers intractability guide theory freeman company goel krishnaswamy sakshuwong aitamurto knapsack voting voting mechanisms participatory budgeting manuscript frank kelly charging rate control elastic traffic european transactions telecommunications kulik shachnai tamir approximations monotone nonmonotone submodular maximization knapsack constraints mathematics operations research lackner skowron consistent rules technical report april lee mirrokni nagarajan sviridenko submodular maximization matroid knapsack constraints proceedings fortyfirst annual acm symposium theory computing pages lenstra integer programming fixed number variables mathematics operations research boutilier budgeted social choice consensus personalized decision making proceedings international joint conference artificial intelligence pages lust teghem multiobjective multidimensional knapsack problem survey new approach international transactions operational research mirrlees exploration theory optimal income taxation review economic studies monroe fully proportional representation american political science review moulin fair division collective welfare mit press nash bargaining problem econometrica nguyen roos rothe survey approximability inapproximability results social welfare optimization multiagent resource allocation annals mathematics artificial intelligence niedermeier invitation algorithms oxford university press peters total unimodularity efficiently solve voting problems without even trying technical report peters lackner preferences circle proceedings aaai conference artificial intelligence pages procaccia rosenschein zohar complexity achieving proportional representation social choice welfare puchinger raidl pferschy multidimensional 
knapsack problem structure algorithms informs journal computing ramezani endriss nash social welfare multiagent resource allocation pages springer berlin heidelberg roberts voting income tax schedules journal public economics skowron faliszewski fully proportional representation approval ballots approximating maxcover problem bounded frequencies fpt time proceedings aaai conference artificial intelligence pages skowron faliszewski lang finding collective set items proportional multirepresentation group recommendation proceedings aaai conference artificial intelligence skowron faliszewski slinko achieving fully proportional representation approximability result artificial intelligence skowron faliszewski elkind complexity fully proportional representation electorates theoretical computer science skowron faliszewski lang finding collective set items proportional multirepresentation group recommendation artificial intelligence sviridenko note maximizing submodular set function subject knapsack constraint operations research letters thiele flerfoldsvalg oversigt det kongelige danske videnskabernes selskabs forhandlinger pages chan elkind multiwinner elections preferences tree proceedings international joint conference artificial intelligence pages | 8 |
informal overview triples systems nov louis rowen abstract describe triples systems expounded axiomatic algebraic umbrella theory classical algebra tropical algebra hyperfields fuzzy rings introduction goal overview present axiomatic algebraic theory unifies simplifies explains aspects tropical algebra hyperfields fuzzy rings terms familiar algebraic concepts motivated attempt understand whether coincidental basic algebraic theorems mirrored supertropical algebra spurred realization results obtained parallel research hyperfields fuzzy rings objective hone precise axioms include various examples formulate axiomatic structure describe uses review five papers theory developed bulk survey concerns axiomatic framework laid since papers build treatments found although deal general categorical issues largely hands approach emphasizing negation map exists abovementioned examples often obtained means symmetrization functor although investigation centered semirings grown tropical considerations also could used develop parallel lie theory generally hopf theory acquaintance basic notions one starts set want study called set tangible elements endowed partial additive algebraic structure however defined resolved embedding larger set fuller algebraic structure often multiplicative monoid situation developed lorscheid semiring however also examples lie algebras lacking associative multiplication usually denote typical element typical element definition set additive monoid together scalar multiplication satisfying distributivity sense also stipulating module multiplicative monoid asatisfying extra conditions date november mathematics subject classification primary secondary key words phrases bipotent category congruence dual basis homology hyperfield linear algebra matrix metatangible morphism negation map module polynomial prime projective tensor product semifield semigroup semiring split supertropical algebra superalgebra surpassing relation symmetrization system triple tropical file name monoids subject recent interest generally monoid hopf theory play interesting role examined turns distributivity elements enough run theory since one define multiplication make distributive seen theorem rather easy result applies instance hyperfields phase hyperfield rowen sake exposition assume next introduce formal negation map describe introductory examples generates additively creating formal negation map available outset introduce two ways elaborated shortly declare negation map identity supertropical case apply symmetrization get switch map second kind often applicable could take role thin element called write usually require tangible classical algebra accordingly call triple definition examples classical mathematics might provide general intuition employing study rather trivial example multiplicative subgroup field could graded associative algebra multiplicative submonoid homogeneous elements deeper example tied bases given example interested situation involving semirings rings motivating examples supertropical semiring set tangible elements symmetrized semiring power set hyperfield hyperfield since hyperfields varied provide good test theory semirings general without negation maps broad yield decisive results would like reason negation maps triples introduced first place since need correlate two structures well negation map could viewed unary operator convenient work context universal algebra designed precisely purpose discussing diverse structures together recently generalized lawvere theories operads delve 
aspects round things given triple introduce surpassing relation replace equality theorems classical mathematics equality quadruple called motivating examples satisfies axioms ring except existence element negatives semiring elaborate main examples motivating theory idempotent semirings tropical geometry assumed prominent position mathematics ability simplify algebraic geometry changing certain invariants often involving intersection numbers varieties thereby simplifying difficult computations outstanding applications abound including main original idea expounded take limit logarithm absolute values coordinates affine variety base logarithm goes underlying algebraic structure reverted algebra rmax ordered multiplicative monoid one defines max clearly additively bipotent sense algebras studied extensively time ago definition semigroup characteristic minimal characteristic characteristic idempotent particular bipotent semirings characteristic geometry studied intensively logarithms taken complex numbers algebraic structure bipotent semirings often without direct interpretation tropical geometry attention tropicalists passed field puisseux series characteristic also algebraically closed field natural valuation thereby making available tools valuation theory collection presents valuation theoretic approach thus one looks alternative algebra informal overview triples systems properties characteristic described major examples characteristic interesting examples characteristic supertropical semirings izhakian overcame many structural deficiencies algebra adjoining extra copy called ghost copy definition modifying addition generally supertropical semiring semiring ghosts together projection satisfying extra properties supertropicality supertropical semiring standard mysteriously although lacking negation supertropical semiring provides affine geometry linear algebra quite parallel classical theory taking negation map identity ghost ideal takes place element every instance classical theorem involving equality replaced assertion ghost called ghost surpassing particular means ghost example irreducible affine variety set points evaluated given set polynomials ghost necessarily leading version nullstellensatz theorem link decomposition affine varieties factorization polynomials illustrated one indeterminate remark theorem version resultant polynomials computed classical sylvester matrix theorem theorem matrix theory also developed along supertropical lines supertropical theorem theorem says characteristic polynomial evaluated matrix ghost matrix called singular permanent tropical replacement determinant ghost theorem row rank column rank submatrix rank matrix sense seen equal solution tropical equations given supertropical singularity also gives rise semigroup versions classical algebraic group illustrated also valuation theory handled series papers starting also generalized note supertropical semirings almost bipotent sense turns important feature triples hyperfields related constructions another algebraic construction hyperfields multiplicative groups sets replace elements one takes sums hyperfields received considerable attention recently part diversity fact viro tropical hyperfield matches izhakian construction important nontropical hyperfields hyperfield signs phase hyperfield triangle hyperfield whose theories also want understand along similar lines hyperfield theory one replace zero property given set contains intriguing phenomenon linear algebra classes hyperfields follows classical lines 
supertropical case hyperfield signs provides easy counterexamples others discussed symmetrization construction uses gaubert symmetrized algebras designed linear algebra prototype start take define switch map reader might already recognize first step constructing integers natural numbers one identifies trick recognize equivalence relation without modding since everything could degenerate nonclassical applications equality often replaced assertion rowen symmetrized also viewed via twist action utilized define study prime spectrum fuzzy rings dress introduced fuzzy rings ago connection matroids also seen recently related hypergroups negation maps triples systems varied examples theories often mimic classical algebra lead one wonder whether parallels among happenstance whether straightforward axiomatic framework within gathered simplified unfortunately semirings may lack negation also implement formal negation map serve partial replacement negation definition negation map map together semigroup isomorphism order written satisfying obvious examples negation maps identity map might seem trivial fact one used supertropical algebra switch map symmetrized algebra usual negation map classical algebra hypernegation definition hypergroups accordingly say negation map first kind second kind indicated earlier take role customarily assigned zero element supertropical theory ghost elements definition called balanced elements form element determines negation map since several important elements important fuzzy rings need absorb multiplication rather negation definition implies definition collection negation map triple generates structure choice system quadruple triple relation definition main relations defined sets relation important theoretical role replacing enabling define broader category one would obtain directly universal algebra one major reason formally replace equality much theory found transfer principle given context systems theorem example four main examples standard supertropical triple identity map get system taking informal overview triples systems symmetrized triple componentwise addition multiplication given take switch map second kind fuzzy triple appendix module element satisfying define negation map given particular enables view fuzzy rings systems argument shows tracts systems hyperfield original hyperfield power set componentwise operations power set induced hypernegation although introduced since need generate example taking phase hypergroup concerned triples furthermore one take generated triples systems related tropical algebra presented structures monoids also amenable approach formulated axiomatically context universal algebra treated example natural categorical setting established provides context tropicalization becomes functor thereby providing guidance understand tropical versions assortment mathematical structures ground triples versus module triples classical structure theory involves investigation algebraic structure small category example viewing monoid category single object whose morphisms elements homomorphisms functors two small categories hand one obtains classical representation theory via abelian category class modules given ring analogously two aspects triples call triple resp system ground triple resp ground system study small category single object right usually semidomain ground triples flavor lorscheid blueprints albeit slightly general negation map whereas representation theory leads module systems described situation leads fork road first path takes 
structure theory based functors ground systems translating homomorphic images systems via congruences especially prime systems product congruences nontrivial ground systems often designated terms structure semiring systems hopf systems hyperfield paper different flavor dealing matrices linear algebra ground systems focusing subtleties concerning cramer rule equality row rank column rank submatrix rank second path takes categories module systems also bring tensor products hom functors applied geometry develop homological theory relying work done already grandis name homological category without negation map parallel approach connes consani contents systems emphasis ground systems one apply familiar constructions concepts classical algebra direct sums definition matrices involutions polynomials localization tensor products remark produce new triples systems simple tensors comprise tangible elements tensor product properties tensors hom treated much greater depth localization analyzed module systems rowen basic properties triples systems let turn important properties could hold triples one crucial axiom theory holding tropical situations related theories definition uniquely negated implies definition hyperfield triples uniquely negated hone obtain two principal concepts definition uniquely negated sum two tangible elements tangible unless special case whenever words stipulation definition utmost importance since otherwise main examples would fail proposition shows view triple hypergroup thereby enhancing motivation transferring hyperfield notions triples systems general triple satisfying uniquely negated supertropical triple bipotent modification symmetrized triple described example krasner hyperfield triple supertropicalization boolean semifield triple arising hyperfield signs symmetrization phase hyperfield triple triangle hyperfield triple even metatangible although latter idempotent seen theorem hyperfield triples satisfy another different property independent interest definition definition surpassing relation system called implies category hyperfields given embedded category uniquely negated systems theorem reversibility enables one apply systems matroid theory although yet embarked endeavor earnest height element sometimes called width literature minimal say height definition every element triple finite height height maximal height elements heights bounded example supertropical semiring height symmetrized semiring idempotent semifield unexpected examples systems sneak triple height described examples case presented could major axiom theory ground systems group leading bevy structure theorems systems starting observation lemma either thus proposition following assertions seen equivalent triple containing height iii obtain following results system theorem characteristic first kind theorem every element form extent presentation unique described theorem theorem distributivity follows axioms theorem surpassing relation must almost theorem key property fuzzy rings holds theorem reversibility holds except one pathological situation informal overview triples systems theorem criterion given terms sums squares isomorphic symmetrized triple one would want classification theorem systems reduces classical algebras standard supertropical semiring symmetrized semiring layered semirings power sets various hyperfields fuzzy rings several exceptional cases nonetheless theorem comes close namely first kind either characteristic height bipotent isomorphic layered system second kind either height except 
exceptional case isometric symmetrized semiring real classical information exceptions given remark continues rudiments linear algebra ground triple discussed shortly tropicalization cast terms functor systems classical nonclassical principle enables one define right tropical versions classical algebraic structures including exterior algebras lie algebras lie superalgebras poisson algebras contents linear algebra systems paper written objective understanding diverse theorems linear algebra semiring systems define set vectors nonempty subset row rank matrix maximal number rows tangible vector vector whose elements tangible matrix matrix whose rows tangible vectors matrix matrix nonsingular submatrix rank largest size nonsingular square submatrix corollary see submatrix rank matrix triple less equal row rank column rank improved theorem theorem let system vector vector adj satisfies particular invertible adj satisfies existence tangible subtler one considers valuations systems fibers call system ascending chain fibers stabilizes theorem corollary system invertible vector tangible vector adj one obtains uniqueness theorem using property called strong balance translating concepts language systems turn question raised privately hyperfields baker question submatrix rank equal row rank initial hope would always case analogy supertropical situation however gaubert observed nonsquare counterexample question already found underlying system even kind negation map critical rather general counterexample triples second kind given proposition essence example already exists sign although counterexample given nonsquare matrix modified matrix counterexample minimal sense question positive answer matrices mild assumption theorem nevertheless positive results available positive answer question along lines theorems given theorem systems satisfying certain technical conditions rowen theorem show question positive answer square matrices triples first kind height seems correct framework lift theorems classical algebra positive answer rectangular matrices given theorem restrictive hypotheses essentially reduce supertropical situation contents basic categorical considerations paper elaborates categorical aspects systems emphasis important functors functor triple definition embraces important constructions including symmetrized triple polynomial triples via convolution product given definition emphasis cancellative multiplicative monoid even group encompasses many major applications slights lie theory indeed one could consider hopf systems discussed briefly depth motivation found issue must confronted proper definition morphism definitions categories arising universal algebra one intuition would take homomorphisms maps preserve equality operators call morphisms however approach loses major examples hypergroups applications tropical mathematics hypergroups definition tend depend surpassing relation definition led broader definition called definition definition often provide correct venue studying ground systems hand proposition gives way verifying morphisms automatically strict situation stricter module systems ground triples sticky point semigroups morphisms mor categories necessarily groups traditional notion abelian category replaced definition lack fundamental properties abelian categories tensor product functorial restrict attention strict morphisms proposition module systems cases theory semirings modules homomorphisms described terms congruences congruences focus theory null congruences contain diagonal 
necessarily zero lead null morphisms definition alternate way viewing given hom studied congruences terms transitive modules together dual one gets desired categorical properties adjoint isomorphism lemma considering strict morphisms way categories comprised strict morphisms amenable categorical view carried geometry homology times hopfian flavor functors various categories arising theory described also eye towards valuations triples contents geometry paper focuses module systems leading geometric theory ground systems comprised following parts group lie semialgebra systems symmetrization functor lorscheid blueprints symmetrized triples localization theory module triples semiring triples geometrical category theory including representation theorem negation schemes semiringed spaces sheaves module systems hopf semialgebra approach ground systems module systems classical algebra prime systems definitions play important role affine geometry via zariski topology significant version fundamental theorem algebra theorem implies polynomial system prime system prime corollary informal overview triples systems contents homology work progress start version split epics weaker classical definition definition epic case also say module system sum subsystems every written leads projective module systems definition system projective strict epic systems every morphism lifts morphism sense system strict epic every morphism morphism sense fundamental properties obtained including basis lemma proposition leading resolutions dimension one obtains homology theory context homological categories derived functors connection recent work connes consani references adiprasito huh katz hodge theory matroids notices ams akian gaubert guterman linear independence tropical semirings beyond tropical idempotent mathematics litvinov sergeev eds contemp math akian gaubert guterman tropical cramer determinants revisited contemp math amer math soc akian gaubert rowen linear algebra systems preprint baker bowler matroids hyperfields aug baker bowler matroids partial hyperstructures preprint baker payne nonarchimedean tropical geometry simons symposia berkovich analytic geometry first steps geometry lectures arizona winter school university lecture series vol american mathematical society providence berkovich algebraic analytic geometry field one element bertram easton tropical nullstellensatz congruences advances mathematics bourbaki commutative algebra paris reading butkovic linear algebra combinatorics lin alg appl connes consani homological algebra characteristic one mar cortinas haesemeyer walker weibel toric varieties monoid schemes reine angew math costa sur des publ math decebren dress duality theory finite infinite matroids coefficients advances mathematics dress wenzel algebraic tropical fuzzy geometry beitrage zur algebra und contributions algebra und geometry etingof gelaki nikshych ostrik tensor categories mathematical surveys monographs volume american mathematical society gaubert des dans les diodes des mines paris gaubert plus methods applications max linear algebra reischuk morvan editors number lncs lubeck march springer giansiracusa jun lorscheid relation hyperrings fuzzy rings golan theory semirings applications mathematics theoretical computer science volume longman sci grandis homological algebra strongly settings world scientific grothendieck produits tensoriels topologiques espaces nucleaires memoirs amer math henry symmetrization hypergroups arxiv preprint itenberg kharlamov shustin welschinger invariants 
real del pezzo surfaces degree math annalen itenberg mikhalkin shustin tropical algebraic geometry oberwolfach seminars verlag basel izhakian tropical arithmetic algebra tropical matrices preprint arxiv izhakian knebusch rowen supertropical semirings supervaluations pure appl rowen izhakian knebusch rowen layered tropical mathematics journal algebra izhakian knebusch rowen categories layered semirings commun algebra izhakian knebusch rowen algebraic structures tropical mathematics tropical idempotent mathematics litvinov sergeev eds contemporary mathematics ams preprint izhakian niv rowen supertropical linear multilinear algebra appear izhakian rowen supertropical algebra advances mathematics izhakian rowen supertropical matrix algebra israel izhakian rowen supertropical matrix algebra solving tropical equations israel izhakian rowen supertropical polynomials resultants algebra jacobson basic algebra freeman jensen payne combinatorial inductive methods tropical maximal rank conjecture combin theory ser joo mincheva prime congruences idempotent semirings nullstellensatz tropical polynomials appear selecta mathematica jun algebraic geometry hyperrings arxiv jun cech cohomology semiring schemes arxiv preprint jun valuations semirings arxiv preprint appear journal pure applied algebra jun mincheva homology systems preparation jun categories negation arxiv jun rowen geometry systems katsov tensor products functors siberian math trans sirbiskii mathematischekii zhurnal katsov toward homological characterization semirings conjecture perfectness semiring context algebra universalis lorscheid geometry blueprints part algebraic background scheme theory adv math lorscheid blueprinted view absolute arithmetic european mathematical society maclagan sturmfels introduction tropical geometry american mathematical society graduate studies mathematics mckenzie mcnulty taylor algebras lattices varieties vol wadsworth brooks mikhalkin enumerative tropical algebraic geometry amer math soc patchkoria exactness long sequences homology semimodules journal homotopy related structures vol plus akian cohen gaubert nikoukhah quadrat max max des french max algebra symmetrization algebra balances acad sci paris phys chim sci univers sci terre ren shaw sturmfels tropicalization del pezzo surfaces advances mathematics rowen symmetries tropical algebra rowen algebras negation map pages hopf algebras associated group actions slides lecture acc conference combinatorics group actions saint john newfoundland august spec journal viro hyperfields tropical geometry hyperfields dequantization arxiv department mathematics university israel address rowen | 0 |
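As a concrete anchor for two of the constructions surveyed in the overview above (the max-plus semiring underlying tropical algebra, and Gaubert's symmetrization with its switch negation map), the following display is a sketch in standard notation; the symbols \(\oplus\) and \(\odot\) and the exact form of the identities are editorial choices for illustration, not quotations from the overview.

\[
  \mathbb{R}_{\max} \;=\; \bigl(\mathbb{R}\cup\{-\infty\},\ \oplus,\ \odot\bigr),
  \qquad a \oplus b := \max(a,b), \qquad a \odot b := a+b ,
\]
so that addition is bipotent: \(a \oplus b \in \{a,b\}\) for all \(a,b\).
For a semiring \(\mathcal{A}\), the symmetrization is the set
\(\widehat{\mathcal{A}} = \mathcal{A}\times\mathcal{A}\) with componentwise addition and the twist product
\[
  (a_0,a_1)\,(b_0,b_1) := \bigl(a_0 b_0 + a_1 b_1,\; a_0 b_1 + a_1 b_0\bigr),
\]
together with the switch map \((-)(a_0,a_1) := (a_1,a_0)\) as a negation map of the second kind;
the balanced elements are the diagonal pairs \((a,a) = (a,0) + \bigl((-)(a,0)\bigr)\).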
jun random forests industrial device functioning diagnostics using wireless sensor networks elghazel guyeux farhat hakem medjaher zerhouni bahi abstract paper random forests proposed operating devices diagnostics presence variable number features various contexts like large monitored areas wired sensor networks providing features achieve diagnostics either costly use totally impossible spread using wireless sensor network solve problem latter subjected flaws furthermore networks topology often changes leading variability quality coverage targeted area diagnostics sink level must take consideration number quality provided features constant politics like scheduling data aggregation may developed across network aim article show random forests relevant context due flexibility robustness provide first examples use method diagnostics based data provided wireless sensor network introduction machine learning classification refers identifying class new observation belongs basis training set quantifiable observations known properties ensemble learning classifiers combined solve particular computational intelligence problem many research papers encourage adapting solution improve performance model reduce likelihood selecting weak classifier instance dietterich argued averaging classifiers outputs guarantees better performance worst classifier claim theoretically proven correct fumera roli addition particular hypotheses fusion multiple classifiers improve performance best individual classifier two early examples ensemble classifiers boosting bagging boosting algorithm distribution training set changes adaptively based errors generated previous classifiers fact step higher degree importance accorded misclassified instances end training weight accorded classifier regarding individual performance indicating importance voting process bagging distribution training set changes stochastically equal votes accorded classifiers classifiers error rate decreases size committee increases comparison made tsymbal puuronen shown bagging consistent unable take account heterogeneity instance space highlight conclusion authors emphasize importance classifiers integration combining various techniques provide accurate results different classifiers behave manner faced particularities training set nevertheless classifiers give different results confusion may induced easy ensure reasonable results combining classifiers context use random methods could beneficial instead combining different classifiers random method uses classifier different distributions training set majority vote employed identify class article use random forests proposed industrial functioning diagnostics particularly context devices monitored using wireless sensor network wsn prerequisite diagnostics consider data provided sensors either flawless simply noisy however deploying wired sensor network monitored device costly situations specifically large scale moving hardly accessible areas monitor situations encompass nuclear power plants structure spread deep water desert wireless sensors considered cases due low cost easy deployment wsns monitoring somehow unique sense sensors subjected failures energy exhaustion leading change network topology thus monitoring quality variable depends time location device various strategies deployed network achieve fault tolerance extend wsn lifetime like nodes scheduling data aggregation however diagnostic processes must compatible strategies device coverage changing quality objective research work show achieve good 
compromise situation compatible number sensors may variable time susceptible errors precisely explain random methods relevant achieve accurate diagnostics industrial device monitored using wsn functioning recalled applied monitoring context algorithms provided illustration simulated wsn finally detailed remainder article organized follows section summarizes related work section overview research works industrial diagnostics present random forest algorithm section give simulation results section research work ends conclusion section contribution summarized intended future work provided related work many research works contributed improving classification accuracy instance tree ensembles use majority voting identify popular class advantage transforming weak classifiers strong ones combining knowledge reduce error rate usually growth tree governed random vectors sampled training set bagging one early examples method tree grown randomly selecting individuals training set without replacing use bagging motivated three main reasons enhances accuracy use random features gives ongoing estimates generalization error strength correlation combined trees also good unstable classifiers large variance meanwhile freund introduced adaptive boosting algorithm adaboost defined deterministic algorithm selects weights training set input next classifier based wrong classifications previous classifiers fact classifier focuses correcting errors new step remarkably improved accuracy classifications shortly randomness used grow trees split defined node searching best random selection features training set introduced random subspace randomly selects subset vectors features grow tree diettrich introduced random split selection node split randomly selected among best splits methods like bagging random vector sampled grow tree completely independent previous vectors generated distribution random split selection introducing random noise outputs gave better results bagging nevertheless algorithms implementing ways training set adaboost outperform two methods therefore breiman combined strengths methods detailed random forest algorithm method individuals randomly selected training set replacement node split selected reducing dispersion generated previous step consequently lowering error rate algorithm detailed section overview diagnostics constantly growing complexity current industrial systems witness costly downtime failures therefore efficient health ment technique mandatory fact order avoid expensive shutdowns maintenance activities scheduled prevent interruptions system operation early frameworks maintenance takes place either failure occurs corrective maintenance according predefined time intervals periodic maintenance nevertheless still generates extra costs due soon late maintenances accordingly considering actual health state operating devices important decision making process maintenance becomes performed system diagnosed certain health state diagnostics understanding relationship observe present happened past relating cause effect fault takes place detected anomaly reported system behavior fault isolated determining locating cause source problem component responsible failure identified extent current failure measured activity meet several requirements order efficient requirements enumerated following early detection order improve industrial systems reliability fault detection needs quick accurate nevertheless diagnostic systems need find reasonable quick response fault tolerance words efficient diagnostic 
system differentiate normal erroneous performances presence fault isolability fault isolation important step diagnostic process refers ability diagnostic system determine source fault identify responsible component isolability attribute system discriminate different failures anomaly detected set possible faults generated completeness aspect requires actual faults subset proposed set resolution optimization necessitates set small possible tradeoff needs found respecting accuracy diagnostics robustness resources highly desirable diagnostic system would degrade gracefully rather fail suddenly finality system needs robust noise uncertainties addition system performance computational complexity considered example diagnostics require low complexity higher storage capacities faults identifiability diagnostics system interest distinguish normal abnormal behaviors also crucial cause every fault identified also new observations malfunctioning would misclassified known fault normal behavior common present fault leads generation faults combining effects faults easy achieve due possible hand modeling faults separately may exhaust resources case large processes clarity diagnostic models human expertise combined together decision making support reliable therefore appreciated system explains fault triggered propagated keeps track relationship help operator use experience evaluate system understand decision making process adaptability operating conditions external inputs environmental conditions change time thus ensure relevant diagnostics levels system adapt changes evolve presence new information existent diagnostic models several limitations summarized table diagnostic model markovian process bayesian networks neural networks fuzzy systems drawbacks considered stages degradation process accounted volume data required training assumptions always practical transitions considered reliance accurate thresholds state transitions needed efficient results predict unanticipated states amount data training necessary every change conditions needed reduce inputs complexity every new entry experts required good developers understanding table limitations diagnostic models degradation process considered stochastic process evolution degradation random variable describes different levels system health state good condition complete deterioration deterioration process multistate divided two main categories space device considered failed predefined threshold reached space degradation process divided finite number discrete levels maintenance relies reliable scheduling maintenance activities understanding degradation process required finality paper consider space deterioration process random forests algorithm mainly combination bagging random subspace algorithms defined leo breiman combination tree predictors tree depends values random vector sampled independently distribution trees forest method resulted number improvements tree classifiers accuracy classifier maximizes variance injecting randomness variable selection minimizes bias growing tree maximum depth pruning steps constructing forest detailed algorithm algorithm random forest algorithm input labeled training set number trees number features output learned random forest initialize empty bootstrap initialize root tree repeat current node terminal affect class next unvisited node else select best feature among split add leftchild rightchild tree end nodes visited add tree forest end root tree contains instances training subset sorted corresponding classes node 
terminal contains instances one single class number instances representing class equal alternative case needs developed pruning purpose node feature guarantees best split selected follows information acquired choosing feature computed entropy shannon measures quantity information entropy log number examples associated position tree total number classes denotes fraction examples associated position tree labelled class proportion elements labelled class position gini index measures dispersion population gini random sample number classes denotes fraction examples associated position tree labelled class proportion elements labelled class position best split chosen computing gain information growing tree given position corresponding feature follows gain corresponds position tree denotes test branch proportion elements position position corresponds either entropy gini feature provides higher gain selected split node optimal training classification problem tree ensembles advantage running algorithm different starting points better approximate classifier paper leo breiman discusses accuracy random forests particular gave proof generalized error although different one application another always upper bound random forests converge injected randomness improve accuracy minimizes correlation maintaining strength tree ensembles investigated breiman use either randomly selected inputs combination inputs node grow tree methods interesting characteristics accuracy least good adaboost relatively robust outliers noise faster bagging boosting give internal estimates error strength correlation variable importance simple trees grown parallel four different levels diversity defined level best level worst level one classifier wrong pattern level majority voting always correct level least one classifier correct pattern level classifiers wrong pattern guarantee least level two reached fact trained tree selected contribute voting better random error rate generated corresponding tree less tree dropped forest verikas argue popular classifiers support vector machine svm multilayer perceptron mlp relevance vector machine rvm provide little insight variable importance derived algorithm compared methodologies random forest algorithm find cases outperform techniques large margin experimental study data collection paper consider two sets experiments sensor network constituted nodes sensing respectively levels temperature sensors pressure humidity industrial device consideration set experiment set experiments consider level correlation introduced betweent different features moreover suppose time normal conditions temperature sensors follow gaussian law parameter parameters mapped case malfunction industrial device finally sensors return value break gaussian parameters industrial device pressure sensors normal conditions parameters changed case industrial failure pressure sensors return broken finally humidity sensors produce data following gaussian law parameter sensing device parameters set case device failure malfunctioning humidity sensors produce value set experiment set linear correlation injected studied features normal conditions temperature sensors follow gaussian law parameter parameters mapped case malfunction industrial device finally sensors return value break industrial device pressure sensors normal conditions value pressure computed value temperature parameters changed case industrial failure pressure sensors return broken device humidity sensors produce data form parameters set case device failure 
malfunctioning humidity sensors produce value data sets probability failure occurs time follows bernoulli distribution parameter five levels functioning attributed category sensors depending abnormality sensed data levels defined thanks thresholds degrees temperature temperature lower normal sensed value larger highly related malfunctioning bars pressure parameter finally percents humidity data generated follows time unit industrial device monitoring category temperature pressure humidity sensors sensor belonging category yet detected device failure picks new data according gaussian law corresponding device depends random draw exponential law detailed previously realized determine breakdown occurs location placed else picks new datum according bernoulli distribution category sensor observing malfunctioning device global failure level set sensed data produced wireless sensor network given time defined follows sensed datum dti let fit functioning level related category pressure temperature humidity max fit random forest design figure example tree random forest random forest constituted set experiments trees defined follows tree sample dates extracted root tree tuple cardinality finite set thus coordinate corresponds number times device global failure sample observation dates category largest gain dates root node selected dates divided five sets depending thresholds related edges labeled failure levels added depicted figure directed new vertices containing tuples level equal words consider node dates functioning level category equal divide subsets depending global functioning levels tuple constituted cardinality subsets see fig process continued vertex new root reduced set observed dates categories minus stopped either categories regarded tuple node least components equal providing diagnostic new set observations finally given new set observations given time diagnostics industrial device obtained follows let tree forest visited starting root reaching leaf described edges connected root labeled category various failure levels selected edge one whose labeled level failure regarding corresponds failure observations obtained node following edge leaf global level failure observations according coordinate unique non zero component tuple tree walk continued item node new root global diagnostics given observation majority consensus responses trees forest numerical simulations training set obtained simulating observations successive times results instances resulting data base used train trees constitute trained random forest figure presents delay time system enters failure mode time detection done absence correlations different features time value delay negative values positive value refer time predictions early predictions late predictions failures respectively plotted values average result per number simulations varies time sensor nodes start fail order simulate missing data packets result algorithm able detect failures either time occurrence performed simulations calculated average number errors fault detection produced trees forest figure shows error rate remained simulation error rate includes early late detections certain sensor nodes stop functioning leads lack information impact quality predictions explains sudden increase error rate time conclude low error rate absence data packets increasing number trees helps improve quality accuracy predictions described section correlation introduced features figure shows number successful diagnostics number tree estimators forest changes shown figure 
method guarantees success rate number trees limited number grows accuracy method increases reach number trees around figure delay failure detection respect number simulations figure error rate diagnostics respect number simulations conclusion instead using wired sensor networks diagnostics health management method possible use wireless sensors use motivated cost reasons due specific particularities monitored device context changing number quality provided features use random forests may interest random classifiers recalled details article reason behind use context wireless sensors network monitoring explained finally algorithms first examples use random forests diagnostics using wireless sensor network provided simulation results showed algorithm guarantees certain level accuracy figure number successful diagnostics respect number trees even data packets missing future work authors intention compare various tools diagnostics random forests either considering wireless sensor networks wired ones comparisons carried theoretical practical aspects algorithm random forests part extended achieve prognostics health management finally method diagnosing industrial device tested life size model illustrate effectiveness proposed approach references yali amit donald geman shape quantization recognition randomized trees neural computation leo breiman bagging predictors machine learning leo breiman using adaptive bagging debias regressions technical report statics department ucb leo breiman random forests machine learning sourabh dash venkat venkatasubramanian challenges industrial applications fault diagnostic systems computers chemical engineering thomas dietterich experimental comparison three methods constructing ensembles decision trees bagging boosting randomization machine learning freund schapire experiments new boosting algorithm proceedings thirteenth international conference machine learning pages giorgio fumera fabio roli theoretical experimental analysis linear combiners multiple classifier systems ieee transactions pattern analysis machine intelligence tin kam random subspace method constructing decision forests ieee transactions pattern analysis machine intelligence shigeru kanemoto norihiro yokotsuka noritaka yusa masahiko kawabata diversity integration rotating machine health monitoring methods chemical engineering transactions number pages milan italy ramin moghaddass ming zuo integrated framework online diagnostic prognostic health monitoring using multistate deterioration process reliability engieneering system safety robert schapire brief introduction boosting proceedings sixteenth international joint conference artificial intelligence sharkey sharkey combining diverse neural nets knowledge egineering review alexey tsymbal seppo puuronen bagging boosting dynamic integration classifiers european conference principles practice knowledge discovery data bases pkdd pages tumer ghosh error correlation error reduction ensemble classifiers connection science verikas gelzinis bacauskiene mining data random forests survey results new tests pattern recognition | 2 |
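To connect the tree-growing pseudocode (Algorithm 1) and the entropy/Gini gain criteria described above with something executable, the following is a minimal Python sketch. All function and variable names (grow_tree, best_split, n_trees, and so on) and the default parameter values are illustrative choices, not taken from the paper; the sketch grows each tree on a bootstrap sample drawn with replacement, evaluates candidate splits on a random subset of features with an information-gain criterion, and classifies a new observation by majority vote over the forest.

# Minimal random-forest sketch mirroring Algorithm 1 above (illustrative only).
# The paper grows trees to maximum depth without pruning; the max_depth cap
# below is only a safeguard for this sketch, not part of the original method.
import numpy as np
from collections import Counter

def entropy(y):
    """Shannon entropy of the class labels at a node."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gini(y):
    """Gini dispersion index of the class labels at a node."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def information_gain(y, y_left, y_right, impurity=entropy):
    """Impurity reduction obtained by splitting y into y_left / y_right."""
    n = len(y)
    return impurity(y) - (len(y_left) / n) * impurity(y_left) \
                       - (len(y_right) / n) * impurity(y_right)

def best_split(X, y, feature_ids):
    """Best (feature, threshold) over a random subset of features."""
    best = (None, None, -np.inf)
    for f in feature_ids:
        for t in np.unique(X[:, f]):
            mask = X[:, f] <= t
            if mask.all() or (~mask).all():
                continue
            g = information_gain(y, y[mask], y[~mask])
            if g > best[2]:
                best = (f, t, g)
    return best

def grow_tree(X, y, n_features, depth=0, max_depth=10):
    """Recursively grow one decision tree."""
    if len(np.unique(y)) == 1 or depth >= max_depth:
        return Counter(y).most_common(1)[0][0]          # terminal node: a class
    feats = np.random.choice(X.shape[1], n_features, replace=False)
    f, t, gain = best_split(X, y, feats)
    if f is None or gain <= 0:
        return Counter(y).most_common(1)[0][0]
    mask = X[:, f] <= t
    return {"feature": f, "threshold": t,
            "left":  grow_tree(X[mask],  y[mask],  n_features, depth + 1, max_depth),
            "right": grow_tree(X[~mask], y[~mask], n_features, depth + 1, max_depth)}

def predict_tree(node, x):
    while isinstance(node, dict):
        node = node["left"] if x[node["feature"]] <= node["threshold"] else node["right"]
    return node

def random_forest(X, y, n_trees=50, n_features=None):
    """Grow n_trees trees, each on a bootstrap sample of (X, y)."""
    n, d = X.shape
    n_features = n_features or max(1, int(np.sqrt(d)))
    forest = []
    for _ in range(n_trees):
        idx = np.random.randint(0, n, n)                # bootstrap with replacement
        forest.append(grow_tree(X[idx], y[idx], n_features))
    return forest

def predict_forest(forest, x):
    """Majority vote over the trees of the forest."""
    votes = [predict_tree(t, x) for t in forest]
    return Counter(votes).most_common(1)[0][0]

In the setting of the simulations above, one would train with random_forest on the simulated observation history and call predict_forest once per time step at the sink; the number of trees then plays the role studied in the accuracy-versus-tree-count experiments.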
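The global failure level used in the experimental section, namely the maximum over sensor categories of the per-category functioning level at each time step, can also be sketched directly. The numeric thresholds and the aggregation of each category's readings by a max are placeholders and editorial assumptions for this sketch; the paper's actual threshold values for temperature, pressure, and humidity are not reproduced here.

# Sketch of per-category functioning levels (five levels, 0..4) and the global
# failure level L(t) = max over categories of f_cat(t), as described above.
# The threshold values below are placeholders, NOT the values used in the paper.
import numpy as np

HYPOTHETICAL_THRESHOLDS = {
    "temperature": [30.0, 40.0, 50.0, 60.0],   # degrees, placeholder values
    "pressure":    [1.5, 2.0, 2.5, 3.0],       # bars, placeholder values
    "humidity":    [60.0, 70.0, 80.0, 90.0],   # percent, placeholder values
}

def functioning_level(value, thresholds):
    """Map one sensed value to a level in 0..4 using four increasing thresholds."""
    return int(np.searchsorted(thresholds, value))

def global_failure_level(readings):
    """readings: dict mapping category name -> list of sensed values at one time step."""
    levels = []
    for cat, values in readings.items():
        thr = HYPOTHETICAL_THRESHOLDS[cat]
        # category level: worst (highest) level reported by that category's sensors
        levels.append(max(functioning_level(v, thr) for v in values))
    return max(levels)   # global level = max over the categories

# Example: one time step of (simulated) sensor readings
readings_t = {"temperature": [35.2, 41.0], "pressure": [1.2], "humidity": [65.0]}
print(global_failure_level(readings_t))   # -> 2 with the placeholder thresholds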
improvements deep convolutional neural networks lvcsr tara brian george george hagen tomas aleksandr bhuvana dec ibm watson research center yorktown heights department computer science university toronto tsainath bedk gsaon hsoltau tberan saravkin bhuvana asamir gdahl abstract deep convolutional neural networks cnns powerful deep neural networks dnn able better reduce spectral variation input signal also confirmed experimentally cnns showing improvements word error rate wer relative compared dnns across variety lvcsr tasks paper describe different methods improve cnn performance first conduct deep analysis comparing limited weight sharing full weight sharing features second apply various pooling strategies shown improvements computer vision lvcsr speech task third introduce method effectively incorporate speaker adaptation namely fmllr features fourth introduce effective strategy use dropout sequence training find improvements particularly fmllr dropout able achieve additional relative improvement wer broadcast news task previous best cnn baseline larger task find additional relative improvement previous best cnn baseline introduction deep neural networks dnns acoustic modeling speech recognition showing tremendous improvements order relative across variety small large vocabulary tasks recently deep convolutional neural networks cnns explored alternative type neural network reduce translational variance input signal example deep cnns shown offer relative improvement dnns across different lvcsr tasks cnn architecture proposed somewhat vanilla architecture used computer vision many years goal paper analyze justify appropriate cnn architecture speech investigate various strategies improve cnn results first architecture proposed used multiple convolutional layers full weight sharing fws found beneficial compared single fws convolutional layer locality speech known ahead time proposed use limited weight sharing lws cnns speech lws benefit allows local weight focus parts signal confusable previous work lws focused single lws layer work detailed analysis compare multiple layers fws lws second numerous improvements cnns computer vision particularly small tasks example using stochastic pooling provides better generalization max pooling used second using overlapping pooling pooling time also improves generalization test data furthermore cnns combining outputs different layers neural network also successful computer vision explore effectiveness strategies larger scale speech tasks third investigate using better features cnns features cnns must exhibit locality time frequency found features best cnns however speaker adapted features feature space maximum likelihood linear regression fmllr features typically give best performance dnns fmllr transformation applied directly correlated space however improvement observed fmllr transformations typically assume uncorrelated features paper propose methodology effectively use fmllr features involves transforming uncorrelated space applying fmllr space transforming new features back correlated space finally investigate role rectified linear units relu dropout sequence training cnns shown give good performance trained dnns employed however critical speech recognition performance providing additional relative gain dnn training dropout mask changes utterance however training guaranteed get conjugate directions dropout mask changes utterance therefore order make dropout usable keep dropout mask fixed per utterance iterations conjugate gradient within single 
iteration results proposed strategies first explored english broadcast news task find difference lws fws multiple layers lvcsr task second find various pooling strategies gave improvements computer vision tasks help much speech third observe improving cnn input features including fmllr gives improvements wer finally fixing dropout mask iterations lets use dropout sequence training avoids destroying gains dropout accrued training putting together improvements fmllr dropout find able obtain relative reduction wer compared cnn system proposed addition larger task also achieve relative improvement wer rest paper organized follows section describes basic cnn architecture serves starting point proposed modifications section discuss experiments pooling fmllr section presents results proposed improvements task finally section concludes paper discusses future work basic cnn architecture section describe basic cnn architecture introduced serve baseline system improve upon found two convolutional layers four fully connected layers optimal lvcsr tasks found pooling size appropriate first convolutional layer pooling used second layer furthermore convolutional layers feature maps respectively fully connected layers hidden units optimal feature set used filterbank coefficients including delta double delta using architecture cnns able achieve relative improvement dnns across many different lvcsr tasks paper explore feature architecture optimization strategies improve cnn results preliminary experiments performed english broadcast news task acoustic models trained hours english broadcast news speech corpora results reported ears set unless otherwise noted cnns trained results reported hybrid setup analysis various strategies lvcsr optimal feature set convolutional neural networks require features locally correlated time frequency implies linear discriminant analysis lda features commonly used speech used cnns remove locality frequency mel features one type speech feature exhibit locality property explore additional transformations applied features improve wer table shows wer function input feature cnns following observed using help map features canonical space offers improvements using fmllr input help one reason could fmllr assumes data well modeled diagonal model would work best decorrelated features however mel features highly correlated using delta capture timedynamic information feature helps using energy provide improvements conclusion appears mel optimal input feature set use feature set used remainder experiments unless otherwise noted feature mel mel mel fmllr mel mel energy wer table wer function input feature number convolutional fully connected layers cnn work image recognition makes use convolutional layers fully connected layers convolutional layers meant reduce spectral variation model spectral correlation fully connected layers aggregate local information learned convolutional layers class discrimination however cnn work done thus far speech introduced novel framework modeling spectral correlations framework allowed single convolutional layer adopt spatial modeling approach similar image recognition work explore benefit including multiple convolutional layers table shows wer function number convolutional fully connected layers network note experiment number parameters network kept table shows increasing number convolutional layers helps performance starts deteriorate furthermore see table cnns offer improvements dnns input feature set convolutional fully connected layers conv full dnn conv 
full conv full conv full wer table wer function convolutional layers number hidden units cnns explored image recognition tasks perform weight sharing across pixels unlike images local behavior speech features low frequency different features high frequency regions addresses issue limiting weight sharing frequency components close words low high frequency components different weights filters however type approach limits adding additional convolutional layers filter outputs different pooling bands related argue apply weight sharing across time frequency components using large number hidden units compared vision tasks convolutional layers capture differences low high frequency components type approach allows multiple convolutional layers something thus far explored speech table shows wer function number hidden units convolutional layers total number parameters network kept constant experiments observe increase number hidden units wer steadily decreases increase number hidden units past would require reduce number hidden units fully connected layers less order keep total number network parameters constant observed reducing number hidden units results increase wer able obtain slight improvement using hidden units first convolutional layer second layer hidden units convolutional layers typically used vision tasks many hidden units needed capture locality differences different frequency regions speech number hidden units wer table wer function hidden units limited full weight sharing speech recognition tasks characteristics signal lowfrequency regions different high frequency regions allows limited weight sharing lws approach used convolutional layers weights span small local region frequency lws benefit allows local weight focus parts signal confusable perform discrimination within small local region however one drawbacks requires setting hand frequency region filter spans furthermore many lws layers used limits adding additional sharing convolutional layers filter outputs different bands related thus locality constraint required convolutional layers preserved thus work lws point looked lws one layer alternatively full weight sharing fws idea convolutional layers explored similar done image recognition community approach multiple convolutional layers allowed shown adding additional convolutional layers beneficial addition using large number hidden units convolutional layers better captures differences low high frequency components since multiple convolutional layers critical good performance wer paper explore lws multiple layers specifically activations one lws layer locality preserving information fed another lws layer results comparing lws fws shown table note results stronger features opposed previous lws work used simpler lws fws used convolutional layers found optimal first notice increase number hidden units fws improvement wer confirming belief hidden units fws important help explain variations frequency input signal second find use lws match number parameters fws get slight improvements wer seems lws fws offer similar performance fws simpler implement choose filter locations limited weight ahead time prefer use fws fws parameters hidden units per convolution layer gives best tradeoff wer number parameters use setting subsequent experiments pooling experiments pooling important concept cnns helps reduce spectral variance input features similar explore method fws fws fws fws lws lws hidden units conv layers params wer table limited full weight sharing pooling frequency time shown optimal 
speech pooling dependent input sampling rate speaking style compare best pooling size two different tasks different characteristics namely speech switchboard telephone conversations swb speech english broadcast news table indicates pooling essential cnns tasks optimal pooling size note run experiment pooling already shown help swb pooling table wer pooling type pooling pooling important concept cnns helps reduce spectral variance input features work explored using max pooling pooling strategy given pooling region set activations operation shown equation max one problems overfit training data necessarily generalize test data two pooling alternatives proposed address problems pooling stochastic pooling pooling looks take weighted average activations pooling region shown equation seen simple form averaging corresponds one problems average pooling elements pooling region considered areas may downweight areas high activation pooling seen tradeoff average pooling shown give large improvements error rate computer vision tasks compared max pooling stochastic pooling another pooling strategy addresses issues max average pooling stochastic pooling first set probabilities region formed normalizing activations across region shown equation multinomial distribution created probabilities distribution sampled based pick location corresponding pooled activation shown equation stochastic pooling advantages prevents overfitting due stochastic component stochastic pooling also shown huge improvements error rate computer vision given success stochastic pooling compare strategies lvcsr task results three pooling strategies shown table stochastic pooling seems provide improvements max pooling though gains slight unlike vision tasks appears tasks speech recognition lot data thus better model estimates generalization methods stochastic pooling offer great improvements max pooling method max pooling stochastic pooling pooing wer table results different pooling types overlapping pooling work presented explore overlapping pooling frequency however work computer vision shown overlapping pooling improve error rate compared pooling one motivations overlapping pooling prevent overfitting table compares overlapping pooling lvcsr speech task one thing point overlapping pooling many activations order keep experiment fair number parameters overlapping pooling matched table shows difference wer overlapping pooling tasks lot data speech regularization mechanisms overlapping pooling seem help compared smaller computer vision tasks method pooling overlap pooling overlap wer table pooling without overlap pooling time previous cnn work speech explored pooling frequency though investigate cnns pooling time frequency however cnn work vision performs pooling space time paper deeper analysis pooling time speech one thing must ensure pooling time speech overlap pooling windows otherwise pooling time without overlap seen subsampling signal time degrades performance pooling time overlap thought way smooth signal time another form regularization table compares pooling time max stochastic pooling see pooling time helps slightly stochastic pooling however gains large likely diminished sequence training appears large tasks data regularizations pooling time helpful similar regularization schemes pooling pooling overlap frequency method baseline pooling time max pooling time stochastic pooling time wer table pooling time incorporating cnns section describe various techniques incorporate speaker adapted features cnns fmllr features since cnns 
model correlation time frequency require input feature space property implies commonly used feature spaces linear discriminant analysis used cnns shown good feature set cnns filter bank coefficients maximum likelihood linear regression fmllr popular technique used reduce variability speech due different speakers fmllr transformation applied features assumes either features uncorrelated modeled diagonal covariance gaussians features correlated modeled full covariance gaussians correlated features better modeled gaussians matrices dramatically increase number parameters per gaussian component oftentimes leading parameter estimates robust thus fmllr commonly applied decorrelated space fmllr applied correlated feature space diagonal covariance assumption little improvement wer observed covariance matrices stcs used decorrelate feature space modeled diagonal gaussians stc offers added benefit allows full covariance matrices shared many distributions distribution diagonal covariance matrix paper explore applying fmllr correlated features first decorrelating appropriately use diagonal gaussian approximation fmllr transform fmllr features back correlated space used cnns algorithm described follows first starting correlated feature space estimate stc matrix map features uncorrelated space mapping given transformation next uncorrelated space fmllr matrix estimated applied stc transformed features shown transformation msf thus far transformations demonstrate standard transformations speech stc fmllr matrices however speech recognition tasks features decorrelated stc transformation fmllr fbmmi applied decorrelated space shown transformation features never transformed back correlated space however cnns using correlated features critical multiplying fmllr transformed features inverse stc matrix map decorrelated fmllr features back correlated space used cnn transformation propose given transformation msf information captured layer neural network varies general specific concepts example speech lower layers focus speaker adaptation higher layers focus discrimination section look combine inputs different layers neural network explore complementarity different layers could potentially improve results idea known neural networks explored computer vision specifically look combining output fullyconnected convolutional layers output fed layers entire network trained jointly thought combining features generated network note experiment input feature features used dnn cnn streams results shown table small gain observed combining dnn cnn features much smaller gains observed computer vision however given small improvement comes cost large parameter increase gains achieved increasing feature maps cnn alone see table see huge value idea possible however combining cnns dnns different types input features complimentary could potentially show improvements order optimization method critical performance gains sequence training compared optimization though important rectified linear units relu dropout recently proposed way regularize large neural networks fact shown provide relative reduction wer dnns english broadcast news lvcsr task however subsequent sequence training used dropout erased gains performance similar dnn trained sigmoid dropout given importance neural networks paper propose strategy make dropout effective sequence training results presented context cnns though algorithm also used dnns training one popular order technique dnns optimization let denote network parameters denote loss function denote gradient loss 
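The feature pipeline proposed here, decorrelate with the semi-tied covariance (STC) transform, apply fMLLR in that decorrelated space, then map back with the inverse STC matrix so the CNN still sees correlated filterbank-like features, is just a composition of linear maps once the matrices are estimated. The sketch below shows only that composition with random stand-in matrices: real fMLLR is an affine per-speaker transform and the STC/fMLLR estimation itself is not reproduced.

```python
import numpy as np

def cnn_input_features(f, M_stc, M_fmllr):
    """Compose the transforms described in the text: decorrelate with the
    STC matrix, apply the speaker (fMLLR) transform in the decorrelated
    space, then map back to the correlated space with the inverse STC
    matrix so the result keeps the local correlation a CNN relies on.

    f        : (d,) correlated feature vector (e.g. a log-mel frame)
    M_stc    : (d, d) semi-tied covariance transform (assumed given)
    M_fmllr  : (d, d) linear part of the fMLLR transform, estimated in
               the decorrelated space (bias term omitted for brevity)
    """
    decorrelated = M_stc @ f
    adapted = M_fmllr @ decorrelated
    return np.linalg.solve(M_stc, adapted)     # = inv(M_stc) @ adapted

# toy example with random matrices standing in for the estimated transforms
d = 4
rng = np.random.default_rng(1)
f = rng.normal(size=d)
M_stc = np.eye(d) + 0.1 * rng.normal(size=(d, d))
M_fmllr = np.eye(d) + 0.05 * rng.normal(size=(d, d))
print(cnn_input_features(f, M_stc, M_fmllr))
```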
respect parameters denote search direction denote hessian approximation matrix characterizing curvature loss around central idea optimization iteratively form quadratic approximation loss minimize approximation using conjugate gradient iteration algorithm first gradient computed using training examples second since hessian computed exactly curvature matrix approximated damped version matrix set via conjugate gradient run relative progress made minimizing objective function falls certain tolerance iteration products computed sample training data results dropout results proposed fmllr idea shown table notice applying fmllr decorrelated space achieve improvement baseline system gain possible fmllr applied directly correlated features dropout popular technique prevent neural network training specifically operation neural network training dropout omits hidden unit randomly probability prevents complex hidden units forcing hidden units depend units specifically using dropout activation layer given equation input layer weight layer bias activation function relu binary mask entry drawn bernoulli distribution probability since dropout used decoding factor used training ensures test time units dropped correct total input reach layer feature proposed fmllr wer table wer improved fmllr features rectified linear units dropout ibm two stages neural network training performed first dnns trained stochastic gradient descent sgd criterion second dnn weights using objective function since speech task objective appropriate speech recognition problem numerous studies shown sequence training provides additional relative improvement trained dnn using combining dropout conjugate gradient tries minimize quadratic objective function given equation iteration damped gaussnetwon matrix estimated using subset training data subset fixed iterations data used estimate changes longer guaranteed conjugate search directions iteration iteration recall dropout produces random binary mask presentation training instance however order guarantee good conjugate search directions given utterance dropout mask per layer change appropriate way incorporate dropout allow dropout mask change different layers different utterances fix iterations working specific layer specific utterance although masks refreshed iterations number network parameters large saving dropout mask per utterance layer infeasible therefore randomly choose seed utterance layer save using randomize function seed guarantees dropout mask used per utterance results experimentally confirm using dropout probability layers reasonable dropout layers zero experiments use hidden units fully connected layers found beneficial dropout compared hidden units results different dropout techniques shown table notice dropout used wer sigmoid result also found dnns using dropout fixing dropout mask per utterance across iterations achieve improvement wer finally compare varying dropout mask per training iteration wer increases investigation figure shows vary dropout mask slow convergence loss training particularly number iterations increases later part training shows experimental evidence dropout mask fixed guarantee iterations produce conjugate search directions loss function sigmoid relu dropout relu dropout fixed iterations relu dropout per iteration wer table wer sequence training dropout training closely linked speech recognition objective function compared using fact explore many iterations actually necessary moving training table shows wer different iterations corresponding wer 
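The trick for making dropout compatible with the conjugate-gradient runs, fixing the Bernoulli mask per (utterance, layer) across CG iterations while storing only a random seed per pair rather than the mask itself, can be sketched as below. The dropout probability and the inverted-dropout scaling convention are illustrative choices, not values taken from the text.

```python
import numpy as np

P_DROP = 0.5                      # illustrative dropout probability

def dropout_mask(seed, shape, p=P_DROP):
    """Regenerate the identical Bernoulli mask from a stored seed, so the
    mask stays fixed across CG iterations for a given (utterance, layer)."""
    rng = np.random.default_rng(seed)
    return (rng.random(shape) > p) / (1.0 - p)        # inverted-dropout scaling

# storing a full mask per utterance and layer is infeasible for large nets,
# so only one seed per (utterance, layer) pair is kept
mask_seeds = {}

def dropped_activations(utt_id, layer_id, activations):
    key = (utt_id, layer_id)
    if key not in mask_seeds:                         # drawn once, reused until refreshed
        mask_seeds[key] = np.random.randint(2**31)
    return activations * dropout_mask(mask_seeds[key], activations.shape)

a = np.maximum(np.random.randn(8), 0.0)               # ReLU activations for one utterance
print(dropped_activations("utt_001", 2, a))
print(dropped_activations("utt_001", 2, a))           # identical mask on the second call
```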
training note training started lattices dumped using weight stopped notice annealing two times achieve wer training compared weights converge points fact spending much time unnecessary weights relatively decent space better jump sequence training closely matched speech objective function iter times annealed wer wer table seq training wer per iteration results section analyze cnn performance additions proposed section namely fmllr relu dropout results shown english broadcast news task english broadcast news experimental setup following setup hybrid dnn trained using speakeradapted features input context frames dnn hidden units per layer sixth softmax layer output targets used dnns followed training feature system also trained architecture uses output targets pca applied top dnn softmax reduce dimensionality using features apply gmm training followed feature discriminative training using bmmi criterion order fairly compare results dnn hybrid system mllr applied dnn featurebased system old cnn systems trained features sigmoid proposed systems trained fmllr features described section discussed section dropout fixed per dropout varied per results loss iteration fig loss dropout techniques finally explore reduce number iterations moving sequence training main advantage sequence table shows performance proposed feature hybrid systems compares dnn old cnn systems proposed cnn hybrid system offers relative improvement dnn hybrid relative improvement old cnn hybrid system proposed cnnbased feature system offers modest improvement old feature system slight improvements featurebased system surprising observed huge relative improvements wer hybrid sequence trained dnn output targets compared hybrid dnn however features extracted systems gains diminish relative systems use neural network learn feature transformation seem saturate performance even hybrid system used extract features improves thus table shows potential improve hybrid system opposed system model hybrid dnn old hybrid cnn proposed hybrid cnn features old features proposed features table wer broadcast news hours english broadcast news hinton deng dahl mohamed jaitly senior vanhoucke nguyen sainath kingsbury deep neural networks acoustic modeling speech recognition ieee signal processing magazine vol lecun bengio convolutional networks images speech handbook brain theory neural networks mit press mohamed jiang penn applying convolutional neural network concepts hybrid model speech recognition proc icassp sainath mohamed kingsbury ramabhadran deep convolutional neural networks lvcsr proc icassp deng deep convolutional neural network using heterogeneous pooling trading acoustic invariance phonetic confusion proc icassp sermanet chintala lecun convolutional neural networks applied house numbers digit classification pattern recognition icpr international conference experimental setup explore scalability proposed techniques hours english broadcast news development done darpa ears set testing done darpa ears evaluation set dnn hybrid system uses fmllr features context use five hidden layers containing sigmoidal units feature system trained output targets hybrid system output targets results reported sequence training proposed systems trained fmllr features described section discussed section results table shows performance proposed cnn system compared dnns old cnn system proposed feature system improve wer old cnn wer performance slightly deteriorates cnnbased features extracted network however cnn offers relative improvement dnn hybrid system 
relative improvement old features systems helps strengthen hypothesis hybrid cnns potential improvement proposed fmllr techniques provide substantial improvements dnns cnns sigmoid features model hybrid dnn features old features proposed features proposed hybrid cnn references table wer broadcast news hrs conclusions paper explored various strategies improve cnn performance incorporated fmllr cnn features also made dropout effective sequence training also explored various pooling weight sharing techniques popular computer vision found offer improvements lvcsr tasks overall proposed ideas able improve previous best cnn results relative zeiler fergus stochastic pooling regularization deep convolutional neural networks proc international conference representaiton learning iclr krizhevsky sutskever hinton imagenet classification deep convolutional neural networks advances neural information processing systems lecun huang bottou learning methods generic object recognition invariance pose lighting proc cvpr gales maximum likelihood linear transformations hmmbased speech recognition computer speech language vol kingsbury sainath soltau scalable minimum bayes risk training deep neural network acoustic models using distributed optimization proc interspeech dahl sainath hinton improving deep neural networks lvcsr using rectified linear units dropout proc icassp waibel hanazawa hinton shikano lang phoneme recognition using neural networks ieee transactions acoustics speech signal processing vol gales covariance matrices hidden markov models ieee transactions speech audio processing vol kingsbury optimization sequence classification criteria acoustic modeling proc icassp hinton srivastava krizhevsky sutskever salakhutdinov improving neural networks preventing coadaptation feature detectors computing research repository corr vol martens deep learning via optimization proc intl conf machine learning icml sainath kingsbury ramabhadran bottleneck features using deep belief networks proc icassp | 9 |
model identification via physics engines improved policy search oct shaojun zhu andrew kimmel kostas bekris abdeslam boularias paper presents practical approach identifying unknown mechanical parameters mass friction models manipulated rigid objects actuated robotic links succinct manner aims improve performance policy search algorithms key features approach use physics engines adaptation bayesian optimization framework purpose physics engine used reproduce simulation experiments performed real robot mechanical parameters simulated system automatically simulated trajectories match real ones optimized model used learning policy simulation safely deploying real robot given limitations physics engines modeling objects generally possible find mechanical model reproduces simulation real trajectories exactly moreover many scenarios policy found without perfect knowledge system therefore searching perfect model may worth computational effort practice proposed approach aims identify model good enough approximate value locally optimal policy certain confidence instead spending computational resources searching accurate model empirical evaluations performed simulation real robotic manipulation task show model identification via physics engines significantly boost performance policy search algorithms popular robotics trpo power pilco additional data introduction paper presents approach model identification exploiting availability physics engines used simulating dynamics robots objects interact many examples popular physics engines becoming increasingly efficient physics engines take inputs mechanical mesh models objects particular environment addition forces torques applied different return predictions objects would move accuracy predicted motions depends several factors first one limitation mathematical model used engine coulomb law friction second factor accuracy numerical algorithm used solving differential equations motion finally prediction depends accuracy mechanical parameters robot objects models mass friction elasticity work focus puter authors science department rutgers university new jersey comusa baxter robot needs pick bottle reach motoman robot motoman gently pushes object locally without risking lose observed motions mechanical properties object identified via physics engine object pushed baxter workspace using policy learned simulation identified property parameters fig last factor propose method improving accuracy mechanical parameters used physical simulations motivation consider setup illustrated figure static robot motoman assists another one baxter reach pick desired object bottle object known robots capability pick however object reached motoman baxter due considerable distance two static robots intersection reachable workspace empty restricts execution direct case motoman robot must learn action rolling sliding would move bottle distant target zone robot simply executes maximum velocity push object result causes object fall table similarly object rolled slowly could end stuck region two robot workspaces neither could reach outcomes undesirable would ruin autonomy system require human intervention reset scene perform action example highlights need identifying object mechanical model predict object would end table given different initial velocities optimal velocity could derived accordingly using simulations technique presented paper aims improving accuracy mechanical parameters used physics engines order perform given robotic task given recorded real trajectories object question 
search best model parameters simulated trajectories close possible observed real trajectories search performed anytime bayesian optimization probability distribution belief optimal model repeatedly updated time consumed optimization exceeds certain preallocated time budget optimization halted model highest probability returned policy search subroutine takes returned model finds policy aims perform task policy search subroutine could control method lqr reinforcement learning algorithm runs physics engine identified model instead real world sack also safety obtained policy deployed robot run real world new observed trajectories handed back model identification module repeat process question arises accurate identified model order find optimal policy instead spending significant amount time searching accurate model would useful stop search whenever model sufficiently accurate task hand found answering question exactly difficult would require knowing advance optimal policy model case model identification process stopped simply consensus among likely models policy optimal solution problem motivated key quality desired robot algorithms ensure safety robot algorithms constrain changes policy two iterations minimal gradual instance policy search reps trust region policy optimization trpo algorithms guarantee distance updated policy previous one learning loop bounded predefined constant therefore one practice use previous best policy proxy verify consensus among likely models best policy next iteration policy search justified fact new policies different previous one policy search model identification process stopped whenever likely models predict almost value previous policy terms models reached high probability anytime optimization predict value previous policy models could used searching next policy current paper provide theoretical guarantees proposed method empirical evaluations show indeed improve performance several algorithms first part experiments performed systems mujoco simulator second part performed robotic task shown figure elated ork two approaches exist learning perform tasks systems unknown parameters ones methods search policy best solves task without explicitly learning system dynamics methods accredited recent success stories video games example robot learning reps algorithm used successfully train robot play table tennis power algorithm another policy search approach widely used learning motor skills trust region policy optimization trpo algorithm arguably technique policy search also achieved bayesian optimization used gait optimization central pattern generators popular policy parameterization methods however tend require lot training data also jeopardize safety robot approaches alternatives explicitly learn unknown parameters system search optimal policy accordingly many examples approaches robotic manipulation used simulation predict effects pushing flat objects smooth surface nonparametric approach employed learning outcome pushing large objects pilco algorithm proven efficient utilizing small amount data learn dynamical models optimal policies several cognitive models combine bayesian inference approximate knowledge newtonian physics proposed recently common characteristic many methods fact learn transition function using purely statistical approach without taking advantage known equations motion narmax example popular model identification techniques specifically designed dynamics contrast methods use physics engine concentrate identifying mechanical properties objects 
instead learning laws motion scratch also work identifying sliding models objects using optimization clear however methods would perform since tailored specific tasks pushing planar objects unlike proposed general approach increasingly popular alternative addresses challenges learning involves demonstration successful examples physical interaction learning direct mapping sensing input controls desirable result approaches usually require many physical experiments effectively learn recent works also proposed use physics engines combination experiments boost policy search algorithms although methods explicitly identify mechanical models objects key contribution current work linking modelidentification process policy search process instead searching accurate model search model accurate enough predict value function policy different searched policy therefore proposed approach used combination policy search algorithm guarantees smooth changes learned policy physical interaction force simulation error position model distribution simulate model simulation error model distribution position simulate model simulation error simulation error simulate model error model distribution err obs ith oce err obs ith oce use final model distribution find force new state actual observed pose time error pose time position learning mechanical model object bottle physical simulations key idea search model closes gap simulation physics engine reality using anytime bayesian optimization search stops models highest probabilities predict similar values given policy process repeated figure every action practice efficient model identification certain number fig iii roposed pproach start overview model identification policy search system present main algorithm explain model identification part details system overview notations figure shows overview proposed approach example focused manipulation application approach used identify physical properties actuated robotic links object manipulation problems figure targeted mechanical properties correspond object mass static kinetic friction coefficients different regions object surface object divided regular grid allows identify friction parameters part grid physical properties concatenated single vector represented vector space possible values physical properties discretized regular grid resolution proposed approach returns distribution discretized instead single point since model identification generally problem terms multiple models explain observed movement object equal accuracies objective preserve possible explanations probabilities online model identification algorithm takes input prior distribution discretized space physical properties calculated based initial distribution sequence observations instance case object manipulation pose position orientation manipulated object time vector describing force applied robot fingertip object time applying force results changing object pose algorithm returns distribution models robot task specified reward function maps real numbers policy returns action state value policy given model defined fixed horizon given starting state predicted state time simulating force state using physical parameters simplicity focus systems deterministic dynamics main algorithm given reward function simulator model parameters many techniques used searching policy maximizes value example one use differential dynamic programming ddp monte carlo methods simply run modelfree algorithm simulator system highly nonlinear good policy found former methods choice 
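A minimal sketch of the overall loop described in this overview: run the current policy on the real robot, re-identify the object's physical parameters in the physics engine from the collected trajectories, then search for an improved policy purely in simulation under the most probable model. Every component is left as a placeholder callable; the paper's concrete choices (e.g. TRPO for policy search, Bullet or MuJoCo for simulation) are not reimplemented here.

```python
import numpy as np

def main_loop(real_rollout, identify_model, policy_search,
              candidate_models, prior, n_rounds=10):
    """Alternate between data collection on the real robot, model
    identification in the simulator, and policy search in simulation.
    All callables are placeholders for the concrete robot, physics-engine
    and policy-search components."""
    belief = np.array(prior, float)            # P(theta) over the discretized models
    policy = None                              # start from a conservative initial policy
    data = []
    for _ in range(n_rounds):
        data.append(real_rollout(policy))                        # states, forces, poses
        belief = identify_model(data, candidate_models, belief,
                                reference_policy=policy)
        best = candidate_models[int(np.argmax(belief))]          # theta* = arg max P(theta)
        policy = policy_search(best, init_policy=policy)         # runs in simulation only
    return policy, belief
```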
particular policy search method open depends task main loop system presented algorithm consists repeating three main steps data collection using real robot model identification using simulator policy search simulation using best identified model model identification process explained algorithm consists simulating effects forces object states initialize distribution uniform distribution initialize policy repeat execute policy iterations real robot collect new data run algorithm collected data reference policy updating distribution initialize policy search algorithm trpo run algorithm simulator model arg find improved policy timeout algorithm main loop various values parameters observing resulting states accompanying implementation using bullet mujoco physics engines purpose goal identify model parameters make outcomes simulation close possible real observed outcomes terms following optimization problem solved arg min wherein observed states object times force moved object predicted state time simulating force state using simulations computationally expensive therefore important minimize number simulations evaluations function searching optimal parameters solve equation solve problem using entropy search technique method wellsuited purpose explicitly maintains belief optimal parameters unlike bayesian optimization methods expected improvement maintain belief objective function following explain technique adapted purpose show keeping distribution models needed deciding stop optimization error function analytical form gradually learned sequence simulations small number parameters choose parameters efficiently way quickly leads accurate parameter estimation belief actual error function maintained belief probability measure space functions represented gaussian process mean vector covariance matrix mean covariance learned data points selected vector physical properties object accumulated distance actual observed states states obtained simulation using input data discretized space possible values physical properties reference policy minimum maximum number evaluated models kmin kmax model confidence threshold value error threshold output probability distribution sample uniform stop alse repeat calculating accuracy model simulate using physics engine physical parameters get predicted next state end calculate error function using data monte carlo sampling sample foreach end selecting next model evaluate checking stopping condition arg log kmin arg calculate values models probability using physics engine simulating trajectories models stop true end end kmax stop true end stop true algorithm model identification probability distribution identity best physical model returned algorithm computed learned arg min heaviside step function otherwise probability function according learned mean covariance intuitively expected number times happens minimizer function distributed according density distribution equation closedform expression therefore monte carlo sampling employed estimating process samples vectors containing values could take according learned gaussian process discretized space estimated counting ratio sampled vectors values simulation error happens make lowest error indicated equation algorithm finally computed distribution used select next vector use physical model simulator process repeated entropy drops certain threshold algorithm runs allocated time budget entropy given log pmin entropy close zero mass distribution concentrated around single vector corresponding physical model best explains 
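A runnable toy version of the model-identification step just described: the simulation-versus-reality error over the discretized model space is modeled with a Gaussian process, P_min is estimated by Monte Carlo sampling of error functions from the GP posterior, and the next model to simulate is picked by the entropy-contribution criterion. scikit-learn's GP regressor stands in for the GP machinery, and the one-dimensional "mass" grid, kernel and evaluation budget are illustrative choices.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def pmin_by_monte_carlo(gp, grid, n_samples=500, seed=0):
    """P_min(theta): probability that each grid model minimizes the
    simulation error, estimated by sampling error functions from the GP
    posterior and counting where each sample's minimum lands."""
    draws = gp.sample_y(grid, n_samples=n_samples, random_state=seed)  # (n_grid, n_samples)
    winners = np.argmin(draws, axis=0)
    return np.bincount(winners, minlength=len(grid)) / n_samples

def identify_model(simulation_error, grid, n_evals=15, seed=0):
    """Greedy entropy search over a discretized model space.
    simulation_error(theta) must run the physics engine with parameters
    theta and return the accumulated distance between the simulated and
    the observed trajectories (the expensive evaluation)."""
    rng = np.random.default_rng(seed)
    thetas, errors = [], []
    theta = grid[rng.integers(len(grid))]                  # random first model
    for _ in range(n_evals):
        thetas.append(theta)
        errors.append(simulation_error(theta))
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6)
        gp.fit(np.array(thetas), np.array(errors))
        pmin = pmin_by_monte_carlo(gp, grid)
        contrib = -pmin * np.log(np.clip(pmin, 1e-12, None))   # entropy contribution
        theta = grid[int(np.argmax(contrib))]              # greedy: largest contributor
    return pmin                                            # belief over the grid of models

# toy 1-D example: recover a scalar "mass" whose true value is 1.3
grid = np.linspace(0.5, 2.0, 61).reshape(-1, 1)
belief = identify_model(lambda m: (m[0] - 1.3) ** 2 + 0.01 * np.random.randn(), grid)
print(grid[int(np.argmax(belief))])
```

The entropy of the returned belief can be monitored between evaluations and the search halted once it drops below a threshold or the time budget is exhausted, as in the algorithm described above.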
observations hence next selected entropy would decrease adding data point train using new mean covariance equation entropy search methods follow reasoning use sample potential choice number values could take according order estimate expected change entropy choose parameter vector expected decrease entropy existence secondary nested process sampling makes method unpractical online optimization instead present simple heuristic choosing next method call greedy entropy search next chosen point contributes entropy arg max log selection criterion greedy anticipate output simulation using would affect entropy nevertheless criterion selects point causing entropy high point good chance also high uncertainty log found first experiments heuristic version entropy search practical original entropy search method computationally expensive nested sampling loops used original method stopping condition algorithm depends predicted value reference policy reference policy one used main algorithm algorithm starting point policy search identified model also policy executed previous round main algorithm many policy search algorithms reps trpo guarantee divergence consecutive policies minimal therefore difference two given models smaller threshold difference also smaller threshold function full proof conjecture subject upcoming work practice means two models high probabilities point continuing bayesian optimization find one two models actually accurate models result similar policies argument could used two models high probabilities tasks one motivation example figure policy used data collection significantly different policy used actually performing task policy used collect data consists moving object slowly without risking make move away reachable workspace motoman otherwise human intervention would needed optimal policy hand consists striking object certain high velocity therefore policy used proxy optimal policy algorithm instead use actual optimal policy respect likely model arg turns finding optimal policy given model specific task performed quickly simulation searching space discretized striking velocities case complex systems searching optimal policy computationally expensive reason use previous best policy surrogate next best policy checking stopping condition xperimental esults proposed model identification vgmi approach validated simulation real robotic manipulation task compared methods experiments benchmarks simulation setup simulation experiments done openai gym figure mujoco physics simulator space unknown physical models described inverted pendulum pendulum connected cart moves linearly dimensionality space two one mass pendulum one cart swimmer swimmer planar robot space three dimensions one mass link hopper hopper planar robot thus dimensionality parameter space four walker planar biped robot thus dimensionality parameter space seven environments use simulator default mass real system increase decrease masses ten fifty percent randomly create inaccurate simulators use prior models section policies trained trust region policy optimization trpo implemented rllab policy network two hidden layers neurons swimmer hopper fig openai gym systems used experiments entropy search greedy entropy search trajectory error meters time seconds fig model identification inverted pendulum environment using two variants entropy search start comparing greedy entropy search ges original entropy search problem identifying mass parameters inverted pendulum system rollout trajectories collected using optimal policies 
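The stopping test discussed above, stop refining the model once all sufficiently probable models agree on the value of the reference (previous) policy, can be written as a small check. The probability and value thresholds and the cap on how many models to simulate are placeholders for the kmin/kmax and epsilon parameters used in the algorithm, not values from the text.

```python
import numpy as np

def consensus_reached(belief, models, value_of_policy, reference_policy,
                      prob_threshold=0.05, value_threshold=0.1, max_models=5):
    """Return True once every model whose posterior probability exceeds
    prob_threshold (up to max_models of the most probable ones) predicts
    nearly the same value for the reference policy when that policy is
    simulated under the model."""
    order = np.argsort(belief)[::-1][:max_models]
    likely = [i for i in order if belief[i] >= prob_threshold]
    if len(likely) < 2:
        return False                      # not enough probable models to compare yet
    values = [value_of_policy(reference_policy, models[i]) for i in likely]
    return max(values) - min(values) <= value_threshold
```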
learned real system given inaccurate simulators control sequence rollouts try identify mass parameters enables simulator generate trajectories close real ones figure shows ges converges faster similar behaviors observed systems reported space sake refer main algorithm detailed algorithm section starts inaccurate simulator vgmi gradually increases accuracy simulator compare trpo trained directly real system trpo trained inaccurate simulators depending problem difficulty vary number iterations policy optimization trpo real system inaccurate simulators run inverted pendulum interactions swimmer iterations hopper iterations iterations run vgmi detailed algorithm every iterations algorithm run iterations inverted pendulum iterations swimmer iterations hopper iterations results mean variance independent trials statistical significance results report performance terms number rollouts real system total training time number rollouts represents data efficiency policy search algorithms corresponds actual number trajectories real system total training time total simulation policy optimization time used trpo converge also includes time spent model identification figure shows mean cumulative reward per rollout trajectory real systems functions number rollouts used training four tasks requires less rollouts rollouts used vgmi identify optimal mass parameter simulator policy search used directly policy search trpo results show models identified vgmi accurate enough trpo find good policy using amount data figure shows cumulative reward per trajectory real system function total time seconds also report performance trpo trained inaccurate simulators worse trained directly real system real system also simulator different physical parameters clearly shows advantage model identification data policy search slower trpo extra time spent model identification policy search learned simulator summary vgmi boosts trpo identifying parameters objects using physics engine identified parameters search policy deploying real system hand vgmi adds computational burden trpo manipulation experiments real robot setup task experiment push bottle one meter away one side table shown figure goal find optimal policy parameter representing pushing velocity robotic hand pushing direction always towards target position hand pushes object geometric center data collection human effort needed reset scene velocity pushing direction controlled object always workspace robotic hand specifically pushing velocity limit set pushing direction always towards center workspace proposed approach iteratively searches best pushing velocity uniformly sampling different velocities simulation identifies object model parameters mass friction coefficient using trajectories rollouts running vgmi algorithm experiment run vgmi rollout algorithm method compared two reinforcement learning methods power pilco power reward function dist distance object position pushing desired target fig cumulative reward per trajectory function number trajectories real system trajectories second simulator identified models counted occur real system fig cumulative reward per trajectory function total time seconds including search optimization times fig examples experiment motoman pushes object baxter workspace figure provides real robotic experiment motoman robot proposed method achieves lower final object location error fewer number object drops comparing alternatives reduction object drops especially important autonomous robot learning minimizes human effort learning approach 
power results higher location error object drops pilco performs better power also learns dynamical model addition policy model may accurate physics engine identified parameters simple policy search method used vgmi performance expected better advanced policy search methods combining power vgmi power pilco vgmi power pilco vgmi times object falls table results two metrics used evaluating performance distance final object location pushed desired goal location number times object falls table video experiments found supplementary video https location error meters position pilco state space object position number trials number trials fig pushing policy optimization results using motoman robot method vgmi achieves lower final object location error fewer object drops comparing alternatives best viewed color onclusion paper presents practical approach integrates physics engine bayesian optimization model identification increase data efficiency reinforcement learning algorithms model identification process taking place parallel reinforcement learning loop instead searching accurate model objective identify model accurate enough predict value function policy different current optimal policy therefore proposed approach used combination policy search algorithm guarantees smooth changes learned policy simulated real robotic manipulation experiments show proposed technique model identification decrease number rollouts needed learn optimal policy future works include performing analysis properties proposed model identification method expressing conditions inclusion model identification approach reduces needs physical rollouts convergence terms physical rollouts also interesting consider alternative physical tasks locomotion challenges benefit proposed framework eferences erez tassa todorov simulation tools robotics comparison bullet havok mujoco ode physx ieee international conference robotics automation icra bullet physics engine online available mujoco physics engine online available dart physics egnine online available http physx physics engine online available havok physics engine online available sutton barto introduction reinforcement learning cambridge usa mit press bertsekas tsitsiklis programming athena scientific kober bagnell peters reinforcement learning robotics survey international journal robotics research july mnih kavukcuoglu silver rusu veness bellemare graves riedmiller fidjeland ostrovski petersen beattie sadik antonoglou king kumaran wierstra legg hassabis control deep reinforcement learning nature vol online available http peters relative entropy policy search proceedings aaai conference artificial intelligence aaai kober peters policy search motor primitives robotics advances neural information processing systems schulman levine abbeel jordan moritz trust region policy optimization proceedings international conference machine learning blei bach eds jmlr workshop conference proceedings online available http pdf calandra seyfarth peters deisenroth bayesian optimization learning gaits uncertainty annals mathematics artificial intelligence amai vol ijspeert central pattern generators locomotion control animals robots neural networks vol dogar hsiao ciocarlie srinivasa grasp planning clutter robotics science systems viii july lynch mason stable pushing mechanics controllability planning ijrr vol merili veloso akin complex passive mobile objects using experimentally acquired motion models autonomous robots scholz levihn isbell wingate model prior mdps proceedings international conference 
machine learning icml zhou paolini bagnell mason convex polynomial model planar sliding identification application ieee international conference robotics automation icra stockholm sweden may deisenroth rasmussen fox learning control manipulator using reinforcement learning robotics science systems rss hamrick battaglia griffiths tenenbaum inferring mass complex scenes mental simulation cognition vol chang ullman torralba tenenbaum compositional approach learning physical dynamics review conference paper iclr battaglia pascanu lai rezende koray interaction networks learning objects relations physics advances neural information processing systems ljung system identification theory user upper saddle river usa prentice hall ptr leonard rodriguez shape pose recovery planar pushing international conference intelligent robots systems iros hamburg germany september october agrawal nair abbeel malik levine learning poke poking experiential learning intuitive physics nips fragkiadaki agrawal levine malik learning visual predictive models physics playing billiards iclr ullman goodman tenenbaum learning physics dynamical scenes proceedings thirtysixth annual conference cognitive science society yildirim lim freeman tenenbaum galileo perceiving physical object properties integrating physics engine deep learning advances neural information processing systems byravan fox learning rigid body motion using deep neural networks corr vol finn levine deep visual foresight planning robot motion icra zhang zhang freeman tenenbaum comparative evaluation approximate probabilistic simulation deep neural networks accounts human physical scene understanding corr vol azimi leonardis fritz fall fall visual approach physical stability prediction vol lerer gross fergus learning physical intuition block towers example proceedings international conference machine learning icml new york city usa june pinto gandhi han park gupta curious robot learning visual representations via physical interactions corr vol leonardis fritz visual stability prediction application manipulation corr vol denil agrawal kulkarni erez battaglia freitas learning perform physics experiments via deep reinforcement learning liu turk preparing unknown learning universal policy online system identification corr vol online available http marco berkenkamp hennig schoellig krause schaal trimpe virtual real trading simulations physical experiments reinforcement learning bayesian optimization ieee international conference robotics automation icra singapore singapore may june online available https hennig schuler entropy search global optimization journal machine learning research vol rasmussen williams gaussian processes machine learning mit press brockman cheung pettersson schneider schulman tang zaremba openai gym arxiv preprint duan chen houthooft schulman abbeel benchmarking deep reinforcement learning continuous control icml | 2 |
september multivariate density modeling retirement finance christopher rook abstract prior financial crisis mortgage securitization models increased sophistication products built insure losses layers complexity formed upon foundation could support foundation crumbled housing market followed foundation gaussian copula failed correctly model correlations derivative securities duress retirement surveys suggest greatest fear running money retirement decumulation models become increasingly sophisticated large financial firms may guarantee success similar investment bank failure event retirement ruin driven outliers correlations times stress would desirable foundation able support increased complexity forms however industry currently relies upon similar gaussian lognormal dependence structures propose multivariate density model fixed marginals tractable fits data skewed multimodal arbitrary complexity allowing rich correlation structure also ideal retirement plan fitting historical data seeded black swan events preliminary section reviews concepts used fully documented source code attached making research lastly take opportunity challenge existing retirement finance dogma also review recent criticisms retirement ruin probabilities suggested replacement metrics table contents introduction literature review preliminaries iii univariate density modeling multivariate density modeling covariances multivariate density modeling real compounding return diversified portfolio vii retirement portfolio optimization viii conclusion references data surveys appendix source code keywords variance components algorithm ecme algorithm maximum likelihood pdf cdf information criteria finite mixture model constrained optimization retirement decumulation probability ruin glidepaths financial crisis contact financial security purchased time distributions reinvested yields value time called adjusted price say total return time total compounding return inflation rate times real return time real price time value upon solving yields efficient market real prices governed geometric random walk grw value represents drift expected price increase sufficient compensate investor risk times random walk next value current value plus random normal step best predictor current value exponentiating sides grw model yields alternative form lognormal strict conditions normally distributed step justified decompose time series smaller segments say let independent identically distributed iid random variables rvs compounding real return times also iid rvs compounding real return time central limit theorem clt represent years days compounding yearly return product compounding daily returns lognormal assumption breaks iid ample research indicate correlation returns increases time length decreases also find compounding returns liquid securities used retirement finance often better fit normal probability density function pdf lognormal suggesting short term real compounding returns may iid see complication normal pdf generally considered tractable whereas lognormal pdf example diversified portfolio equities bonds real returns respectively compounding real return equity ratio unfortunately known pdf exists sum correlated lognormal rvs left approximate given see rook kerman implementation despite benefits using normal rvs model compounding real returns finance many practitioners researchers due primarily lack skewness heavy tail also normal pdf generate negative prices spectacular failure gaussian copulas financial crisis reinforces skepticism 
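The price-and-dividend bookkeeping behind the real compounding returns used throughout the paper, together with the two-asset portfolio return R_p = w*R_s + (1-w)*R_b, reduces to a few lines; the prices, dividends and inflation rates below are made-up numbers for illustration only.

```python
import numpy as np

def real_returns(prices, dividends, inflation):
    """Total-return then inflation-adjusted (real) compounding returns:
    1 + R_t = (P_t + D_t) / P_{t-1} / (1 + i_t)."""
    prices = np.asarray(prices, float)
    total = (prices[1:] + np.asarray(dividends, float)) / prices[:-1]
    return total / (1.0 + np.asarray(inflation, float)) - 1.0

def portfolio_return(r_stock, r_bond, w):
    """One-period real compounding return of a two-asset portfolio with
    equity weight w: R_p = w*R_s + (1 - w)*R_b."""
    return w * r_stock + (1.0 - w) * r_bond

prices = [100.0, 104.0, 99.0, 107.0]
dividends = [2.0, 2.0, 2.1]               # paid over each holding period
inflation = [0.02, 0.03, 0.025]
rs = real_returns(prices, dividends, inflation)
print(rs, portfolio_return(rs, np.full_like(rs, 0.01), w=0.6))
```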
unfortunately reject normal pdf benefit finance models optimized using research motivated dilemma particularly desire skewed multimodal tractable pdfs model compounding real return diversified portfolio finance applications particular interest claim karl pearson moments lognormal pdf virtually indistinguishable mixture normals mclachlan peel literature review housing boom residential mortgages packaged sold securities price security present value future cash flows mortgage payments products partitioned tranches borrowers defaulted holders suffered first followed midlevel mackenzie spears cash flows timings needed price tranche function loans defaulted time point default times modeled using exponential pdf probability default time returned cumulative distribution function cdf probability simultaneous defaults given times computed copula multivariate cdf depends correlation default times way estimate true correlation default times residential borrowers due lack data suggested translating copula simultaneous defaults equivalent expression using normal rvs correlation rvs pulled measure underlying debt instrument normal assumption reasonable sufficient data exists using correlations gaussian copula return probability simultaneous defaults specific times samples correlated exponential failure times simulated gaussian copula used value security loan pools held mortgages equity tranches acting like stock senior tranches like safe bond low interest rates led excess liquidity produced insatiable appetite pension sovereign wealth funds senior tranches yielded treasurys kachani fatal flaw system economists assumed decades financial data originates regimes correlations change crises hamilton since housing busts follow housing booms unwise hindsight measure correlation one value witnessed defaulttime correlations increase crisis senior tranches sold safe bonds behaved like stock devastated insurers underwritten trillion credit default contracts trillion blame crisis focused gaussian copula salmon takeaway normal returns appropriate finance nocera researchers practitioners warned using normal distribution vindicated paolella subsequently declared race find suitable multivariate pdfs financial applications provides overview mixture densities often used model economic regimes form basis research pdf develop multivariate normal mixture fixed normal mixture marginals tractable used discrete time retirement decumulation models intuitive understand detail needed fit generic univariate normal mixtures sets returns form multivariate pdf add correlations finally derive real compounding return diversified portfolio use within optimal decumulation models supporting proofs derivations full implementation included appendix preliminaries foundational concepts needed density model developed thru presented probability density cumulative distribution functions let continuous function function valid pdf casella berger cdf defined fundamental theorem calculus anton pdf derivative cdf note may defined subset usually depends vector parameters say may represent mean variance common expressions pdf include may also denoted indicate governed written single univariate pdf also applies vector rvs defined multivariate cdf multivariate pdf defined similar univariate case differentiating multivariate cdf yields multivariate pdf marginal pdf one say obtained integrating rvs finite mixture densities let continuous let functions satisfy univariate pdf conditions also let probabilities also satisfies pdf conditions called finite 
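To make the copula pricing story concrete, the sketch below samples correlated exponential default times through a Gaussian copula in the spirit described: correlated standard normals are mapped to uniforms with the normal CDF and then to default times with the inverse exponential CDF. The hazard rate, the single correlation value and the two-borrower setup are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm, expon

def correlated_default_times(n_samples, rate, corr, seed=0):
    """Gaussian-copula sampling of correlated exponential default times:
    draw correlated standard normals, push them through the normal CDF to
    get uniforms, then through the inverse exponential CDF to get times."""
    d = 2                                            # two borrowers for illustration
    cov = np.full((d, d), corr)
    np.fill_diagonal(cov, 1.0)
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(d), cov, size=n_samples)
    u = norm.cdf(z)                                  # uniform marginals
    return expon(scale=1.0 / rate).ppf(u)            # exponential default times

times = correlated_default_times(100_000, rate=0.02, corr=0.3)
print(times.mean(axis=0))                            # roughly 1/rate = 50 periods each
print(np.mean((times < 5).all(axis=1)))              # P(both default within 5 periods)
```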
mixture density titterington cdf let rth moment thus vector rvs multivariate mixture pdf satisfies multivariate pdf conditions set forth mixture pdf two distinct interpretations function accurately models pdf originates component density probability components labels parameter estimation unaffected interpretation underlying math parameter estimation adopt interpretation simplifies math component density may depend parameter vector let vector component probabilities iid observations drawn say objective estimate parameters estimated pdf fully specified used interpretation components meaning mixture pdfs model serially correlated data via components component considered state states time assume observation depends prior observation probabilities transitioning states stationary state transitions evolve time markov chain hillier lieberman define gxg matrix conditional probability state time given state time serially correlated data originating mixture pdf thus requires estimation transition probabilities addition interpreted unconditional probability state time used time mclachlan peel states mixture pdf called regimes process dependent observations transition states time termed regime switching hamilton noted underlying math differs interpretations interpretation components labels thus observations mixture pdf viewed coming pairs actual value component generated common replace vector component slot elsewhere time expressed see figure figure mixture data collection components labels note ztj representation applies dependent independent data mixture pdfs dependent data termed hidden markov models hmm rabiner juang state vector generally observed hidden thus critical task hmm model building determining state generated observation interpretation pdf incorporates observed ztg ztg ztg given regimes hamilton time series models research pdfs compactly example suppose returns financial security observed time appear symmetric around overall mean exhibit serial correlation include black swan frequency greater corresponding tail probabilities either normal lognormal pdf taleb intuitive tractable pdf returns tukey contaminated normal pdf huber mixture two normals equal means unequal variances used thicken tail normal pdf density larger variance generates outliers smaller proceed intuitively partitioning returns two sets one holding holding outliers normal pdf fit set using mles example mixture weights set representing returns replacing parameters estimates suppose returns originate common pdf gray swan pdf outliers overall mean estimated see figure figure example tukey contaminated normal pdf labeling components using interpretation note mixture normal pdfs generally normally distributed symmetric exception note caution mistake mixture pdf figure clearly normally distributed practice normal mixture model pdf tractable mclachlan peel example mixture two normals closely approximate observed outliers called gray swan events black swans extreme events occurred taleb discourages use normal pdf refers lognormal pdf bad compromise fama discussed using mixture pdfs explain stock prices taleb also used mixtures practice add heavy tail normal pdf alternative lognormal pdf procedure illustration methods common prior advent algorithm see johnson seen mixture pdfs almost always calibrated today using either algorithm gradient procedure lognormal pdf titterington lastly since mixture pdf figure model black swan events may deemed unsatisfactory solution could add component labeled black swan say small probability adjust 
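The Tukey contaminated normal discussed here, a two-component normal mixture with a common mean and one inflated variance, is easy to evaluate and sample from, which is what makes it a convenient gray-swan model; the mixing weight, variance inflation factor and return parameters below are arbitrary.

```python
import numpy as np
from scipy.stats import norm

def contaminated_normal_pdf(x, mu, sigma, lam=0.05, k=3.0):
    """f(x) = (1 - lam) * N(mu, sigma^2) + lam * N(mu, (k*sigma)^2);
    the wide component thickens the tails and generates the outliers."""
    return (1 - lam) * norm.pdf(x, mu, sigma) + lam * norm.pdf(x, mu, k * sigma)

def contaminated_normal_sample(n, mu, sigma, lam=0.05, k=3.0, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.random(n) < lam                       # latent component labels
    return np.where(z, rng.normal(mu, k * sigma, n), rng.normal(mu, sigma, n))

x = contaminated_normal_sample(10_000, mu=0.05, sigma=0.15)
print(x.mean(), x.std(), contaminated_normal_pdf(0.0, 0.05, 0.15))
```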
central limit theorem clt let large per central limit theorem freund words sum iid rvs pdf approximately normal denoted large samples unfortunately rule always apply mixture pdfs consider let iid sample size unlikely draw observation leaving violation clt ensures sample generated observations originating value within clt pdf mean thus valid value repeating process times produce iid sample size clt pdf values clt pdf caution therefore advised invoking clt rvs mixture pdfs example let daily real index trading amount invested january grow annual real compounding return clt iid definition december real dollars normal per would lognormal making historical collection annual real index returns lognormal random sample hypothesis tested rejected using test rook kerman one explanation daily returns independent claim supported academic studies index returns baltussen another daily returns identically distributed daily returns originate mixture pdf large enough clt approximation density future observation certain assumptions pdf future value derived observed let compounding returns financial security time unknown suppose observed value reflecting unobserved next value note approach considered retirement plan full sample standard daily real compounding return approximated total daily return calculated end value start value value annual inflation rate short term daily index returns historically exhibited positive serial correlation baltussen suggest ubiquity index products may eliminated signal even turned negative sample mean independent rvs ross since also independent thus form follows student degrees freedom ross denoted consequently degrees established sample variance future observation pdf scaled student distribution centered sample mean simulation study would preferred pdf evaluating financial plan simulating student degrees freedom simulate normal distribution first generate random value random value numerator generate square sum additional values construct finally use ratio definition given law kelton pdf future value derived differentiating cdf see cdf future observation given freund gamma function finally pdf future observation derived using chain rule pdf future values derived similarly multivariate pdf future values product univariate pdfs assuming independence note technique described breaks distributions approximations clt involved therefore valid sample size normal numerator denominator must independent rvs ross asset added let compounding returns two uncorrelated financial securities times interest modeling future unobserved value diversified portfolio using securities say follows thus asset suppose quantity exists function known pdf function solved numerator would suggest solution behren problem famous unsolved problem statistics casella berger maximum likelihood estimation let continuous rvs observed value likelihood pdf observed value vector holds unknown parameters likelihood value probability however similar measure since values higher likelihoods likely observed likelihood function pdf written function extending entire sample multivariate pdf evaluated written likelihood entire sample appealing estimate maximizes value denoted called maximum likelihood estimator mle since natural log function increasing maximizing equivalent problems often easier deal latter rvs independent mles possess many desirable statistical qualities consistency efficiency asymptotic normality invariance functional transformations thus often considered gold standard parameter estimation qualities however 
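A small sketch of the predictive density derived here: for an i.i.d. normal sample with unknown mean and variance, the next observation follows a Student-t with n-1 degrees of freedom, centered at the sample mean and scaled by s*sqrt(1 + 1/n), and the t variate itself can be simulated from normals as described. The sample of returns below is synthetic.

```python
import numpy as np
from scipy import stats

def predictive_t(sample):
    """Predictive distribution of the next observation for an i.i.d.
    normal sample: Student-t with n-1 degrees of freedom, location equal
    to the sample mean, scale s * sqrt(1 + 1/n)."""
    x = np.asarray(sample, float)
    n = len(x)
    scale = x.std(ddof=1) * np.sqrt(1.0 + 1.0 / n)
    return stats.t(df=n - 1, loc=x.mean(), scale=scale)

rng = np.random.default_rng(0)
returns = rng.normal(0.06, 0.17, size=30)         # 30 observed annual real returns
dist = predictive_t(returns)
print(dist.ppf([0.05, 0.5, 0.95]))                # predictive quantiles

# simulating a t variate with 29 = n - 1 degrees of freedom from normals:
z = rng.normal(size=10_000)
chi = rng.normal(size=(10_000, 29)) ** 2          # squared, summed standard normals
t_draws = z / np.sqrt(chi.sum(axis=1) / 29)
print(np.quantile(t_draws, [0.05, 0.95]))
```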
depend certain regularity conditions satisfied hogg see finding parameter estimates statistics therefore enters purview engineering disciplines specialize constrained optimization techniques see likelihood function iid sample originating mixture pdf depends interpretation see let likelihoods interpretations respectively behren problem tests equal means two normal populations unequal unknown variances let independent samples make fully specified pdf unknown using goal collect data maximize either respect obtaining mle unfortunately interpretation maximized directly component indicator rvs ztg missing observable loglikelihood interpretation maximized directly however function unpleasant variety reasons example multiple local maximums thus finding stationary points guarantee mle desirable properties also unbounded normal components thus given value matter large always find setting see appendix maximizing mixture normal components therefore restrict parameter space region finite search local maximums declaring argmax restrict parameter space mixture pdf let variance component variance ratio constraint max given constant mclachlan peel good choice eliminate spurious maximizers optimal values lack meaningful interpretation occur one component fits small observations algorithm several researchers using process obtain mles studies missing data process observed possess many interesting properties became formalized proofs name dempster become one influential statistics papers ever written procedure termed algorithm generates mles follows let suppose observed missing iid time joint pdf depending marginal pdf may obtained integrating continuous summing discrete see algorithm require iid observations also used estimate parameters hmm models see results likelihood functions one includes missing data one computes mle begin initialize starting values compute maximize end taking expectations wrt use respect using expectations terminate stops increasing use threshold replaces missing constants result taking expectations therefore unknown value decrease iterating end local maximum nearest starting values given bounded region multiple local maximums use variety starting values take argmax value exhibit desirable qualities noted mclachlan peel used find mles wide variety models trick reformulate rvs appear missing applications mixture pdfs straightforward interpretation component indicator rvs ztg missing see multivariate pdf given corresponding marginal pdf used interpretation see discrete rvs likelihood function iid sample mixture pdf including missing data rvs corresponding likelihood without missing data uses expected value notice linear replaces ztg see thus zti computed using available data along current settings since zti discrete equals originates component otherwise given value completely known replaces zti resulting function unknown optimized initial values randomly using simulation mclachlan peel since strategy apply variety starting set may many local optimums values select argmax regularity conditions statistical tests models theorems built upon sets assumptions statistical inference assumptions called regularity conditions hogg appendix describe conditions usually case subset need satisfied given result regularity condition applies pdfs deals uniqueness namely pdf condition clearly holds pdfs since changing mean variance changes distribution however consider mixture pdf vector unknown parameters define note violating regularity condition general mixture pdfs satisfy regularity conditions caution 
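A compact EM implementation for a univariate normal mixture, following the E-step/M-step description above: responsibilities replace the missing component labels, then weights, means and variances are re-estimated from the weighted data. The variance floor is a crude stand-in for the variance-ratio constraint used to rule out the unbounded spurious maximizers, and the synthetic two-regime return data are for illustration only.

```python
import numpy as np
from scipy.stats import norm

def em_normal_mixture(x, g=2, n_iter=200, tol=1e-8, seed=0):
    """E-M for a g-component univariate normal mixture; returns the
    estimated (weights, means, standard deviations)."""
    x = np.asarray(x, float)
    rng = np.random.default_rng(seed)
    w = np.full(g, 1.0 / g)
    mu = rng.choice(x, g, replace=False)
    sd = np.full(g, x.std(ddof=1))
    prev = -np.inf
    for _ in range(n_iter):
        dens = w * norm.pdf(x[:, None], mu, sd)          # (n, g) weighted densities
        loglik = np.log(dens.sum(axis=1)).sum()
        resp = dens / dens.sum(axis=1, keepdims=True)    # E-step: E[z_tg | x_t]
        nk = resp.sum(axis=0)
        w = nk / len(x)                                  # M-step updates
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        sd = np.maximum(sd, 1e-3 * x.std(ddof=1))        # crude guard against the
                                                         # unbounded-likelihood degeneracy
        if loglik - prev < tol:
            break
        prev = loglik
    return w, mu, sd

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.07, 0.12, 900), rng.normal(0.07, 0.45, 100)])
print(em_normal_mixture(data, g=2))
```

In practice the fit is repeated from several random starting values and the run with the largest log-likelihood is kept, as noted above.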
advised using results requires likelihood ratio test lrt let arbitrary statistical model model parameters dropped example may linear regression model holding coefficients corresponding reduced model excludes predictor variables principle parsimony favors statistical models fewer parameters likelihood ratio test lrt checks significant difference likelihood value full model reduced version likelihoods significantly different reduced model fewer parameters preferred lrt tests equivalence likelihoods namely mles respectively test statistic given hogg note since adding parameters model reduce likelihood test statistic close zero since takes large positive value therefore rejected critical value select regularity conditions including see number parameters dropped create test type error probability define type error means rejected true interested using lrt test optimal components mixture pdf however since applicable regularity conditions violated mclachlan suggests approximating null distribution bootstrap hypothesis test data vector data vector estimate pdf mle originates mixture pdf say originates mixture pdf say mle estimate pdf value test statistic computed using corresponding likelihood functions pdfs implicitly assume generated sample vector value distribution simulated generating random sample fitting mixture pdf using mles sample size data vector repeating process times simulate values estimate distribution test approximated matrices let rvs compounding return financial assets given time point cov corr variances covariances correlations matrix written diagonals variances covariances note square symmetric every square symmetric matrix matrix qualify variances correlations must satisfy values must also make statistical sense example following strong positive correlation implies also strong positive correlation turns conditions square symmetric matrix valid matrix met constant vector wothke since condition simply means linear combination rvs variance matrix eigenvalues eigenvalues constants satisfy meyer since determinant singular implies equation solved thus come pairs length referred eigenvector note reveals must otherwise note determinant matrix product eigenvalues since ensures exists guttman matrix determinant thus also correlation allowed matrix two rvs perfectly correlated question needed beyond set elements note since use constants repairing broken matrix matrix said broken occur variety reasons including missing data estimation procedures iterative optimization methods encountered end analysis error repair broken matrix continue take latter approach perform ridge repair wothke ridge added multiplying diagonal constant start increase modified matrix say diagonals entire matrix divided revert diagonals back forces covariances approach along correlations increases diagonal matrix elements scaled matrix since useful derivatives matrices let xnt compounding returns financial assets times xit terms unknown parameters estimated mles collecting data see mles maximize multivariate pdf data multivariate normal pdf multivariate pdf entire iid sample guttman function unknown parameters given mle found maximizing wrt critical point vector derivatives equals would reveal maximum could found iteratively using gradient method also requiring said let aij nxn matrix positions positions let vij matrix positions follows via product rule definition thus denote holds elements determinant searle expressed using cofactor expansion respect row meyer noted iterative optimization methods source broken 
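In the notation above, the test statistic and its usual large-sample reference distribution are

\[
\Lambda \;=\; -2\bigl[\,\ell(\hat{\theta}_0)-\ell(\hat{\theta})\,\bigr],
\qquad
\Lambda \;\approx\; \chi^2_{r}\ \text{ under } H_0,
\]

where the log-likelihood is denoted by the usual ell, the two estimates are the MLEs under the reduced and full models, and r is the number of parameters dropped; H_0 is rejected when Lambda exceeds the upper critical value of the chi-square distribution. When the regularity conditions behind the chi-square approximation fail, as they do for mixture component counts, the null distribution of Lambda is instead approximated by the bootstrap procedure described above.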
matrices may step infeasible region vij row column removed thus matrix determinant function aij say chain rule dimensions symmetric let anton using since aij via position row column positions searle serial correlation let rvs compounding return financial asset time unconditional mean variance constants let respectively returns serially correlated assumed iid serially correlated data modeled using time series model autoregressive moving average process model order model order box autoregressive moving average arma model includes terms appropriate model data set identified signature autocorrelation acf partial autocorrelation pacf functions acf lag corr pacf lag correlation remaining accounting lag correlations namely model classic signature pacf cut abruptly acf decay lag pacf classic signature acf cut abruptly pacf decay lag pacf one patterns often emerges perhaps log power transforms usually max nau series fixed said stationary time series drifts either time trend stationary since fixed series often made stationary differencing box usually differences suffice differences referred random walk observation prior value plus random step governed alternative differencing trend fit regression time predictor model stationary residuals define backshift operator bkzt model written characteristic notation introduced since dealing nxn matrices simplifies code new nxn matrix position row column positions introduced taking derivatives polynomial process stationary roots magnitude box model stationary design since cov fixed covariance derived using model definition cov cov cov correlation corr consider process since centered observation rewritten repeating without end yields model thus works otherwise infinite defined alternate form model model characteristic polynomial root hence stationary justifying condition noted fixed conditional variance new observation given prior observations definition model using alternative model form unconditional variance given model imply new value depends prior value fact assumes correlation prior observations reason rarely needed practice see use alternative form note cov cov cov since iid correlation values time points apart corr cov exponential decline model unit root alternative form unconditional variance since stationary parameters model estimated least squares adjustments account serial correlation maximum likelihood equations method moments box likelihood function observed sample multivariate pdf data written function parameters using likelihood matrix diagonals time series mean reverts words future centered values come pdf fixed increases stationary time series thus mean revert due fixed rws mean reversion strength measured speed halflife required time process conditional mean let resulting iff otherwise equals last observed value process tsay mean reversion speed thus strengthens weakens intuitive since damping factor applied prior values fastest mean reverting process random sample without serial correlation implies process generates instantly reverts mean finite pdf process approaches mean revert thus process serial correlation strengthens mean reversion weakens vice versa intuitive since mean reversion strengthens new values increasingly depend mean serial correlation strengthens new values increasingly depend prior value data time series retrieved tested serial correlation see table appendix conclusions draft subject relevant diagnostics box correct significance level type error testing multiple simultaneous hypotheses control error rate type error 
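For reference, the textbook AR(1) facts underlying the mean-reversion discussion above are, for the centered process z_t = phi * z_{t-1} + a_t with white-noise shocks of variance sigma_a^2 and |phi| < 1,

\[
\operatorname{Var}(z_t)=\frac{\sigma_a^2}{1-\varphi^2},\qquad
\operatorname{Corr}(z_t,z_{t-k})=\varphi^{k},\qquad
\text{half-life}=\frac{\ln(1/2)}{\ln|\varphi|},
\]

so the autocorrelation decays exponentially in the lag, and mean reversion is fastest as |phi| approaches 0 (a random sample) and weakest as |phi| approaches 1 (a random walk).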
discussed market efficiency default test formed serial correlation serial correlation evidence needed reject market efficiency liquid assets may suggest profitable arbitrage trade retirement advisors predict path future prices type error occurs reject true falsely reject efficiency security table tests serial correlation time series data returns annual inflation rate real total real small cap equity total small cap equity real total real real gold return process data returns annual cash interest real real cap real small avg real avg total diff shiller cape ratio diff log shiller cape ratio avg real earnings process arma abbreviations random sample geometric random walk risk premium test serial correlation serial correlation test possible serial correlation near unit roots design let avg returns cov similar possible steps unit root test statistics respectively critical value reject hypothesis unit root note since cape ratio grw may plausible drifts grw preliminary model would arma detrended data arma goal test hypotheses family conclusions replicable confidence type error general independent tests type error test probability making type errors binomial thus probability making type errors independent tests tests conducted probability making type errors confident independent tests translates chance replicating family conclusions new data replicate family confidence type error adjust used test bonferonni adjustment uses require independent tests ensure type error westfall adjusted test table conclusions table lead table concerns current retirement finance dogma table questionable claims retirement finance literature claim shiller cape ratio annual used time markets sell cape ratio high buy low relative historical average hypothesis annual cape ratio behaves rejected random walks unit roots mean revert best predictor future value random walk current value drift use logs grw annual returns real total mean revert therefore serially correlated fit using autoregressive model serial correlation mean reversion opposites one strengthens weakens annual returns exhibit serial correlation random samples mean revert shiller cape ratio annual used predict average future linear regression predicting average future returns using cape values highly significant average return strongly serial correlated see table appendix fitting regression line points inappropriate result underestimated variances inflated type error rates commonly lead false claims predictors significant findings preliminary fitted linear models subject appropriate diagnostics see box design let average returns constructed cov cov similar claim made safe withdrawal rates swr retirement via regression cape exact concern arises constrained optimization linear programming linear program optimization problem objective constraints linear functions decision variables lps solved using simplex algorithm standard form decision variables linearly independent constraints jensen bard maximize subject cnxn feasible region objective function aknxn constraints lps come pairs dual equivalent minimization problem primary solved dual solved whereas primary decision variables constraints dual decision variables constraints since min max solved assuming simplex algorithm recognizes linear global solutions must occur corner points feasible constraint binds becomes given values decision variables constraints bind decision variables fixed constraints ways select constraints remaining decision variables must equal corner point ways occur total corner points thus 
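The multiplicity adjustment invoked above can be stated compactly: with m independent tests each conducted at level alpha, the probability of at least one Type I error in the family is

\[
P(\text{at least one Type I error}) \;=\; 1-(1-\alpha)^{m},
\]

so holding the family-wise rate at a target level requires the per-test level alpha = 1 - (1 - alpha_f)^{1/m} under independence, or the more conservative Bonferroni level alpha = alpha_f / m, which does not require independence.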
solution simplex algorithm partitions potential mxm full rank let corresponding vector partitions follows reflects decision variables fixed binding constraints objective becomes constant less linear combination decision variables coefficients linear combination negative problem unbounded solution simply increase corresponding decision variable increase objective solution coefficients must occurs decision variables must equal maximize constant less quantity maximized quantity equals finally constraints satisfied basic feasible solution bfs jensen bard simplex algorithm starts corner point feasible region cycles adjacent corner points bases defined decrease thus solved small number evaluations algorithm ends global maximizer note vanish basis formed setting decision variables equal solving remaining variables quantity random stochastic linear program slp solving slps using theory involved kall mayer simulation practical alternative example suppose set rvs since solution function must also say heuristic slp solution obtained generating random values say bki sample yields bfs say xni solution taken since satisfies follows simulated solutions therefore asymptotically satisfy constraint set expectation exceedingly large number problems formulated solved lps programs also closely approximated solution may may unique global feasible region referred polyhedron sits quadrant since decision variables technically incorrect useful visualization tool four people different heights holding flat board objective function plane stop sign laying flat ground feasible region quadrant highest point board inside sign directly corner sign interior point may reflect randomly occurring quantity supply demand temperature sales revenue profit etc quadratic programming objective function form surface maximized quadratic linear resulting optimization referred quadratic program bij said separable objective separates sum variable functions hillier lieberman separable approximated begin convert minimization problem noting max min write function posing issue since minimizing adds constant equivalent problems proceed partitioning equidistant constants define line segments trace values chosen allow replaced weight vector reachable weighted sum constants using namely function approximation sharpens increases adjacent weights used jensen bard see figure iii since objective min adjacent weights must used optimal solution compare blue dot objectives dashed red lines figure iii figure iii separable approximation minimization objective max minimize subject approximated following linear objective function feasible region constraints standard form requires constraints form linear express equality constraint using classical convex programming classical convex program ccp optimization objective either minimizing convex maximizing concave function set linear equality constraints jensen bard function convex iff convex concave max max thus maximizing concave function function equivalent problems since concave lovasz vempala ccp desirable property local optimums global optimums thus problem reduces finding local optimum following formulation interest minimize maximize subject convex objective function concave objective function feasible region aknxn constraints lps redundant constraints removed feasible region empty solution feasible region consists point also solution solution unconstrained solution solves constrained problem likely none hold left optimize rank solve problem locate critical point lagrangian defined incorporates 
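For the equality-constrained convex program above (maximizing a concave objective subject to the linear constraints Ax = b), the critical-point system can be written out explicitly. With multipliers attached to the constraints, the Lagrangian and its stationarity conditions are

\[
L(x,\lambda)=f(x)+\lambda^{\top}(b-Ax),\qquad
\nabla_x L=\nabla f(x)-A^{\top}\lambda=0,\qquad
\nabla_\lambda L=b-Ax=0,
\]

a nonlinear system in (x, lambda) that is solved by the Newton iteration on the bordered Hessian described next.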
constraints lagrangian new decision variables introduced called lagrange multipliers solution occurs namely system solved using newton method approximates linearly neighborhood solving yields setting approximation repeating process generates iterative solution solution would write rank objective making unconstrained function solving solve replace use determine derivatives convergence occurs symmetric matrix called bordered hessian given implies implies thus removed implies since convex concave rank thus thus must invertible note hessian newton method uses since redundant constraints implies therefore convex concave bordered hessian thus invertible border general programming general program nlp seeks minimize maximize smooth function subject necessarily convex concave generic problems several local optimums goal find best among little said nlp problems general optimization strategy depends nature problem cases lagrangian used find local optimums others metaheuristic tabu search simulated annealing genetic algorithm used hillier lieberman else fails generate random values evaluate constraint set satisfied keeping record optimal value better approach would generate random values satisfy constraints alternatively take random setting infeasible project point inside feasible region evaluate point random starts effective many local optimums exist strategies generating values developed specific problems mixture likelihoods see mclachlan peel copula modeling let continuous pdf cdf copula modeling based fact initially surprises intuitive upon reflection namely uniform proof straightforward let random variables cdf identically distributed cdf uniform consider maximizing generic likelihood function includes matrix constraint variances covariances must alternative discarding point broken matrix repair distribution let rvs compounding return financial securities given time point marginal pdf cdf respectively multivariate pdf cdf respectively see let uniform cdf since valid cdf derivative multivariate pdf namely see note relationship nelson multivariate pdf derived differentiating using chain rule multivariate pdf multivariate copula pdf product marginal pdfs literature refers copula copula density term indicates coupling multivariate marginal pdfs set rvs nelson independent also independent copula term therefore models dependence set rvs breakthrough marginal pdfs modeled separately quality appealing particularly field retirement finance modeling multivariate pdf set real compounding returns marginal return given security depend securities involved reason copula modeling standard multivariate pdf modeling finance many forms proposed example gaussianinduced copula model dependence mapping normal rvs since unknown parameters copula exist likelihood estimated mles building multivariate pdf choose candidates copulas taking one smallest error using empirical copula constructed via empirical cdf research propose generic tractable alternative copula modeling well suited retirement finance assuming marginal pdfs arbitrarily fit data set distribution corresponding cdf selected model dependence structure rvs example housing boom securitized mortgage products modeled default times named residential borrowers exponential rvs see would exponential pdf dependence structure chosen gaussian becomes multivariate pdf requires full specification say induced choice using transformations corresponding marginal cdf element following standard procedure transforming rvs freund multivariate pdf form corresponding pdf chosen 
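Two facts quoted above are worth writing out (standard results; see Nelson). First, the probability integral transform: if X has continuous cdf F, then U = F(X) is uniform on (0, 1), because P(U <= u) = P(X <= F^{-1}(u)) = F(F^{-1}(u)) = u. Second, differentiating Sklar's representation F(x_1, ..., x_n) = C(F_1(x_1), ..., F_n(x_n)) gives the density factorization

\[
f(x_1,\dots,x_n)=c\bigl(F_1(x_1),\dots,F_n(x_n)\bigr)\prod_{i=1}^{n}f_i(x_i),
\qquad
c(u_1,\dots,u_n)=\frac{\partial^{\,n}C(u_1,\dots,u_n)}{\partial u_1\cdots\partial u_n},
\]

so the copula density carries the entire dependence structure while each marginal pdf can be modeled separately.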
dependence structure jacobian term zero since transformation involves one diagonal terms derived follows using rules calculus treat ratio invertible thus copula density thus takes form induced dependence structure cdf gaussian univariate standard normal pdfs copula density used complete multivariate pdf sample data note sklar theorem guarantees pdf marginals using nelson copula parameters estimated mles covariance terms unknown since assumed univariate marginals fully specified tran caution straight forward optimization may work basic copula forms fail using commercial software complicated dependence structure giordoni solve problem similar one address different ways namely using normal mixture copula along normal mixture marginals also adaptive estimator attempts smooth differences normal mixture marginals implied marginals multivariate normal mixture fit directly data information criteria addition lrt fitted models statistics compared using metric called information criteria values quantify information lost model true state nature smaller values preferred indicate less information loss models many parameters said lack parsimony choosing amongst candidate models metrics attempt strike balance likelihood value parameters among widely used metrics akaike information criteria aic calculated akaike aic parameters parameters counted must free model constrained linearly counts parameter since estimating estimates thus parameters total parameters independent constraints term using mle see aic works well interest controlling type errors subsequent hypothesis tests tao tends fit small samples leading models lack parsimony thus corrected aic aicc proposed includes penalty increases parameters hurvich tsai parameters sample size parameters parameters sample size increases penalty decreases thus aicc aic whereas type error reject true type error accept false power hypothesis test reject false interest controlling power subsequent hypothesis tests bayesian information criteria bic recommended tao calculated schwarz bic parameters sample size determining optimal components finite mixture pdf see unsolved problem statistics often used heuristic compare mixture pdfs various sizes titterington caution theory validity often relies unmet regularity conditions probability ruin retirement let time points retirement horizon withdrawal made time last withdrawal time fixed random pmf defined derived using lifetables published individual group rook safe withdrawal rate swr heuristic suggests retirees withdraw real terms savings time point bengen retirement plan often couples withdrawal rate asset allocation securities involved let rti rti total real returns expense ratio proportion allocated security inflation rate respectively time total compounding return security time rti rti total compounding return security time rti rti real compounding return security time denoted rti rti since continuous governed univariate pdf say fti cov rki rsj times securities fti independent time drop time index become respectively marginal pdf security modeled using historical retirement plan succeeds fails based return diversified portfolio single security real compounding return portfolio time consequently function time via portfolio weights asset allocation set time derived linear transform univariate pdf say used retirement plan cases pdf easily derived normal others solution lognormal various methods exist derive pdf transformed one uses multivariate pdf random vector freund goal model using maintaining individual security 
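For completeness, the information criteria referenced above are, with the maximized likelihood, k the number of free parameters, and n the sample size,

\[
\mathrm{AIC}=2k-2\ln\hat{L},\qquad
\mathrm{AICc}=\mathrm{AIC}+\frac{2k(k+1)}{n-k-1},\qquad
\mathrm{BIC}=k\ln n-2\ln\hat{L},
\]

with smaller values preferred; the AICc correction vanishes as n grows, and a g-component univariate normal mixture contributes k = 3g - 1 free parameters because the component probabilities must sum to one.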
marginals subject may skewed multimodal generally normal allowing higher pdf moments aid determining plan success failure retirement surveys alternative metrics retiree surveys reveal concern running retiree runs money experiences financial ruin probability event occurring computed shared retiree given decumulation strategy probabilities bounded multiplying yields percentage bounded percentages ubiquitous metric universally understood example probability translates described words coin flip retirees make withdrawals time experience event ruin time denoted ruin iff time withdrawal successful account support completely emptied time withdrawal compliment event avoiding ruin time denoted ruinc occurs iff withdrawal time successful leaving balance define ruin event ruin occurring time let ruinc compliment tools described used detect serial correlation within securities time number referenced conclusion research despite extensively researched probability ruin metric universally accepted criticisms leveled directions common retirement ruin event could occur ruin ruin substantial difference retiree events criticism argues ruin binary outcome simplistic nuanced approach would consider varying degrees failure separate criticism ruin metric overly complicated better left actuaries insurance companies argument metric misunderstood abused financial planners lack ability properly calibrate computation fail understand inherent flaws impact covariances higher order pdf moments retirement strategy varying degrees failure primarily interested compliment ruin event success unlike retirement ruin event retirement success varying time attached decumulation model maximizes probability success also minimize probability ruin equivalent optimization problems model maximizes probability success may fact fail case reasonable try limit damage harlow brown introduce two downside risk metrics precisely approach uses fully stochastic discounting compute retirement present value rpv withdrawals cash flows decumulation plan rpv pdf estimated via simulation values rpv zero indicate account support withdrawals retirement ruin occurred strategy minimizes downside risk recognizes ruin section rpv pdf markedly different retirement plans similar failure probabilities goal make section pdf palatable possible ruin occurs mean standard deviation negative rpv values used minimization metrics optimization corresponding asset allocations found harlow brown report far lower equity ratios optimal context minimizing downside risk finding benefit intuitive generally think retiree bequest distribution spread variance increases equity ratio negative bequest rpv indicates retiree exhausted savings still alive milevsky takes opposing view ruin probabilities routinely abused misunderstood retirement planners advocates replacing altogether different metric namely portfolio longevity since investments volatile measures length time retirement portfolio lasts takes values discrete time pmf defined note successful withdrawals made ruin successful withdrawal made ruin exactly successful withdrawals made ruin finally withdrawals made successfully ruinc retirement success given horizon length therefore ruin ruinc see table iii mean median mode thus functions ruin probabilities flaws inherent construction propagate statistics table iii portfolio longevity pmf corresponding statistics portfolio longevity ruin ruin ruin mean ruin ruin ruin ruin ruin ruin ruin ruin argmax sum probabilities either direction stop corresponding median shown sum exactly 
median locate maximum probability corresponding mode shown mode table iii applies withdrawal rate ruin implies investment account lasts perpetuity including suggested financial advisors examine event implies retiree outlives savings probability event probability ruin compute using conditional probabilities follows generic sample space ruin retirement success replace sample space distributive property sets probabilities mutually exclusive events summed conditional probability uses fixed horizon length since drop prior step since compute using success probabilities noted definition probability ruin using random time horizon respect portfolio longevity probability said interest derived entirely using ruin probabilities shown probabilities appear computing ruin probabilities assume retiree made successful withdrawals times event ruin occurs rook thus ruinc occurs called ruin factor reflects retiree funded status time equal real withdrawals remaining rook event achieving retirement success using swr thus defined fixed random horizons respectively ruinc retirement success ruinc retirement success recall real compounding return diversified portfolio securities function asset allocation set time follows corresponding success probability fixed assuming independence across time ruin replaces vbl integration success probability random ruin ruin derived also uses events since ruinc ruinc ruin ruinc follows ruin ruinc ruin ruinc ruinc consequently ruin given values security weights time asset allocation terms estimated using simulation approximated recursively dynamic program see rook rook subsequently table iii populated probabilities usually given weights tasked deriving according optimality criteria rook derives weights minimize probability ruin stock bond portfolio using dynamic glidepath rook derives corresponding weights minimize probability ruin using static glidepath solutions assume normally distributed compounding returns noted many financial reject assumption primary purpose research extend models compounding returns noted investment accounts last forever regardless withdrawal rate see rook corresponding venn diagrams ruin occurs consequently event ruin eventually occur infinite time horizon unfortunately lognormal pdf defined values allow compounding returns zero pdf develop assigns probability event compounding returns prices securities take values zero iii univariate density modeling real compounding return diversified portfolio determines success failure retirement strategy assume independence across time use securities table random multivariate pdf real compounding return small cap equities developed research rvs representing returns multivariate pdf diversified portfolio using securities generates time real compounding return portfolio weights set time expenses similar copula modeling first build univariate pdfs respectively multivariate pdf built preserve marginals univariate pdfs fit finite normal mixtures using algorithm random starts variance ratio constraint eliminate spurious maximizers novel procedure introduced find optimal univariate components generally considered unsolved problem statistics forward portion tests components using bootstrapped lrt components maximum univariate components allowed forward procedure ends last significant test components backward portion tests components etc significant difference found ending procedure example backward test yields significant difference optimal components note andersondarling normality tests yield respectively indicating 
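In symbols, the random-horizon calculation above is the law of total probability applied to the horizon length N:

\[
P(\mathrm{ruin})=\sum_{T}P(\mathrm{ruin}\mid N=T)\,P(N=T)
=1-\sum_{T}P(\mathrm{success}\mid N=T)\,P(N=T),
\]

where P(N = T) is the life-table pmf for the retirement horizon and the conditional terms are the fixed-horizon ruin and success probabilities, so the random-horizon probability of ruin is determined entirely by the fixed-horizon quantities.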
normality assumption rejected securities fact assumed originate univariate normal distributions lost forthcoming analysis univariate pdfs univariate tests components use lrt bootstrap samples sample fits data component mixtures using random starts execution algorithm random starts use values generated nearest fitted mixture data fewer components lrt sample value thus requires executions retirement research uses serially correlated assets account dependence multivariate pdf otherwise valid doubts may raised would include strategies use certain cash equivalents see table univariate pdf figure annual real compounding returns histogram table univariate mixture pdfs annual real compounding returns number components aic aicc bic aic aicc bic aic aicc bic aic aicc bic aic aicc bic abbreviations variance ratio skewness kurtosis see aic aicc bic skewness normal pdf thus kurtosis normal pdf thus see hogg definitions johnson moment details note higher moments difficult interpret multimodal distributions figure lrt sampling distribution testing optimal univariate mixture components annual real compounding returns forward backward figure plots annual real compounding returns index histogram table fits returns univariate normal mixtures components tests optimal components done figure using procedure described displayed values rounded throughout unrounded values used calculations constraint significance level used test evidence lead rejection shown table insufficient evidence reject normality annual real compounding index returns bootstrapped lrt procedure agreement information criteria values aic aicc bic univariate normal pdf appropriate returns also agrees test normality univariate pdf small cap figure annual real small cap compounding returns histogram table univariate mixture pdfs annual real small cap compounding returns number components aic aicc bic aic aicc bic aic aicc bic aic aicc bic aic aicc bic abbreviations variance ratio skewness kurtosis see aic aicc bic figure vii lrt sampling distribution testing optimal univariate mixture components annual real small cap compounding returns forward backward annual real small cap compounding returns plotted figure table fits returns univariate normal mixtures components tests optimal components shown figure vii using procedure described constraint significance level test components yields significant pvalue rejected backward processing begins testing components also significant ending procedure normal mixture pdf therefore found appropriate returns coefficient indicates pdf positive skew indicates positive excess kurtosis implies heavier tail normal distribution fitted marginal pdf evidently skewed multimodal table univariate pdf figure viii annual real total compounding returns histogram table univariate mixture pdfs annual real total compounding returns number components aic aicc bic aic aicc bic aic aicc bic aic aicc bic aic aicc bic abbreviations variance ratio skewness kurtosis see aic aicc bic figure lrt sampling distribution testing optimal univariate mixture components annual real total compounding returns forward backward annual real total compounding returns shown figure viii fit univariate normal mixtures table optimal components found via procedure detailed figure constraint significance level test components yields significant rejected backward processing begins testing components repeat test performed normal mixture pdf thus appropriate returns coefficient indicates pdf positive skew indicates positive excess kurtosis implies heavier tail 
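One common way to summarize the bootstrapped test used above: let Lambda_obs be the observed statistic for g versus g + 1 components, and let Lambda*_1, ..., Lambda*_B be the statistics obtained by refitting both mixture sizes to each of B samples simulated from the fitted g-component pdf; the Monte Carlo p-value is then approximated as

\[
p\;\approx\;\frac{1+\#\{\,b:\ \Lambda^{*}_{b}\ge\Lambda_{\mathrm{obs}}\,\}}{B+1},
\]

and g components are rejected in favor of g + 1 when p falls below the chosen significance level (the exact tail-counting convention in the paper's implementation may differ slightly).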
normal distribution fitted marginal pdf evidently skewed multimodal table univariate pdf summary security small cap table vii full univariate pdf parameterization component note use estimates along historical data reproduce values tests optimal components use large values common defaults forwardbackward testing procedures variance ratio constraint used eliminate spurious optimizers finding mles algorithm reducing value adding new constraints either probabilities means alter pdf shapes example increasing decreasing see figure add constraint simply discard random start violates skewed unimodal pdfs lognormal available optimize constrained objectives interpretation components assigned labels data suggest annual real compounding returns originate one regime small cap equity returns originate namely dominant pdf generates returns including outliers low pdf generates originating high pdf regimes add shoulders mean pdf evidently heavier tailed normal annual real total compounding returns originate regimes dominant pdf generating returns pdf regime generating note returns averaged dominant regime overall historical mean consequently widespread claims current low yields invalidate retirement heuristics rule met skepticism figure univariate mixture pdfs probability weighted component regimes multivariate density modeling covariances multivariate pdf built two steps first dependence introduced without correlations result starting point final step estimating correlations interpretation mixture observations come labeled regimes seen observation multivariate pdf viewed originating combination regimes regimes governing respectively thus regimes govern multivariate pdf parsimonious multivariate pdf may call eliminating combinations produced data goal perform regime selection optimal manner accounting sample size total multivariate pdf parameters must also preserve marginal pdfs derived multivariate regimes mixture pdf interpretation regimes produce observations estimating pdf parameters mixture estimate probability observation given regime let ztg time observation component indicator rvs univariate mixture pdf assume parameters estimated observed value time probability produced component since estimated also estimated quantity computed observation bayes decision rule assigns observation component largest probability considered optimal allocation scheme mclachlan peel let nijk observations regimes rvs respectively using assignment rule probability observation originates given multivariate component estimated true unknown probability see figure figure multivariate regime combinations estimated probabilities note observations trivariate exist cell correlations may change across regimes example correlation may strongly positive one cell strongly negative another see independent multivariate pdf product marginals marginals fit mixtures product multivariate normal mixture yields fitted marginals independence probability multivariate observation given regime product marginal probabilities basis assume independence figure shows multivariate component probabilities estimated data namely dependence multivariate mixture pdfs takes forms component within component component dependence modeled via probabilities within component dependence modeled via covariances must preserve marginals however estimates figure thus infeasible example using probability dominant regime equals table whereas using data figure conway gives conditions guarantee univariate bivariate pmfs enforced probabilities contingency table bivariate pdf 
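In symbols, the assignment step used above to estimate the regime-combination probabilities is

\[
\hat{z}_{tg}=\frac{\hat{\pi}_g\,\phi\!\left(x_t;\hat{\mu}_g,\hat{\sigma}_g^2\right)}{\sum_{h}\hat{\pi}_h\,\phi\!\left(x_t;\hat{\mu}_h,\hat{\sigma}_h^2\right)},\qquad
g^{*}(t)=\arg\max_{g}\hat{z}_{tg},\qquad
\hat{p}_{ijk}=\frac{n_{ijk}}{n},
\]

where each observation t is assigned, security by security, to the marginal component with the largest posterior probability, n_{ijk} counts the observations assigned jointly to marginal components (i, j, k), and the estimate on the right is the empirical probability of the corresponding multivariate regime.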
interest suggest deriving trivariate pdf multivariate regime selection via linear programs lps goal parsimoniously model component dependence preserving marginal pdfs two approaches presented become initial solutions final step estimating covariances updating component probabilities limit discussion problem hand however methods completely generic easily extendable arbitrarily dimensional problems minimize maximum distance minimax define maximum distance true figure max estimated component probabilities values minimize preserve marginal pdfs would interest namely solving minimize subject max probabilities sum constraints minimizing maximum set minimax objective linear constrained optimization problem formulated solved note constraints redundant since probabilities sum since component thus dropped since maximum set must set elements becomes minimize subject constraints constraints include absolute values however note iff promote parsimony penalize objective cell figure containing observations yields probability ensure occurs needed feasibility constraints nijk penalty arbitrarily large nijk xijk must satisfy constraint suffers penalty final formulation minimize subject nijk xijk xijk constraints known constants solved using techniques solution yielding minimization objective minimum sum squared distances define sum squared distances true figure values estimated component probabilities minimize preserve marginal pdfs interest namely solving minimize subject constraints objective quadratic separable therefore approximated shown decision variable appears one term convex shape similar figure iii horizontal axis range converted replacing quadratic terms connected line segments horizontal axis partitioned contiguous sections value reachable linearly term penalty applied thus becomes subject decision variables minimize approximated nijk xijk xijk replaced constraints known constants note minimization objective ensures adjacent use solve obtain yielding objective lps solved customized current exercise modeling multivariate pdf concepts easily extendable collection securities code supplied appendix models arbitrary number securities purpose twofold find feasible solution initialize final step eliminate many unnecessary multivariate components possible larger problems step eliminate multivariate cells note dependence introduced without estimated covariances multivariate pdfs using solutions product marginals specifically component dependence introduced using data guide components occur together frequency note randomly reordering returns assets separately would change results however would change probability estimates unknown figure sets initial values feasible preserve marginals multivariate density modeling let rvs real compounding return financial securities serially correlated historical data random sample time say xtj xtj rtj rtj real return marginal density modeled normal mixture components multivariate pdf modeled normal mixture cov seen figure defines combination univariate components multivariate pdf maintain fitted marginals must satisfy unknown parameters given historical sample known parameters mles unknown parameters found solving following general nlp see maximize subject eigenj diag uphold marginals matrices multivariate pdf decision variables known constants multivariate mixture maximized respect probabilities elements covariances linear probability constraints maintain marginals covariance constraints ensure matrices see researchers found mixture pdf parameters estimated conditionally 
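A cleaned-up statement of the minimax selection problem described above, in the three-security notation and with the penalty on empty cells omitted for brevity, is

\[
\min_{x,\,z}\; z\quad\text{s.t.}\quad
x_{ijk}-\hat{p}_{ijk}\le z,\;\;\hat{p}_{ijk}-x_{ijk}\le z\;\;\forall\, i,j,k,\qquad
\sum_{j,k}x_{ijk}=\hat{\pi}^{(1)}_{i},\;\;
\sum_{i,k}x_{ijk}=\hat{\pi}^{(2)}_{j},\;\;
\sum_{i,j}x_{ijk}=\hat{\pi}^{(3)}_{k},\qquad
x_{ijk}\ge 0,
\]

so the fitted multivariate component probabilities x_{ijk} stay as close as possible, in the worst-case sense, to the empirical regime frequencies while exactly reproducing the component probabilities of each fitted univariate mixture.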
first deriving probabilities holding parameters constant estimating remaining parameters holding probabilities constant repeating convergence ecme algorithm one approach makes use actual incomplete loglikelihood see liu rubin mclachlan krishnan technique employ optimizing define indicator function univariate component security exists multivariate component otherwise multivariate pdf optimization step optimize wrt probabilities holding covariances constant maximize subject uphold marginals constraints known constants since linear functions concave log concave function concave sum concave functions concave concave boyd vandenberghe marginals enforced independent linear constraints thus ccp local optimums global optimums critical point lagrangian yields maximum jensen bard derivatives derivatives derivatives wrt lagrange multipliers zero constraints enforced dropping components step optimize wrt covariances holding probabilities constant maximize subject eigenj matrices decision variables known constants maximizing multivariate function respect variance components difficult general nlp may multiple local optimums saddle points zero gradient well boundary optimums gradient searle recommend procedure based good starting point gradient helps inform direction hessian step size levenberg suggests modification newton method iterates adjusts step size climbing angle marquardt derived similar modification iterating considered optimal compromise newton method often diverges gradient converges slowly class techniques based approaches since published see gavin overview designed find estimates models constrained minimization problem searle note also useful finding mles variance components gradient ascent newton method failed optimize reasons stated approach taken namely iterations defined max parameter large random generated iteration select randomly among top performers varying prevents divergence large mimicking gradient ascent small newton method ensures nearest maximum found relative informed start scanning nearby regions better values iterating infeasible region addressed performing ridge repair offending matrix exact derivatives derived wrt covariance terms first derivatives gradient terms covariance terms multivariate mixture pdf securities using chain rule along results derived derivatives scalar second derivatives hessian terms follows derivatives wrt terms case wrt covariances different multivariate components case wrt covariances multivariate component last term depends location diagonal derived piecewise otherwise diagonal diagonal whereas methods approximate hessian approach use exact derivatives supplied multivariate pdf maximized iterating objective across steps iterations begin informed starts iterate stops increasing solutions maximums region around concave hessian negative algorithm ecme approach see liu sun apply newton method algorithm faster convergence see also liu rubin mclachlan krishnan ecme algorithm details eigenvalues border solutions may exist regions note eigenvalues computed matrix unstable used sparse matrices containing extremely large small diagonal entries often hessian derived may appear problematic via inspection however real symmetric thus eigenvectors orthogonal condition theory give bounds accuracy eigenvalues hessian subject error matrix column eigenvectors denoting euclidean norm condition matrix calculated eigenvalues suspect meyer select pdf min aic across informed starts conduct analysis surface properties maximizer considering also whether spurious 
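When an iterate steps into the infeasible region, the ridge repair described above can be carried out directly with Eigen, which the appendix code already uses; the following is a minimal sketch with illustrative names and thresholds, not the paper's routine.

// Ridge repair of a broken (non-positive-definite) covariance matrix:
// multiply the diagonal by a constant c > 1, then divide the whole matrix by c,
// which reverts the diagonal and shrinks the covariances toward zero; repeat
// until the smallest eigenvalue clears the positive-definiteness threshold.
#include <Eigen/Dense>

Eigen::MatrixXd ridge_repair(Eigen::MatrixXd S, double eig_min = 1e-8, double c = 1.05) {
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> es(S);
    while (es.eigenvalues().minCoeff() < eig_min) {
        S.diagonal() *= c;   // add the ridge to the variances
        S /= c;              // rescale so the diagonal reverts; off-diagonals shrink by 1/c
        es.compute(S);
    }
    return S;
}

Because the variances are positive, the off-diagonal entries shrink toward zero with each pass and the loop terminates; in the paper's implementation the multiplier is kept between the minimum and maximum ridge-repair constants defined in the header.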
multivariate pdf approach used estimate probabilities covariances multivariate pdf result multivariate mixture pdf see table viii supplements univariate pdf estimates table vii univariate regime labels propagate multivariate pdf without doubt many disciplines would discard solution spurious maximizer since components model observations much improvement multivariate normal via finance however differs industries primary focus studying risk extreme events drive risk instead discarding outliers finance assigns labels gray black swans taleb conventional wisdom suggests example outliers cause bank fail retiree experience financial ruin thus must accounted also reason normal distribution may rejected financial research model accounts risk either explicitly modeling low outliers adds density tail modeling high outliers shifts dominant regimes left along tails application latter occurs tradeoff either approach within regime variance shrinks observations separated kurtosis indicates whether mixture pdf heavier tailed normal pdf described aside predictive modeler may accuse memorizing training data suggest always possible model sufficient complexity models poor prediction best practice predicting partition data sets train models set best using report results applying chosen model unfortunately insufficient data use practice annual historical returns finance lastly using information criteria aic multivariate mixture table viii free parameters superior multivariate normal free parameters since solution would tighten marginal constraints lower variance ratio add constraints means probabilities table viii full multivariate pdf parameterization component det note use estimates table vii historical data reproduce multivariate values table viii estimates generated using procedure converged iterations step required step required respectively optimization convex converge global maximum starting solution instantly requires ridge repairs lands region final matrix repair occurs procedure finds concave region methodically climbs hessian matrix condition begins slowly increases ending perhaps revealing numerical instability hessian eigenvalue turns positive step ends saddle point step ends nearby boundary solution since borderline constraint enforce require matrix eigenvalues determinants table viii reveals det threshold note value driven higher solution lowering threshold however condition becomes large result unstable minor rounding produces non matrix nature border solution covariance rvs using law total expectation thus multivariate mixture pdf using multivariate pdf similar values common retirement research likely derived using unbiased sample estimator correlations correlation rvs derived corresponding sample mles produced iterating subject constraints maintain marginals lps solved set initial pdf structure thereby introduce dependence reason general mles variance components biased degrees freedom discounted estimated means procedure restricted maximum likelihood reml corrects estimating removing means unlikely perfectly align since variances covariances defined expectations averages skewed extreme values small positive estimate masks fact years correlation negative regimes another strongly positive regimes mixture modeling thus uncovers insights previously known within component used derive correlations witnessed financial crisis extreme value correlations invalidate models simple dependence structures hold times stress hindsight many blame crisis gaussian copula failure accurately model failure time 
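The moment calculation referred to above follows from the law of total expectation: for a multivariate normal mixture with weights pi_g, component means mu_g, and component covariance matrices Sigma_g,

\[
E[X]=\sum_{g}\pi_g\,\mu_g,\qquad
\operatorname{Cov}(X)=\sum_{g}\pi_g\!\left(\Sigma_g+\mu_g\mu_g^{\top}\right)-\Bigl(\sum_{g}\pi_g\,\mu_g\Bigr)\Bigl(\sum_{g}\pi_g\,\mu_g\Bigr)^{\!\top},
\]

so a small overall correlation can mask within-regime correlations that are strongly positive in some regimes and strongly negative in others, which is exactly the masking effect noted above.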
correlations derivative securities duress expect simple structures copula family perform better retirement research increasingly advocates use complex instruments improve outcomes coupled near universal use gaussian lognormal copula model dependence reveals situation sounds simulating multivariate mixture pdf let rvs compounding return financial securities given time point multivariate normal mixture pdf simulating values process first generate uniform random value say determine component observation regime else observation regime next generate value selected regime say covariances apply decorrelating transformation recall eigenvalues eigenvectors satisfy see also eigenvectors matrix orthogonal thus let make linear transform diagonal matrix variances independent normally distributed simulated individually sample retirement plan let rvs compounding return financial securities given time point developed using historical sample accounts gray swans observed outliers defined evaluating retirement strategy using multivariate pdf produce black swans unobserved outliers subjective determination accomplished seeding historical data extreme events note model proposed fit data whereas normal lognormal pdf proposed retirement strategy subjected problem avoided backtesting given strategy retirement start year bernoulli highly correlated nearby years correlation rarely ever accounted retirement research real compounding return diversified portfolio assume retiree holds securities let price value security holdings times respectively time total return security compounding return real compounding return security inflation rate times solving yields real price value security holdings time security includes expense ratio paid cost time price value compounding return real compounding return adding holdings denote total account values times respectively total return account times compounding return time real compounding return account must satisfy real account value time combining proportion invested security time asset allocation proves modeling returns rvs serially correlated time index dropped however remains function time asset allocation real compounding return portfolio using let annual expenses small cap equities define asset allocation set time applying real compounding return linear transform namely expenses asset allocations constants decision variables using means variances table vii probabilities covariances table viii let univariate pdf time satisfies cdf define sample space event mutually exclusive normal pdf following mean variance pdf univariate gaussian mixture pdf derived various pdfs using shown figure xii feature asset allocations dominant one securities skewed evidently mostly normal pdf approaches univariate shape proportion security increases see figure xii pdf real compounding return diversified portfolio notes expenses asset allocations minimum variance portfolio occurs asset allocation marginal distributions set thus matching small cap set thus matching set thus matching vii retirement portfolio optimization dynamic retirement glidepaths asset allocations adapt time changes either retiree funded status market conditions whereas static glidepaths fixed allocations retiree set forget glidepaths often considered context safe withdrawal rates swr see optimal glidepath outperforms others respect measure rook derive dynamic static glidepaths minimize probability ruin using swr respectively models use normally distributed returns assumption many practitioners researchers reject due lack 
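The simulation recipe above can be sketched directly with Eigen and the standard random-number facilities; the routine below illustrates the decorrelating-transform draw with illustrative names, and is not the paper's code.

// Draw one observation from a multivariate normal mixture: pick a regime by a
// uniform/discrete draw, then map independent standard normals through
// V * sqrt(Lambda) (the eigendecomposition of that regime's covariance) and add the mean.
#include <Eigen/Dense>
#include <random>
#include <vector>
#include <cmath>

Eigen::VectorXd draw_mixture(const std::vector<double>& pi,
                             const std::vector<Eigen::VectorXd>& mu,
                             const std::vector<Eigen::MatrixXd>& Sigma,
                             std::mt19937& rng) {
    std::discrete_distribution<int> pick(pi.begin(), pi.end());   // select regime g w.p. pi[g]
    const int g = pick(rng);
    const int n = (int)mu[g].size();

    // Eigendecomposition Sigma_g = V * Lambda * V' (Sigma_g symmetric positive definite).
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> es(Sigma[g]);
    Eigen::MatrixXd V = es.eigenvectors();
    Eigen::VectorXd lam = es.eigenvalues();

    // Independent standard normal draws, scaled by sqrt(eigenvalues), rotated by V.
    std::normal_distribution<double> stdnorm(0.0, 1.0);
    Eigen::VectorXd z(n);
    for (int i = 0; i < n; ++i) z(i) = std::sqrt(lam(i)) * stdnorm(rng);
    return mu[g] + V * z;
}

Repeating the draw for each time point and forming the allocation-weighted portfolio return net of expenses gives the simulated paths used to evaluate a decumulation strategy.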
skewness heavy tail purpose research extend models returns skewed generic complexity multivariate pdf develop also allows retirement plan using data seeded extreme events see optimal dynamic retirement glidepaths dynamic glidepath minimizes probability ruin using swr derived rook via dynamic program fixed random time horizons applies individual group value function defined dimensions discretized see construct corresponding grid manages solved cdf tractable lognormal returns cdf intractable although methods exist approximate see rook kerman one implementation derived annual real compounding return diversified portfolio using small cap equity returns cdf function normal cdf considered tractable near exact approximation routines readily available source code solve portfolio problem supplied rook example swr plan using small cap stocks bonds optimized using incorporating additional securities optimal dynamic glidepath problem optimal static retirement glidepaths expressions probability ruin supplied fixed random time horizons minimizing probability ruin maximizing probability success equivalent optimization problems using ruin fixed time horizon optimal static glidepath found maximizing respect asset allocation solved portfolio rook using gradient ascent newton method fixed random time horizons assume used since probability success function derivative wrt ruin derivatives wrt ruin derivatives wrt ruin term sum computed arbitrary level precision see rook includes relevant source code corresponding dps would use cdfs univariate normal mixture pdfs also generated using simulation estimates expressions viii conclusion retirement decumulation models increase sophistication financial firms may guarantee success retiree could pay percentage funds remaining death decumulation models statistical based assumptions incorrect render model unsound quantitative mortgage products sold housing boom priced using generated simulating default times gaussian copula hindsight normal assumption incorrect correlations change crisis since housing booms followed housing busts model assumptions incorporated economic regimes economy transitions pensions defined contribution plans quantitative retirement products proliferating present industry built gaussian lognormal foundation also fails incorporate regimes crises modeling returns correlation purpose research develop multivariate pdf asset returns suitable quantitative retirement plans model fits set returns however curse dimensionality limit number securities propose multivariate mixture fixed mixture marginals using normal components model motivated claim lognormal pdf virtually indistinguishable mixture normals whereas lognormal pdf intractable regard weighted sums normal mixture lognormal pdf justifiable returns iid pdf given sample size typical retiree could endure several market crashes expect historical sample represent possible extremes stress test retirement plan subjecting return pdf fit historical sample seeded black swan events normal lognormal pdf unhelpful regard neither accommodate outliers univariate multivariate pdfs developed fit historical returns closely valid criticism models memorize training data project poorly future adjusting variance ratio constraint bootstrapping marginal lrts loosen fit used relatively high values larger variance ratio constraints lrt lead marginal peaks components since multivariate pdf maintains marginals fitting marginals propagates multivariate pdf user sets values desired fit multivariate pdf steps first generic mixture 
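For reference, the distribution of the diversified-portfolio return that both glidepath optimizations above work with is itself a univariate normal mixture: if the vector of real compounding security returns follows the fitted mixture with weights pi_g and components N(mu_g, Sigma_g), and w is the asset-allocation vector with total expense e, then

\[
w^{\top}R_t-e\;\sim\;\sum_{g}\pi_g\,N\!\left(w^{\top}\mu_g-e,\;w^{\top}\Sigma_g\,w\right),
\]

so the per-period success probabilities needed by the dynamic program and by the static-glidepath derivative expressions reduce to weighted sums of univariate normal cdf evaluations.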
marginals derived using algorithm second multivariate pdf structure set using lps number multivariate regimes pruned penalizing objective includes components data lastly covariances added probabilities updated using ecme approach split convex general nlp optimizations nlp use approach simulates step size line search parameters iterating lastly linear transform multivariate pdf forms real compounding return diversified portfolio incorporated optimal discrete time retirement decumulation models using static dynamic asset allocation glidepaths references akaike hirotugu new look statistical model identification ieee transactions automatic control vol anton howard calculus analytic geometry edition john wiley sons new york baltussen guido sjoerd van bekkum zhi indexing stock market serial dependence around world working paper series https bengen william determining withdrawal rates using historical data journal financial planning vol border wanted know quadratic forms california institute technology http box george gwilym jenkins gregory reinsel time series analysis forecasting control edition englewood cliffs boyd stephen lieven vandenberghe convex optimization cambridge university press new york casella george roger berger statistical inference wadsworth series pacific grove conway deloras multivariate distributions specified marginals stanford university technical report dempster laird rubin maximum likelihood incomplete data via algorithm journal royal statistical society vol fama eugene behavior prices journal business vol iss freund john mathematical statistics edition englewood cliffs new jersey gavin henry method least squares problems duke university http guttman irwin linear models introduction wiley series probability mathematical statistics new york hamilton james new approach economic analysis nonstationary time series business cycle econometrica vol harlow keith brown market risk mortality risk sustainable retirement asset allocation downside risk perspective journal investment management vol hillier frederick gerald lieberman introduction operations research edition new york hogg robert joseph mckean allen craig introduction mathematical statistics pearson prentice hall upper saddle river huber peter john tukey contributions robust statistics annals statistics vol hurvich clifford tsai regression time series model selection small samples biometrika vol jensen paul jonathan bard operations research models methods john wiley sons hoboken johnson norman samuel kotz balakrishnan continuous univariate distributions volume edition john wiley sons new york kachani soulaymane housing bubble financial crisis united states causes effects lessons learned industrial economics ieor lecture notes columbia university kall peter janos mayer stochastic linear programming models theory computation edition springer series operations research management science new york law averill david kelton simulation modeling analysis series industrial engineering management science new york levenberg kenneth method solution certain problems least squares quarterly applied mathematics vol david default correlation copula function approach journal fixed income vol liu chuanhai donald rubin ecme algorithm simple extension ecm faster monotone convergence biometrika vol santosh vempala fast algorithms logconcave functions sampling rounding integration optimization proceedings annual ieee symposium foundations computer science mackenzie taylor spears formula killed wall street gaussian copula modeling practices 
investment banking social studies science vol issue marquardt donald algorithm estimation parameters journal society industrial applied mathematics vol mclachlan geoffrey bootstrapping likelihood ratio test statistic number components normal mixture appl vol mclachlan geoffrey thriyambakam krishnan algorithm extensions wiley series probability statistics new york mclachlan geoffrey david peel finite mixture models wiley series probability statistics new york meyer carl matrix analysis applied linear algebra society industrial applied mathematics siam philadelphia milevsky moshe time retire ruin probabilities financial analysts journal vol nau robert notes arima models duke university https nelson roger introduction copulas edition springer series statistics new york nocera joe risk management new york times magazine http paolella marc multivariate asset return prediction mixture models european journal finance vol iss giordoni paolo xiuyan mun robert kohn flexible multivariate density estimation marginal adaptation working paper series https rabiner lawrence juang introduction hidden markov models ieee assp magazine rook christopher minimizing probability ruin retirement working paper series http rook christopher optimal equity glidepaths retirement working paper series https rook christopher mitchell kerman approximating sum correlated lognormals implementation working paper series https ross sheldon introduction probability statistics engineers scientists edition elsevier new york salmon felix recipe disaster formula killed wall street https schwarz gideon estimating dimension model annals statistics vol searle shayle george casella charles mcculloch variance components wiley series probability mathematical statistics new york taleb nassim black swan random house new york tao jill mixed models analysis using sas system sas institute cary titterington smith makov statistical analysis finite mixture distributions wiley series probability mathematical statistics new york tran paolo giordani xiuyan mun robert kohn mike pitt estimators flexible multivariate density modeling using mixtures journal computational graphical statistics vol tsay analysis financial time series edition john wiley sons hoboken westfall peter randall tobias dror rom russell wolfinger yosef hochberg multiple comparisons multiple tests using sas sas institute cary wothke werner testing structural equation models chapter nonpositive definite matrices structural modeling sage publications newbury park data sources inflation rate cash interest federal reserve bank minneapolis consumer price index link https accessed december real total real total real aswath damodaran updated january historical returns stocks bonds bills united states link http accessed december download http real small cap equity total small cap equity roger ibbotson roger grabowski james harrington carla nunes september stocks bonds bills inflation sbbi yearbook john wiley sons link http accessed december yearly shiller cape ratio january earnings january robert shiller online data robert shiller stock markets cape ratio link http accessed december download http gold returns kitco metals historical gold prices gold london fix dollars link http accessed december download http retiree surveys steve vernon december top retirement fears tackle cbs news moneywatch link http accessed january lea hart october american biggest retirement fear running money journal accountancy link http accessed january robert brooks july quarter americans worry running money 
retirement washington post link https accessed january prudential investments perspectives retirement retirement preparedness survey findings prudential financial link https accessed january emily brandon march baby boomers reveal biggest retirement fears news world report link http accessed january appendix source code appendix proof unbounded likelihood normal mixture pdf let mixture pdf suppose iid sample size observed normal component pdfs given vector unknown parameters defined obtain arbitrarily large value take single component dedicate one observation example consider component observation let arbitrarily small number set value recall initialize begin guaranteed increase iteration therefore choosing initialize arbitrarily large number making unbounded solution however meaningful dedicates component single observation similarly component trapped fitting small number closely clustered observations also leads high likelihood value due small variance referred spurious maximizer solution meaningful identified removed mle avoid manually evaluating spuriousness impose variance ratio constraint noted prevents single variance becoming small eliminates problems noted mclachlan peel appendix diagnostic plots time series diagnostic plots time series analyzed table presented time series includes plot uncentered raw observations well acf pacf lags total data points series box annual values used therefore series differenced averaged sources data found data sources section located references main paper test formed serial correlation serial correlation provided pacf plot preliminary conclusions behavior process supplied table followed discussion appropriate account multiplicity security returns annual compounding inflation rate real compounding total compounding real compounding small cap equity total compounding small cap equity real compounding total compounding real compounding real compounding gold return cash compounding interest real risk premium real cap risk premium real small risk premium avg real compounding avg total compounding diff shiller cape ratio diff log shiller cape ratio detrended avg real earnings appendix source code application accepts input files control file settings text file returns samples shown control file sets parameters assets time points respectively parameter sets random starts per cpu core execution algorithm fitting univariate pdfs value multiplied less components mixture fit example fitting univariate mixture setting results random since machine contains cores mixture pdf fit using algorithm random starts parameter maximum components univariate mixtures set implementation parameter samples use bootstrapped lrt within proposed framework determines optimal univariate mixture components asset final values lrt respectively returns file contains column compounding returns asset computer control file shown reproduce multivariate pdf minutes univariate multivariate parameter estimation results written screen file named folder supplied user upon application launch files must also exist within directory lastly folder contain error files issues encountered optimization must exist specified user using global constant errfolder also defined header file save time processing univariate multivariate pdf multithreaded implementation algorithm launches random start separate thread random starts end local optimums ecme algorithm takes small large step general direction steepest ascent maximizing multivariate function respect covariance parameters see line search stepping 
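In standard notation (the symbols below are mine, not the program's), the unboundedness argument in the proof appendix above can be written out as follows. Let the g-component univariate normal mixture be
\[ f(x;\psi)=\sum_{k=1}^{g}\pi_k\,\phi(x;\mu_k,\sigma_k^2), \qquad \phi(x;\mu,\sigma^2)=\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\Big(-\frac{(x-\mu)^2}{2\sigma^2}\Big), \]
with observed-data likelihood \(L(\psi)=\prod_{i=1}^{n} f(x_i;\psi)\) for an i.i.d. sample \(x_1,\dots,x_n\). Setting \(\mu_1=x_1\) and letting \(\sigma_1\to 0\) makes the factor for \(i=1\) grow like \(\pi_1/(\sqrt{2\pi}\,\sigma_1)\to\infty\), while every other factor stays bounded below by the contribution of the remaining components, so \(L(\psi)\to\infty\). Such a solution dedicates one component to a single observation and is not meaningful; the variance ratio constraint described above rules it out by requiring
\[ \max_k \sigma_k^2 \,/\, \min_k \sigma_k^2 \le c \]
for a user-chosen constant \(c\) (the stdratio header constant stores \(\sqrt{c}\), i.e. the allowed ratio of standard deviations).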
parameters simulated values yield largest increase used also multithreaded note ecme implementation uses actual incomplete loglikelihood function conditioning first probabilities holding covariances constant covariances holding probabilities constant steps application uses boost http lpsolve http eigen http external libraries freely available download terms conditions described given sites code consists header file functions provided gnu general public license see http full details copyright chris rook program free software redistribute modify terms gnu general public license published free software foundation either version license option later version program distributed hope useful without warranty without even implied warranty merchantability fitness particular purpose see gnu general public license details http filename summary header file included function call define needed libraries global constants inline functions function prototypes function prototypes see headers attached code definition description purpose parameters inline functions getndens supplied normal density function evaluated given point without constant getmdens supplied normal mixture density function evaluated given point without constant getpost probability given value belongs given component supplied normal mixture density getunimean mean given set observations getunistd standard deviation mle given set observations getllval value given observation set supplied univariate mixture density getvratio ratio variance supplied set standard deviations getemmean mean update algorithm along quantity needed algorithm variance update getmvndens supplied multivariate normal density function evaluated given vector values random variable getmvnmdens supplied multivariate normal mixture density function evaluated given vector values random variable getcovs accept multivariate normal mixture density return vector covariances starting element component element etc setcovs accept vector covariances insert values single component matrix specified user getcofm accept matrix convert one determinant equal cofactor matrix respect getidm accept empty square matrix populate identity matrix size chksum accept multivariate normal mixture density check component likelihood zero time points showvals display parameters means standard deviations component probabilities supplied multivariate mixture density standard output pragma include libraries include include include iostream include iomanip include string include fstream include random include include include include include include include using namespace std global constants const string const string const string const string const long double const long double const long double const long double const long double const long double name parameter control input file name input data file name output file folder written files mandatory constant representation root square root variance ratio constraint threshold convergence criteria large negative value use invalid value indicator arbitrarily large positive value const const const const const const const const const const const const const const long double long double long double long double int int int long double int int int int int int minimum multiple performing ridge repair broken matrix maximum multiple performing ridge repair broken matrix threshold minimum eigenvalue ensure positive definite matrix threshold determinant ensure positive definite matrix maximum iterations allowed per single optimization large constant use objective 
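Written out in my notation, the two conditional-maximization steps just described act directly on the observed-data log-likelihood of the multivariate mixture,
\[ \ell(\pi,\Sigma)=\sum_{t=1}^{T}\log\sum_{c=1}^{C}\pi_c\,\phi_d\!\left(\mathbf{x}_t;\boldsymbol\mu_c,\Sigma_c\right), \]
with the cell means apparently held at the values implied by the fitted univariate marginals. One step updates the cell probabilities \(\pi\) with every covariance \(\Sigma_c\) held fixed (a convex problem, apparently subject to the marginal-preserving constraint system assembled by getcmtrx); the other step holds \(\pi\) fixed and moves the covariance parameters uphill along an ascent direction, trying the simulated step sizes mentioned above and keeping whichever gives the largest increase in \(\ell\). Since each step only accepts improvements, a full pass leaves the log-likelihood no worse, in the spirit of the ECME scheme of Liu and Rubin cited in the references.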
function enforce feasibility constraint discretization level separable quadratic objective function solvelp minimum additive factor use hessian stepping maximum hessian steps per thread set ecme stepping multiply cores determine total threads used hessian stepping set ecme stepping beats randomly select ecme algorithm step set ecme stepping debug level output window details higher details processor cool time iterations msec value seconds number outer loops ecme processing times repeat ecme procedure inline functions inline long double getndens const long double val const long double const long double std return exp pow inline long double getmdens const long double val const int const long double inmdist long double int inmdist getndens val inmdist inmdist return dval inline long double getpost const long double val const int const long double inmdist const int cid return inmdist cid getndens val inmdist cid inmdist cid val inmdist inline long double getunimean const int const long double long double int return inline long double getunistd const int const long double const long double long double int pow return sqrt inline long double getllval const int const long double const int const long double inmdist long double long double int log getmdens inmdist return llval inline long double getvratio const int const long double stds long double int stds minstd stds maxstd maxstd stds return pow inline long double getemmean const int const long double const long double pprbs const long double cprbs long double ssq long double atrm ssq int ssq return cprbs inline long double getmvndens const eigen vals const eigen const eigen vcmi const long double sqrdet const long double picnst return exp vcmi inline long double getmvnmdens const int ucells const eigen vals const eigen mns const eigen vcmis const eigen prbs const long double sqrdets const long double picnst long double int ucells getmvndens vals mns vcmis sqrdets picnst return dval inline void getcovs const int ucells const eigen invcs eigen indvarsm int int ucells int int invcs int int invcs indvarsm inline void setcovs const int ucell eigen invcs const eigen indvarsm int int int invcs ucell int invcs ucell int int invcs ucell int int invcs ucell invcs ucell invcs ucell ucell inline void getcofm const int numa const int inr const int inc const eigen ine eigen inejk inejk int numa int numa inejk else inejk inline void getidm eigen int int int else inline void chksum const int const int nucmps const long double fvals int nucmps long double int fvals tmpsum cout endl error unique component likelihood zero time point endl eliminate corresponding component probability stage objective function endl component probability treated constant moved rhs constraint vector endl eliminated objective function code yet implemented endl exiting ecmealg endl exit inline void showvals const int const long double const int const long double inmdist ostream int ovar endl int ovar string prob inmdist mean inmdist inmdist endl function prototypes int fitmixdist const int const int const long double const int maxcmps const int nsmpls const int nstrts const long double long double fnlmdst string rdir void getrvals const int const int const long double inmdist long double rvls void getrprbsstds const int const long double const int long double prbs const long double mns long double stds void emalg const int const long double const int long double prbs long double mns long double stds long double llval const long double inmdist int rprms int ecmealg const int const long 
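A self-contained sketch of what the getndens, getmdens and getpost helpers above compute: the normal density, the mixture density, and the posterior component probability of an observation. The function names and the parallel-array interface here are mine; the program's own versions appear to drop a constant factor (per the "without constant" notes above), which cancels in the posterior but is included below.

#include <cmath>

const double kRoot2Pi = 2.5066282746310002;   // sqrt(2*pi)

// Normal density phi(x; mu, sigma^2).
double normal_pdf(double x, double mu, double sd) {
    const double z = (x - mu) / sd;
    return std::exp(-0.5 * z * z) / (sd * kRoot2Pi);
}

// Mixture density: sum over components of pi_k * phi(x; mu_k, sd_k^2).
double mixture_pdf(double x, int g, const double* pi, const double* mu, const double* sd) {
    double f = 0.0;
    for (int k = 0; k < g; ++k) f += pi[k] * normal_pdf(x, mu[k], sd[k]);
    return f;
}

// Posterior probability (Bayes rule) that observation x came from component j.
double posterior(double x, int g, const double* pi, const double* mu, const double* sd, int j) {
    return pi[j] * normal_pdf(x, mu[j], sd[j]) / mixture_pdf(x, g, pi, mu, sd);
}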
double const int numa const int nucmps const eigen cmtrx const eigen cvctr long double muprbs eigen mumns eigen muvcs int ucellids const string rdir long double thrdemalg const int const long double const int const int ing const long double inmdist const int outg long double outmdist void mapcells const int totcells const int numa int incellary const int curast const int incmps int cid int tmpary int getcell const int incellary const int totcells const int cmplvls const int numa void asgnobs int inary const int const long double const int const long double inmdst void getcor const int const long double const int asgn const int incellary const int cellid eigen eigen const int vcell int solvelp const int totcells const int numa const int incellary const int cmps const long double prbs const int ncellobs const long double cellprob double outprbs const int type void getcmtrx const int totrows const int nucmps const int sol const int numa const int ncmps const int vcids const int incellary const long double prbs eigen flhs eigen frhs long double gethesse const int const int inucmps const long double infvals const long double indnoms const eigen incmtrx const eigen incvctr eigen inhess eigen inlhs eigen inrhs long double gethessm const int eigen const int inucmps const int numa const long double infvals const long double indnoms const long double inprbs const eigen mumns const eigen const eigen einv const eigen ina eigen inhess void getgrade const int const int inucmps const long double infvals const long double indnoms const eigen inlhs const eigen inrhs const eigen indvars eigen ingrad void getgradm const int eigen const int inucmps const int numa const long double infvals const long double indnoms const long double inprbs eigen mumns const eigen const eigen einv const eigen ina eigen ingrad int long double getlfvals const int const int numa const int nucmps const eigen const eigen uprbs const eigen inmns const eigen invcis const long double insqs const long double inpicst long double denoms long double lfvals void wrtdens const string typ const int nucmps const int ucells const long double muprbs const eigen mumns const eigen muvcs ostream ovar void stephessm int long double const eigen eigen indvars const eigen ingrad const eigen inhess const eigen uprbs const eigen inmns const eigen invcs int ridgerpr const int ucell eigen long double mult copyright chris rook program free software redistribute modify terms gnu general public license published free software foundation either version license option later version program distributed hope useful without warranty without even implied warranty merchantability fitness particular purpose see gnu general public license details http filename function main summary function defines entry point console application drives analysis using major sections section contents control file see global constant cfile header program data file see global constant rfile header program read stored variables section build univariate mixture pdfs asset using algorithm random starts bootstrapped likelihood ratio test determining optimal components asset section combine univariate mixtures multivariate mixture pdf without disturbing marginals without correlations point multivariate mixture pdf estimated dependence without regime correlations multivariate mixture densities stage one type solved objectives building multivariate mixture pdf minimax minimum squared distance section use ecme type algorithm estimate correlations refine component probabilities maximum likelihood 
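The prototype list above also includes a ridgerpr routine, and the header constants refer to minimum and maximum ridge multiples and to eigenvalue and determinant thresholds for repairing a "broken" (non-positive-definite) covariance matrix. That routine's body does not appear in this appendix excerpt, so the following is only a generic Eigen sketch of the ridging idea it appears to name; the function name, tolerance and escalation schedule are all assumed for illustration.

#include <Eigen/Dense>

// Inflate the diagonal of a symmetric matrix until its smallest eigenvalue
// clears a threshold, so the matrix can serve as a covariance matrix.
// Illustrative only: the tolerance and the multiplicative schedule are guesses.
Eigen::MatrixXd ridge_repair(Eigen::MatrixXd S, double min_eig = 1e-8,
                             double factor = 10.0, int max_steps = 50) {
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> es(S);
    double lo = es.eigenvalues().minCoeff();
    double tau = min_eig;
    for (int step = 0; lo < min_eig && step < max_steps; ++step) {
        S += tau * Eigen::MatrixXd::Identity(S.rows(), S.cols());
        es.compute(S);
        lo = es.eigenvalues().minCoeff();
        tau *= factor;   // escalate the ridge if the matrix is still not positive definite
    }
    return S;
}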
objective step iterative procedure similar algorithm random starts used step covariances estimated step repeated necmes times optimal multivariate mixture pdf written output file steps completed necmes multivariate mixture densities fixed marginals one chosen based criteria higher likelihood higher information criteria final step left user use aic decision would account likelihood value well total parameters pdfs values written output file see global constant ofile header program along univariate mixture pdf details inputs input arguments processed function critical inputs supplied via file see global constant cfile header program data supplied via see global constant rfile header program addition inputs set global constants header file outputs function writes details fitting univariate multivariate mixture pdfs supplied assets screen file see ofile header program include int main int argc char argv local variables string rootdir int nboots ntpoints nassets nrstarts mcomps ucell long double alpha long double ofstream fout ensure folder provided set header file errfolder cout error folder provided problems found optimization written files folder endl use global variable errfolder header file set destination endl exiting main endl exit retrieve directory location setup files cout enter directory setup files reside endl cin rootdir boost rootdir cout endl read control file contains asset classes timepoints return data random starts maximizing likelihood univariate mixture multiple independent processing units components value independent processing units use random starts mixture random starts mixture random starts mixture maximum components appropriate data set fitting univariate mixture densities marginals sample size use bootstrapping lrt test statistic lrts components using selection algorithm alpha used forward selection alpha used backward selection ifstream getparams getparams nassets ntpoints nrstarts mcomps nboots alpha alpha else cout error could open file rootdir cfile endl exiting main endl exit instantiate arrays hold means standard deviations proportion weights mixtures ncomps new int nassets array hold components asset rtrn new long double nassets one return per asset time point time ntpoints prob new long double nassets one prob per asset component component sizes mcomp mean new long double nassets one mean per asset component component sizes mcomp stdev new long double nassets one stdev per asset component component sizes mcomp asgnmnt new int nassets one assignment per asset time point time ntpoints int nassets ncomps start component asset rtrn new long double ntpoints array returns asset returns rtrns rtrns etc asgnmnt new int ntpoints array component assignments asset density read returns file column returns asset store array ifstream getrtrns int ntpoints int nassets getrtrns rtrn ntpoints cout error file rootdir rfile ntpoints rows returns nassets assets fewer endl exiting main endl exit else cout error could open file rootdir rfile endl exiting main endl exit build large array temporarily hold optimal distribution asset int optmdst long double mcomps calculate component probability mean standard deviation normal mle version note mle standard deviation divides biased estimator int nassets initialize probabilities large array zero int mcomps optmdst find best fitting mixture distribution asset ncomps ntpoints rtrn mcomps nboots nrstarts alpha optmdst rootdir assign observation component likely one using bayes rule asgnobs asgnmnt ntpoints rtrn ncomps const long double 
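The information criteria mentioned above, and reported later for each fitted density, are the usual ones (smaller is better). With \(p\) free parameters, maximized log-likelihood \(\hat\ell\) and \(n\) observations,
\[ \mathrm{AIC}=2p-2\hat\ell, \qquad \mathrm{BIC}=p\ln n-2\hat\ell, \qquad \mathrm{AICc}=\mathrm{AIC}+\frac{2p(p+1)}{n-p-1}. \]
For a g-component univariate normal mixture \(p=3g-1\) (g means, g standard deviations, and \(g-1\) free probabilities), which appears to be what the fprms counter later in fitmixdist holds.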
optmdst build arrays hold optimal solution transfer arrays prob long double ncomps mean long double ncomps stdev long double ncomps int ncomps prob mean stdev write assignment observation time point corresponding components debug mode dbug cout endl string endl assignment observations time point set components using bayes decision rule endl string int ntpoints cout endl time setfill setw long long ntpoints int nassets cout asset asgnmnt cout endl endl mixture distribution fit assets assemble multivariate density start computing total cells need mapped dealing cube also compute total components summing across assets int int nassets ncells ncells ncomps totcmps totcmps ncomps initialize variables int int ncells int nassets int ncells allcells new int nassets call function map cell cube single list value mapcells ncells nassets allcells ncomps cellid tmpvals derive unique cell time point count obs per unique cell int int nassets int ntpoints int ncells long double long double ncells int ncells cellprb long double cellcnt dbug cout endl endl string endl assignment time point cell endl string endl int ntpoints dbug cout time setfill setw long long ntpoints int nassets tmpcombo cellasgn const int allcells ncells tmpcombo nassets cellcnt cellasgn cellasgn cellprb cellasgn cellasgn long double ntpoints build solve corresponding determines structure multivariate density using minimax minimum squared distance objective build array cell ids probabilities attached using method cells must derive correlation ensuring resulting matrix solvelp type defined become index values arrays hold results methods compare select better performer minimax objective minimum sum squared distances objective string type type minimax type minimum sum squared distances ssd int numcmps int ctr double double eigen eigen eigen eigen cout endl string endl running type type lps set initial multivariate density structure endl string endl endl int use approximate structure multivariate density estprbs double ncells numcmps ncells nassets const int allcells ncomps const long double prob cellcnt cellprb estprbs valcids int numcmps identify store unique cells probabilities int ncells estprbs valcids retrieve effective constraint matrix applies solution full row rank ensure rows components getcmtrx numcmps nassets ncomps valcids const int allcells const long double prob mlhs mrhs cout type done numcmps unique cells probabilities using type endl assemble multivariate densities using minimax minimum squared distance objectives starting points select one better fit iterating ecme algorithm convergence step replaced convex programming problem better fit means higher likelihood int int necmes int necmes long double long double necmes eigen eigen necmes eigen eigen necmes int necmes initialize arrays proper size based components resulting lps mprobs long double numcmps mmeans eigen numcmps mvcs eigen numcmps vcids int numcmps initialize type cell specific mean vector matrix appropriate size assets int numcmps mmeans nassets mvcs nassets nassets vcids populate probabilities means initial matrix valid cell elements covariances set zero stage int numcmps mprobs ucell int nassets mmeans allcells ucell int nassets mvcs stdev allcells ucell else mvcs debugging mode write corresponding parameters distribution initial values improving covariances dbug wrtdens type numcmps vcids mprobs mmeans mvcs cout improve initial estimate using ecme algorithm stop multivariate likelihood maximized cout endl string endl ecme algorithm type initial solution endl 
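One plausible formulation of the LP stage just described; this is my reading, since the exact constraint system is the one assembled by getcmtrx. Index the cells so that cell \(c\) uses component \(k_a(c)\) of asset \(a\), let \(\hat p_c\) be the fraction of time points whose per-asset component assignments fall in cell \(c\), and let \(\pi_{a,k}\) be the fitted univariate component probabilities. The cell probabilities are then chosen to reproduce the marginals while staying close to the empirical cell frequencies:
\[ \min_{p\ge 0}\ \max_c\,\bigl|p_c-\hat p_c\bigr| \quad\text{or}\quad \min_{p\ge 0}\ \sum_c\bigl(p_c-\hat p_c\bigr)^2 \qquad\text{s.t.}\qquad \sum_{c:\,k_a(c)=k} p_c=\pi_{a,k}\ \ \text{for all }a,k. \]
The minimax objective is linear as it stands; the squared-distance objective is separable and, per the header constants above, is discretized into a piecewise-linear form so that it can also be handed to solvelp (lpsolve). Cells that end up with zero probability drop out, which, together with the penalty terms mentioned in the summary, is how the number of multivariate regimes gets pruned.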
string endl endl fout endl string endl ecme algorithm type initial solution endl string endl endl numcmpsf ntpoints const long double rtrn nassets numcmps mlhs mrhs mprobs mmeans mvcs vcids rootdir write final density output file display user contains maximum likelihood estimates wrtdens type numcmpsf vcids mprobs mmeans mvcs fout wrtdens type numcmpsf vcids mprobs mmeans mvcs cout cout endl done processing type initial solution final density shown written file endl free temporary memory allocations int necmes delete mprobs mprobs delete mmeans mmeans delete mvcs mvcs delete vcids vcids int delete valcids valcids delete estprbs estprbs delete mprobs delete mmeans delete mvcs delete valcids delete mlhs delete mrhs delete estprbs int ncells delete allcells allcells delete allcells delete tmpvals delete tmpcombo delete cellasgn delete cellcnt delete cellprb int delete optmdst optmdst int nassets delete rtrn rtrn delete asgnmnt asgnmnt delete prob prob delete mean mean delete stdev stdev delete optmdst delete rtrn delete asgnmnt delete prob delete mean delete stdev delete ncomps delete numcmpsf exit cout endl done return hit return exit endl copyright chris rook program free software redistribute modify terms gnu general public license published free software foundation either version license option later version program distributed hope useful without warranty without even implied warranty merchantability fitness particular purpose see gnu general public license details http filename function fitmixdist summary function fits univariate normal mixture density set observations using algorithm within iterative procedure tests optimal components first density fit using standard maximum likelihood estimates tested mixture density fit using algorithm user specified large random starts note random starts specified parameter input control file multiple processing units computer running application addition value multiplied components uses increasing random starts densities components fitting particular density one largest likelihood value random starts obeying variance ratio constraint set header file using global constant stdratio selected estimator hypothesis test conducted using forward alpha significance level set user input control file parameter likelihood ratio test statistic lrt used derived maximized likelihood values respectively observed lrt statistic value compared critical value lrt distribution yields area right equal forward alpha distribution lrt statistic approximated via bootstrapping see mclachlan assuming mle fit reflect true distribution data random samples distribution generated univariate mixture density fit sample together generate single null value true null distribution lrt statistic assuming true null distribution fit mles using large samples yields set values used approximate null distribution lrt statistic corresponding critical value rejecting based forward alpha rejected univariate mixture density becomes new testing basis forward procedure continues test components otherwise rejected forward procedure testing basis remains density tested exact manner key bootstrapping lrt statistic null distribution assume density mle parameters true null density random samples generated forward procedure continues manner testing components components max components specified control file parameter maximum components set forward procedure conduct hypotheses tests sequentially alternatives components components components components respectively forward procedure end certain mixture 
components suggested optimal followed backward procedure tests univariate mixture density less component repeats test rejected ending procedure forward procedure ends suggesting components needed properly fit data backward procedure begins testing components components using backward alpha specified user control file parameter note forward procedure conducts series tests maxcmps maxcmps maximum components consider application set parameter control file components respectively therefore forward procedure ends optimal univariate mixtures fit estimates components maxcmps backward processing used generate random samples needed approximate null distribution lrt hypothesis rejected procedure ends components considered optimal given observation set else rejected backward procedure basis becomes components since significantly different components applying principle parsimony case backward procedure continues testing components components test rejected mixture density components becomes new basis procedure continues testing components components test rejected components considered optimal procedure ends backward portion continues manner test rejected consider example maximum components specified user control file parameter forward alpha parameter backward alpha parameter following details possible sequence testing events illustrates procedure iterates test alpha type forward forward forward forward forward forward forward forward forward backward backward backward result accept accept reject accept accept reject accept accept accept accept accept reject final result mixture considered provide best fit based forward backward alpha values significance levels forward processing ended component mixture producing best fit backward processing found significant difference components test performed forward processing difference components test also performed forwared processing backward test components performed forward processing smaller alpha rejected rejected using larger alpha backward processing tuning alphas user customizes procedure favor components inputs asset processed ranges thru total asset used debugging displaying results output window output file total observations time points current application specified user via control file parameter array size holding time point asset currently processed maximum components allow univariate mixture distribution fit function specified user via control file parameter maxcmps bootstrap samples use lrt approximating statistic null distribution specified user via control file parameter nsmpls random starts use finding estimates specific mixture density specified user via control file parameter note value taken multiple cores also increased multiple components nstrts array size hold forward backward alphas respectively lrts empty double array hold fitted univariate mixture distribution results applying procedure detailed array indexed component refers component probability refers component mean refers component standard deviation fnlmdst string output directory final univariate mixture density components considered written output file specified global constant ofile rdir outputs function populates empty array supplied via parameter optimal univariate mixture density fit procedure detailed number components density returned function call include int fitmixdist const int const int const long double const int maxcmps const int nsmpls const int nstrts const long double long double fnlmdst string rdir local variables int sas strt end incr cbbsol curopt tstid 
nbootsadj pvalcntr fprms cmpord string tmptxt long double lrt long double long double maxcmps long double maxcmps long double maxcmps long double maxcmps long double long double long double long double long double long double long double vector long double pvalue alpha vector int derive solution incoming sample using mle estimates int orimdst long double long double mixture used sample random orimdst orimdst orimdst orimdst asset arrays holds optimal solutions original sample avoid rebuild optimal solutions procedure orprob long double orprob ormean long double ormean orstd long double orstd orllval const long double orimdst cout string endl string endl processing asset endl string endl string endl endl forward int iteration limits depend procedure backward int end set null alternative hypotheses cbfsol else cbfsol write details iteration cout string endl string endl asset hypothesis test hnum direction endl string endl string endl endl string endl string endl asset best fit component normal mixture endl endl asset best fit component normal mixture endl string endl string endl backward processing check whether hypothesis already tested issue warning retrieve use existing solution int else different cout endl warning hypothesis already tested using tmptxt alpha see test rebuild input array need null dist generate bootstrap samples note components test hnum index optimal solution components int long double int general solution stored orxyz example orprob array single element orprob ormean array elements ormean ormean orstd array elements orstd orstd orstd report current optimal solution actual sample forward processing dbug cout endl optimal solution orllval endl getvratio endl showvals const long double variance ratio cout endl simulation used approximate null distribution lrt statistic sample generated local optimum found reduce simulation count also reduce simulation count lrt statistic happen local optimum found full model smaller likelihood optimum value reduced model may local optimum adjusted count held nbootsadj lrt value current solution numerator uses parameters already estimated need fit new solution using components incoming data denominator lrt done fit estimates transferred double arrays hold optimal solutions component sizes note forward processing hnum int oromdst long double note orimdst exactly hnum components forward processing less comps orllval hnum nstrts hnum const long double orimdst oromdst instantiate arrays size transfer optimal solution storage across component sizes orprob hnum new long double ormean hnum new long double orstd hnum new long double int orprob hnum ormean hnum orstd hnum algorithm fails converge original data must exit program orllval hnum lnegval cout error asset converge local optimum attempting fit components endl try increasing random starts decreasing components increasing variance ratio constraint endl exiting fitmixdist endl exit also exit algorithm finds inferior optimum compared fewer components orllval orllval hnum cout error likelihood asset fitting components less likelihood endl fitting hnum components inferior local optimum found increase random starts prevent endl exiting fitmixdist endl exit statlrt nsmpls orllval orllval report lrt statistic value actual sample dbug cout lrt details original sample orllval orllval lrts asset statlrt nsmpls endl endl bootstrap lrt statistic determine sampling dist reduced model one mle derived given components test components dbug cout processing bootstrap samples instantiate arrays hold optimal 
solutions reused sample int long double long double test hypothesis int nsmpls dbug cout nsmpls cout endl string else dbug cout string endl string endl string asset hypothesis test hnum start processing bootstrap sample nsmpls endl string endl string endl endl generate sample size ntpoints reduced model null hypothesis reduced model using mles correct set returns bootstrapping lrt statistic use values probs means stds generate random starts access solution using components less sample use model fitted generate random starts fitting model specified getrvals const long double tmprtrn fit components components models form lrt statistic random start always mixture tmprtrn tmprtrn lrt tmprtrn nstrts const long double lrt lnegval lrt tmprtrn nstrts const long double random observation null distribution following lrt reduce simulation size count lrt statistic negative data originate mixture components data originate mixture components lrt lnegval lrt lnegval statlrt lrt lrt else statlrt denominator gets decremented ways lrt statistic negative means inferior local optimum found local optimum found fitting either null component alternative component distributions spurious leading unboundedness statlrt report lrt statistic value bootstrap sample dbug lrt lnegval lrt lnegval statlrt cout lrt details sample lrt lrt lrt lrt lrts bootstrap sample nsmpls statlrt endl endl else lrt lnegval lrt lnegval lrt lnegval cout algorithm find local optimum tstid sample discarded endl endl else cout negative lrt test statistic statlrt sample discarded endl inferior largest local optimum found endl endl clear sample solution arrays int delete delete clear array holds optimal solution original sample int delete dbug cout nsmpls endl determine test result rejected move temporary values permanant placeholders null hypothesis rejected set variable stop sas int nsmpls statlrt statlrt statlrt nsmpls long double uncomment write lrt array file error folder values form null distribution test statistic ofstream fout errfolder long long long long long long fout lrt statistic original data statlrt nsmpls endl endl fout null distribution lrt statistic values nsmpls bootstrap samples set values missing endl endl int nsmpls fout statlrt statlrt endl else pvalue write hypothesis test result retest report prior result note retest sas sas holds prior test number sas cout endl string endl hypothesis test hnum endl hypothesis test uses nbootsadj valid lrts values endl pvalcntr values cout resulting testing pvalue result endl string resampling along lrts value original sample sample lrt statistic endl alpha endl else cout endl string endl hypothesis test hnum retest hypothesis test sas endl string endl cout resulting testing pvalue alpha endl accept reject hypothesis rejected considered better fit forward testing additional components integer value backward testing considered acceptable fit rejected integer value keep track best solutions forward backward testing index optimal solution accessing orxyz double arrays finished processing orxyz orxyz pvalue rejected set cbfsol components else stop backward processing first rejected else curopt int cbfsol int cbbsol displaying current solution user forward processing max components reset orimdst clear oromdst end input output arrays orimdst densities generate random starts fitting original sample one additional component note original double mixture arrays needed forward processing int delete orimdst orimdst long double int orimdst delete oromdst oromdst report current best solution testing 
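A fact implicit in the discarding rules above: because a g-component mixture is a special case of a (g+1)-component mixture (set one weight to zero, or duplicate a component and split its weight), the true maximized log-likelihoods satisfy
\[ \hat\ell_{g+1}\ge\hat\ell_{g}, \qquad\text{hence}\qquad \lambda=-2\bigl[\hat\ell_{g}-\hat\ell_{g+1}\bigr]\ge 0. \]
A negative bootstrap \(\lambda\) therefore signals that the run for the larger model stopped at an inferior local optimum (or that a spurious fit was screened out by the variance ratio constraint), so that sample is removed from the approximated null distribution rather than kept.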
hypothesis cout null hypothesis rejected favor alternative hypothesis endl stage curopt normal mixture provides best fit asset endl endl string endl string endl endl processor cool dbug cout endl processor cool double minutes endl sleep cdown display clear containers cout asset test number endl string endl int cout hypothesis test setfill setw setw setw pvalue cout alpha alpha cout values orllval orllval endl insert optimal solution double array passed function components returned function order mixture returned respective means int cbbsol int cbbsol ormean ormean account ties int cbbsol fnlmdst cmpord populate outgoing array fnlmdst cmpord fnlmdst cmpord fnlmdst cmpord display final result asset fprms cbbsol free parameters cout endl distribution asset cbbsol normal mixture full details endl endl orllval endl variance ratio getvratio cbbsol orstd endl aic long double fprms orllval endl bic orllval long double fprms log long double endl aicc long double fprms orllval long double fprms fprms long double fprms endl density parameters endl showvals cbbsol const long double fnlmdst cout endl string endl string endl write densities output file asset along ofstream fout else fout string endl asset optimal densities endl string endl int maxcmps fout endl optimal solution endl string fprms free parameters fout endl orllval endl variance ratio getvratio orstd endl aic long double fprms orllval endl bic orllval long double fprms log long double endl aicc long double fprms orllval long double fprms fprms long double fprms endl density parameters endl int fout string prob orprob mean ormean orstd endl fout endl endl asset test number endl string endl int fout hypothesis test setfill setw setw setw pvalue fout alpha alpha fout values orllval orllval endl fout endl endl optimal density asset endl string endl showvals cbbsol const long double fnlmdst fout fout endl endl clear vectors free temporary memory allocations int delete orimdst orimdst delete oromdst oromdst delete delete orimdst delete oromdst delete int maxcmps delete orprob orprob delete ormean ormean delete orstd orstd delete orprob delete ormean delete orstd delete tmprtrn delete statlrt delete orllval delete delete delete return optimal components return cbbsol copyright chris rook program free software redistribute modify terms gnu general public license published free software foundation either version license option later version program distributed hope useful without warranty without even implied warranty merchantability fitness particular purpose see gnu general public license details http filename function thrdemalg summary function threads algorithm individual thread used maximize likelihood function random start likelihood functions mixture pdfs multiple local optimums unbounded mles parameters yield largest local maximums removing spurious optimizers spurious optimizers single component used fit one small number closely clustered observations cases variance corresponding component becomes small approaches zero drives likelihood value infinity spurious optimizers eliminated imposing variance ratio constraint allow ratio largest smallest variance across components exceed given constant prevent variance approaching zero small number user must set variance ratio constraint value appropriate data application intent construct density function memorizes training data value set high intent build density extends well data value set low global constant stdratio set header file square root desired variance ratio constraint threads set 
independent processing units computer running application well components mixture fit random starts parameter setting control file parameter random starts equal cores random starts specified parameter cores indpendent processing units computer running application components univariate mixture density fit random starts therefore increases size density fit random start assigned thread launches emalg function find likelihood maximizer based random start unique value reflects local maximum nearest parameter settings random start threads finish parameter settings random start yields largest likelihood value obeying variance ratio constraint taken mles empty double array populated values maximum function value returned call inputs total observations time points current application specified user via control file parameter array size holding time point asset currently processed random starts use finding estimates specific mixture density specified user via control file parameter note value taken multiple cores also increased multiple components increasing random starts used components increases nstrts components univariate mixture distribution used generate random starts means algorithm example fitting univariate mixture distribution asset optimal univariate mixture distribution available asset mixture distribution used generate outg argument function means starting point constructing random starts mixture distribution available example bootstrapping lrt components mixture used generate means random start note recall random start built generating random values serve component means observation attached nearest mean component probability component attached divided total observations standard deviation component sample standard deviation observations assigned ing array hold univariate mixture distribution used generate means random start density function ing components mentioned double array indexed component refers component probability refers component mean refers component standard deviation noted generating random starts algorithm density function similar components fit desirable therefore fitting component mixture density optimal mixture density available used generate random starts optimal mixture density available univariate normal distribution mixture used generate means random start inmdst components outgoing univariate mixture density fit using algorithm outg empty double array hold fitted univariate mixture distribution fit using algorithm double array indexed component refers component probability refers component mean refers component standard deviation outmdist outputs function returns optimal univariate mixture fit using algorithm function call also populates supplied empty double array outmdist corresponding optimal univariate mixture distribution include long double thrdemalg const int const long double const int const int ing const long double inmdist const int outg long double outmdist local variables long double llval handle trivial case first case outg int outmdist transfer probabilities means standard deviations llval getllval outg const long double outmdist compute else local variables find independent processing units create array thread objects using random start multiplication factor specified control file random starts specified control file multiple independent processing units int int boost int long double long double long double long double long double boost boost string cmt spc display message details optimization process dbug cout threading algorithm use random 
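A self-contained sketch of the launch-and-join pattern being set up here, using std::thread in place of the Boost threads the program uses. run_em_from_random_start is a hypothetical stand-in for the emalg call (its body below is a dummy so the sketch compiles); the real code also screens each result against the variance ratio constraint, the iteration limit and invalid probabilities before comparing likelihood values.

#include <limits>
#include <thread>
#include <vector>

struct EmResult { double loglik; std::vector<double> pi, mu, sd; };

// Hypothetical stand-in for emalg: a real implementation would run EM from one
// random start (seeded by `seed`) to the nearest local optimum.
EmResult run_em_from_random_start(const std::vector<double>& x, int g, unsigned seed) {
    (void)x; (void)seed;
    return EmResult{ -1e300, std::vector<double>(g, 1.0 / g),
                     std::vector<double>(g, 0.0), std::vector<double>(g, 1.0) };
}

// Launch one thread per random start, wait for all of them, keep the best fit.
EmResult best_of_random_starts(const std::vector<double>& x, int g, int nstarts) {
    std::vector<EmResult> results(nstarts);
    std::vector<std::thread> pool;
    for (int s = 0; s < nstarts; ++s)
        pool.emplace_back([&, s] { results[s] = run_em_from_random_start(x, g, 1000u + s); });
    for (auto& t : pool) t.join();

    EmResult best; best.loglik = -std::numeric_limits<double>::infinity();
    for (const auto& r : results)
        if (r.loglik > best.loglik) best = r;     // largest local optimum wins
    return best;
}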
starts current optimization endl dbug cout note variance ratio constraint violated maximum iterations reached invalid probability detected endl set arbitrarily large negative cout lnegval endl cout endl create arrays hold random starts return values algorithm random start must specify unknown parameters mixture component probabilities means standard deviations arrays reused within run group int rprbs new long double outg rmns new long double outg rstds new long double outg runprms new int runprms runprms iterate random starts determine start launch optimization calls int launch call emalg thread threads finish scan likelihood values across solutions inner loop select largest local optimum optimum outer loop repeats process number times boost emalg boost outg boost rprbs boost rmns boost rstds boost llvals boost inmdist boost runprms pause finish save optimal solution group runs int report results requested dbug runprms variance ratio constraint violated solution used else runprms maximum iterations reached solution used else runprms llvals llvals else llvals llvals else runprms invalid probability encountered solution used cout outg random start setfill setw long long runprms cout spc llvals cout iterations setfill setw long long miters runprms cmt endl retrieve optimal solution scan random starts locate one associated highest likelihood value optimal probabilities means standard deviations transferred placeholders passed function local optimum found set return code otherwise use return code fstgrp int outg outmdist outmdist outmdist else llvals llval int outg outmdist outmdist outmdist dbug cout endl free temporary memory allocations int delete rprbs rprbs delete rmns rmns delete rstds rstds delete runprms runprms delete rprbs delete rmns delete rstds delete runprms delete llvals delete report optimal solution dbug cout outg optimal solution llval endl llval lnegval showvals outg const long double outmdist cout endl variance ratio getvratio outg outmdist endl return value corresponding optimal solution return llval copyright chris rook program free software redistribute modify terms gnu general public license published free software foundation either version license option later version program distributed hope useful without warranty without even implied warranty merchantability fitness particular purpose see gnu general public license details http filename function emalg summary function implements algorithm estimating parameters univariate mixture pdf observations univariate mixture pdf viewed incomplete data problem component point missing unobserved time point random variables produce value component value note sets rvs parameters estimated viewed way likelihood function expressed using missing random variables corresponding parameters example parameters unobserved component random variable observation originates component probabilities parameters observed density values means standard deviations component distribution algorithm estimates parameters random variables iteratively follows step select starting values parameters random variables step compute expected values random variables using recent parameter estimates step replace instances missing random variables expected values likelihood function step optimize resulting likelihood function respect parameters random variables step check likelihood value maximized step convergence small condition step convergence condition met step return step starting values step computed random starts parameter values acheived first generating 
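For reference, the updating equations implemented below are the standard EM updates for a univariate normal mixture (my notation). Given current parameters \((\pi_k,\mu_k,\sigma_k^2)\), the E-step computes responsibilities and the M-step re-estimates every component:
\[ \tau_{ik}=\frac{\pi_k\,\phi(x_i;\mu_k,\sigma_k^2)}{\sum_{j=1}^{g}\pi_j\,\phi(x_i;\mu_j,\sigma_j^2)}, \]
\[ \pi_k^{\mathrm{new}}=\frac{1}{n}\sum_{i=1}^{n}\tau_{ik}, \qquad \mu_k^{\mathrm{new}}=\frac{\sum_i\tau_{ik}\,x_i}{\sum_i\tau_{ik}}, \qquad \bigl(\sigma_k^{2}\bigr)^{\mathrm{new}}=\frac{\sum_i\tau_{ik}\,\bigl(x_i-\mu_k^{\mathrm{new}}\bigr)^2}{\sum_i\tau_{ik}}. \]
Each pass cannot decrease the observed-data log-likelihood; iteration stops when the increase falls below the epsilon threshold, when miters iterations are reached, when the variance ratio constraint is violated, or when a component probability collapses to zero, matching the exit conditions listed above.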
values closest distribution one fit fitting univariate mixture optimal univariate mixture available use mixture generate random starts density available use mixture values taken means observation attached closest mean standard deviation set obserations attached mean computed component standard deviation proportion observations attached mean component probability random start convergence step checked using global constant epsilon set header file step also check conditions exit optimization following error conditions met variance ratio constraint violated control global constant stdratio set header file maximum iterations reached control global constant miters set header file component probability becomes zero negative iterating inputs total observations time points current application specified user via control file parameter array size holding time point asset currently processed components univariate mixture density fit empty array size hold probability component univariate mixture array updated iteration algorithm therefore holds optimal probabilities upon convergence returned array calling function note parameter double array indexed thread assigned thrdemalg invokes function algorithm implemented within threaded calls random start assigned thread prbs empty array size hold mean component univariate mixture array updated iteration algorithm therefore holds optimal means upon convergence returned array calling function note parameter double array indexed thread assigned thrdemalg invokes function algorithm implemented within threaded calls random start assigned thread mns empty array size hold standard deviation component univariate mixture array updated iteration algorithm therefore holds optimal standard deviations upon convergence returned array calling function note parameter double array indexed thread assigned thrdemalg invokes function algorithm implemented within threaded calls random start assigned thread stds array hold value algorithm converges optimal value given random start array indexed thread therefore single call function generates optimal value returned element current thread llval double array hold univariate mixture density used generate random starts function begins generating random start optimizes function based start finds nearest local maximum double array indexed component refers component probability refers component mean refers component standard deviation inmdist array integer parameters holding values passed returned function element holds components mixture density used generate random starts parameterized inmdist parameter function element thread current call element return code takes following values ratio constraint violated iterations reached based change value component probability element iteration convergence algorithm termination one reasons decoded element rprms outputs function return value call updates several empty arrays supplied user prbs array updated final estimated component probabilities mns array updated final estimated component means stds array updated final estimated standard deviations llval array updated final value rprms array updated element functions return code element iterations convergence termination include void emalg const int const long double const int long double prbs long double mns long double stds long double llval const long double inmdist int rprms local variables const int int stop long double oldllval newllval minstd maxstd cprbs ssqrs var long double psum long double long double generate random obs solution rprms 
components specified inmdist use means generate probabilities standard deviations random start getrvals ing const long double inmdist mns thrd getrprbsstds prbs thrd mns thrd stds thrd check none probabilities zero random sample variance ratio constraint violated random sample note component standard deviation disqualify sample generate new sample probability zero variance ratio constraint violated int prbs thrd stds thrd minstd thrd stds thrd maxstd thrd maxstd stdratio minstd stop store random start updated mixture distribution array debugging compliance functions int ormdst long double umdst long double int ormdst thrd ormdst thrd ormdst thrd store component likelihood component probability pdens using current solution store mixture likelihood value mdens observation using current solution needed implement updating equations initial random start parameters also computed used pdens new long double mdens new long double oldllval long double int pdens new long double mdens int pdens thrd getndens mns thrd stds thrd mdens pdens oldllval mdens iterate using algorithm component probabilities updated first independently int get updated component probabilities means standard deviations store temporary placeholders mean standard deviation update formulas work written component probability zero division zero happens end optimization error int derive posterior probabilities component along component probabilities umdst derive component manually last component int postprbs pdens umdst umdst postprbs umdst umdst psum psum umdst else final component probability sum others int postprbs pdens umdst psum exit optimization appropriate code single probability umdst umdst llval thrd rprms rprms else new updating equations faster processing umdst postprbs cprbs ssqrs ssqrs umdst umdst cprbs updating equations result zero stored value negative variance happens variance ratio constraint automatically violated var umdst var else llval thrd rprms rprms find maximum minimum stdev values algorithm stop ratio largest smallest variance exceeds constant variable stdratio set header file prevents unbounded likelihood value constraint violated conclude solution spurious likelihood set large negative ensure never maximum across random starts also stop maximum iterations exceeds value miters set header file component probability stop int umdst minstd minstd umdst umdst maxstd maxstd umdst maxstd stdratio minstd llval thrd rprms rprms else itcntr miters llval thrd rprms rprms variance ratio constraint violated proceed usual stop transfer values permanant placeholders variance ratio constraint violated int prbs thrd mns thrd stds thrd store component likelihood component probability pdens using updated solution store mixture likelihood value mdens observation using updated solution implement updating equations initial random start parameters also computed used newllval long double int mdens int pdens thrd getndens mns thrd stds thrd mdens pdens newllval mdens terminate algorithm criteria met epsilon abs oldllval llval thrd rprms rprms else stop write files debugging rprms ofstream fout errfolder long long long long thrd fout maximum iterations miters reached optimization check solution endl endl fout new value newllval endl fout epsilon epsilon endl endl fout endl original observation vector parameter starting values emalg call endl endl int fout endl fout endl int fout prob ormdst mean ormdst ormdst endl else rprms ofstream fout errfolder long long long long thrd fout probability encountered updating equations work check 
solution endl endl fout original observation vector parameter starting values emalg call endl endl int fout endl fout endl int fout prob ormdst mean ormdst ormdst endl delete temporary memory allocations int delete pdens pdens delete pdens delete mdens int delete ormdst ormdst delete umdst umdst delete ormdst delete umdst delete postprbs copyright chris rook program free software redistribute modify terms gnu general public license published free software foundation either version license option later version program distributed hope useful without warranty without even implied warranty merchantability fitness particular purpose see gnu general public license details http filename function getrvals summary function accepts mixture distribution input generates random sample observations distribution observations generate specified user call sample placed empty array supplied function user generating random sample mixture distribution process generate uniform random value compare component probabilities determine component component selected generate observation corresponding component density let probability component let total components uniform random value component selected else component selected else component selected etc component selected observation generated corresponding density function result observation generated supplied mixture distribution reasons generating random observations univariate mixture distribution described application univariate mixture distributions fit using algorithm random starts random start must specify value parameters given univariate mixture density known size unknown parameters algorithm continues increase likelihood function local optimum found based given parameter settings random start optimize mixture density component mixture density available data set generate single random start producing random observations component density values taken means mixture observations assigned component nearest mean using standard distance computation observations assigned closest mean standard deviation component mean computed taking standard deviation corresponding set observations assigned mean probability assigned component observations assigned component divided total observations data set point values parameters means standard deviations component probabilities derived algorithm applied starting given point parameter space algorithm climbs top nearest hill declares local optimum maximum likelihood estimators would set parameters yields largest value likelihood function amongst set local optimums found via large random starts likelihood ratio test lrt used select optimal components univariate mixture density fit given observation set null hypothesis components optimal alternative components optimal fitting univariate mixtures sizes data generate single value lrt statistic value compared null distribution lrt statistic distribution lrt assumption component mixture appropriate size distribution lrt known asymptotically due relevant regularity conditions satisfied estimate null distribution lrt bootstrapping see mclachlan bootstrap lrt distribution generate random sample distribution specified size data set note specify particular univariate mixture rather components namely take estimates data components distribution governed null hypothesis distribution used generate sample observations sample fit univariate mixture components applying algorithm using random starts single value lrt statistic produced repeating process large times approximate null distribution 
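A minimal, self-contained version of the sampling scheme just described: pick a component in proportion to its probability, then draw once from that component's normal. The function name and the parallel-vector interface are mine, and std::discrete_distribution is used for the component draw, whereas the code below compares a uniform draw against cumulative probabilities by hand.

#include <random>
#include <vector>

// Draw n observations from a univariate normal mixture described by parallel
// vectors of component probabilities, means and standard deviations.
std::vector<double> sample_mixture(int n,
                                   const std::vector<double>& pi,
                                   const std::vector<double>& mu,
                                   const std::vector<double>& sd,
                                   std::mt19937& gen) {
    std::discrete_distribution<int> pick(pi.begin(), pi.end());  // component selector
    std::vector<double> out(n);
    for (int i = 0; i < n; ++i) {
        const int k = pick(gen);                                 // choose a component
        std::normal_distribution<double> comp(mu[k], sd[k]);     // then sample from it
        out[i] = comp(gen);
    }
    return out;
}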
lrt statistic compare value derived data set reject null hypothesis lrt large critical point determined user choice alpha type error test type error probability probability null hypothesis rejected true function used generate random samples used lrt described inputs random observations generate supplied univariate mixture density observations inserted empty array size supplied user last parameter function components univariate mixture density sample generated double array holding univariate mixture distribution definition sample size generated array indexed component refers component probability refers component mean refers component standard deviation inmdst empty array size populated function random sample supplied univariate mixture distribution rvls outputs function populates empty array random sample size supplied univariate mixture distribution value returned function call include void getrvals const int const int const long double inmdist long double rvls generate random observations current optimal solution uses components means use random start fitting specific model observations used approximate null distribution lrt statistic gen long double ndist new long double long double udist define array normal distribution objects one component int ndist long double inmdist inmdist generate observations existing mixture distribution int cid long double uval psum int initialize variables generate uniform random value uval udist gen find corresponding component int cid uval psum else psum psum inmdist generate single obs component store array provided rvls ndist cid gen free temporary memory allocations delete ndist copyright chris rook program free software redistribute modify terms gnu general public license published free software foundation either version license option later version program distributed hope useful without warranty without even implied warranty merchantability fitness particular purpose see gnu general public license details http filename function getrprbsstds summary function accepts set observations along set means mixture distribution derives corresponding values component probabilities standard deviations observation assigned single component mean observations assigned component divided total observations corresponding estimate component probability observations assigned components using simple distance function specifically observation assigned component nearest mean standard deviation component estimated sample standard deviation set assigned component assuming mean known empty arrays size supplied function hold set component probabilities set standard deviations respectively combined existing set means derived random sample via function getrvals arrays mean standard deviation component probability completely define mixture distribution means generated random sample defines single random start fitting mixture distribution using algorithm inputs total time points data collected array holding set observations returns asset processed means components array provided parameter used generate random start mixture density empty array size populated function component probabilities univariate mixture density constructed prbs array means components use basis generating mixture distribution mns empty array size populated function standard deviations univariate mixture density constructed stds outputs function populates two empty arrays size supplied prbs stds component probability estimates standard deviation estimates components defined means supplied mns value returned 
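And the companion step just described, turning a set of candidate component means into the rest of a random start (nearest-mean assignment, then per-component probability and divide-by-count standard deviation). Again the function name and return type are mine; the program instead fills caller-supplied arrays and redraws the start if any component ends up empty.

#include <cmath>
#include <vector>

struct StartParams { std::vector<double> pi, sd; };

// Assign each observation to the nearest candidate mean, then compute the
// implied component probabilities and maximum-likelihood standard deviations.
StartParams probs_and_stds_from_means(const std::vector<double>& x,
                                      const std::vector<double>& mu) {
    const int g = static_cast<int>(mu.size());
    std::vector<int> count(g, 0);
    std::vector<double> ssq(g, 0.0);
    for (double xi : x) {
        int best = 0;
        for (int k = 1; k < g; ++k)
            if (std::fabs(xi - mu[k]) < std::fabs(xi - mu[best])) best = k;
        ++count[best];
        ssq[best] += (xi - mu[best]) * (xi - mu[best]);
    }
    StartParams s{ std::vector<double>(g), std::vector<double>(g) };
    for (int k = 0; k < g; ++k) {
        s.pi[k] = static_cast<double>(count[k]) / static_cast<double>(x.size());
        s.sd[k] = count[k] > 0 ? std::sqrt(ssq[k] / count[k]) : 0.0;  // empty component: caller should redraw
    }
    return s;
}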
function call include void getrprbsstds const int const long double const int long double prbs const long double mns long double stds local variables long double long double mindist int cid int initialize component counter arrays zeros int ccntr cssqrs iterate observations classify component whose mean closest int int abs mindist cid cssqrs cid cid compute estimated probabilities standard deviations mles int prbs long double ccntr long double stds sqrt long double cssqrs delete temporary memory allocations delete ccntr delete cssqrs copyright chris rook program free software redistribute modify terms gnu general public license published free software foundation either version license option later version program distributed hope useful without warranty without even implied warranty merchantability fitness particular purpose see gnu general public license details http filename function asgnobs summary function assigns observation time point application one univariate mixture components given asset asset fit univariate mixture distribution containing certain components component viewed generator observations asset corresponding component probability observation originates single component density bayes decision rule used assign observation corresponding component computing posterior probability observation originates component observation assigned component highest posterior probability function performs task user supplies empty array given asset size equal time points inserted array position component observation likely originates therefore assigned inputs empty array size hold component observation assigned determined function inary total number time points data collected array size holding returns asset processed number univariate components asset processed optimal univariate mixture distribution fit given asset array indexed component refers component probability refers component mean refers component standard deviation inmdst outputs function populates empty array size supplied component given observation likely originates bayes decision rule used observation assigned component highest posterior probability function returns value call include void asgnobs int inary const int const long double const int const long double inmdst declare local variables long double maxprob tmpprob assign observation per time point component highest posterior probability bayes rule int int const long double inmdst tmpprob maxprob inary copyright chris rook program free software redistribute modify terms gnu general public license published free software foundation either version license option later version program distributed hope useful without warranty without even implied warranty merchantability fitness particular purpose see gnu general public license details http filename function mapcells summary multidimensional grid formed using components univariate marginal mixture densities assets assets asset univariate mixture components multidimensional grid used basis multivariate density total cells function converts multidimensional grid single holding cells value represents one cell grid therefore contains set components one per asset example first element list contains assets first component element list contains assets first component final asset component element list contains assets first component last asset component etc list ordered design matrix full factorial experiment left right values right repeatedly cycling components set values left function populates empty list supplied keep track 
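The Bayes decision rule used for observation assignment reduces to an argmax over products of component probability and component density, since the posterior probability of a component given an observation is proportional to that product. The sketch below (hypothetical names normalPdf and assignComponent, plain arrays to match the style of the surrounding code) shows the univariate case; it is an illustration of the rule, not the program's asgnobs.

#include <cmath>
#include <cstddef>

// Univariate normal density, used to score each candidate component.
static double normalPdf(double x, double mean, double sd)
{
    static const double two_pi = 2.0 * std::acos(-1.0);
    const double z = (x - mean) / sd;
    return std::exp(-0.5 * z * z) / (sd * std::sqrt(two_pi));
}

// Bayes decision rule: the posterior probability of component c given x is
// proportional to probs[c] * normalPdf(x, means[c], sds[c]); assign x to the
// component with the largest such product.
std::size_t assignComponent(double x, std::size_t k,
                            const double* probs, const double* means, const double* sds)
{
    std::size_t best = 0;
    double bestPost = -1.0;
    for (std::size_t c = 0; c < k; ++c) {
        const double post = probs[c] * normalPdf(x, means[c], sds[c]);
        if (post > bestPost) { bestPost = post; best = c; }
    }
    return best;
}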
component levels used given cell term added list making array indexed unique cell asset element would indicate component first asset within first cell multidimensional grid element would indicate component level asset within first cell multidimensional grid total assets elements positions would indicate set components assets within cell multidimensional grid essentially function converts multidimensional grid list easier manage cell list contains term identify contents list item single cell example suppose assets components respective univariate mixture densities multidimensional grid formed crossing univariate components across assets contain total cells grid forms basis building multivariate mixture density list length used represent cell components contained within cell follows cell asset asset asset array index example last row defined incellary incellary incellary note list derived starting last asset repeatedly cycling component levels proceeding last asset cycling component levels set defined proceeding last asset cycling component levels sets defined right strategy used convert multidimensional grid single list function recursive single call asset call cycles levels given asset invoking call asset right level final asset additional recursive calls made levels asset posted manner function begins asset burrows inward asset expands outward back asset debug level set value details mapping printed review similar table shown inputs total cells multidimensional grid assets asset components corresponding univariate mixture density totcells number assets current application numa array populated function indexed unique cell asset begin value contained component level asset within unique cell incellary current asset processed function processes asset separately iterates component levels component level function recursively invokes next asset similarly processes component order curast array hold components asset respective univariate mixture density array hold values described incmps integer value hold current unique cell value begins ends cell defined written incellary array value incremented cid array hold component level processed asset function starts first asset iterates levels corresponding univariate mixture level function invoked recursively process next asset function iterates levels asset recursively invokes process next asset final asset cell completely defined result appended list array persistant size numa hold current component level processed asset final asset final recursive call numa components held array defines given unique cell tmpary outputs function converts multidimensional grid formed crossing assets univariate component levels single list element list defines one cell multidimensional grid straightforward navigate grid manner list elements first unique cell array component levels define given cell function recursive invokes returns value call include void mapcells const int totcells const int numa int incellary const int curast const int incmps int cid int tmpary output cell mappings debugging mode curast dbug cout string endl cell mappings endl string endl iterate components current asset component recursively invoke function process next asset int incmps curast store component level current asset tmpary curast recursive call processing final asset processing final asset populate array increment cell counter curast mapcells totcells numa incellary incmps cid tmpary else dbug cout cell setfill setw long long totcells cid int numa dbug cout asset tmpary incellary cid 
tmpary dbug cout endl cid cid copyright chris rook program free software redistribute modify terms gnu general public license published free software foundation either version license option later version program distributed hope useful without warranty without even implied warranty merchantability fitness particular purpose see gnu general public license details http filename function getcell summary function accepts set component levels one per asset returns unique cell multidimensional grid contains set grid contains cell combination univariate mixture components across assets match found value returned function iterates array supplied match found returns cell exits array supplied indexed unique cell asset inputs array cell begin value contained component level asset within unique cell incellary total cells multidimensional grid assets asset components corresponding univariate mixture pdf totcells array size numa containing set asset component levels attempting match unique cell match returned call cmplvls number assets current application numa outputs function searches match set numa asset component levels returns unique cell cell ranges return match found include int getcell const int incellary const int totcells const int cmplvls const int numa iterate mapped values find match int cntr int totcells int numa incellary cmplvls check match return cell position match cntr numa dbug cout cell endl return match return match copyright chris rook program free software redistribute modify terms gnu general public license published free software foundation either version license option later version program distributed hope useful without warranty without even implied warranty merchantability fitness particular purpose see gnu general public license details http filename function solvelp summary function solves linear programs lps become feasible initial solutions maximizing multivariate mixture distribution likelihood univariate mixture densities asset already fit using algorithm combined grid grid levels dimension corresponding component combinations multidimensional grid forms basis multivariate density function also mixture pdf cell multivariate grid defines unique combination assets components using bayes decision rule assign observation given asset time point corresponding univariate component based component highest probability membership using individual component memberships combine assign multivariate observation single cell multidimensional grid probability observation originating grid cell observations given cell divided total observations time points estimated probability new observations originates grid cell refer estimate applicable cell note grid cell defines multivariate density function using corresponding means variances define cell point covariances undefined zero important aspect research must maintain univariate marginals already fit accomplish sum probabilities cell containing given must equal corresponding probability component univariate density implies use linear constraints grid cell probabilities maintain marginals since sum grid cell probabilites must equal univariate components constraints needed maintain univariate marginal asset constraints enforced final sum probabilities constraint component automatically enforced fact sum probabilities equals means total total components across assets total assets constraints required maintain marginals set linear equality constraints formulated matrix notation matrix one row holding coefficients single constraint matrix 
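The full-factorial cell bookkeeping handled by mapcells and getcell is equivalent to mixed-radix ("odometer") arithmetic with the last asset cycling fastest, which gives a closed-form alternative to scanning the mapping array. The helpers below (cellIndex and cellLevels are hypothetical names, not identifiers from this program) assume 1-based component levels as in the example above and a 0-based cell id.

#include <vector>

// Mixed-radix index of a cell in the grid formed by crossing each asset's
// univariate components.  Levels are 1-based, the last asset cycles fastest,
// and the returned cell id is 0-based.
long long cellIndex(const std::vector<int>& levels,   // chosen level per asset (1..ncmps[a])
                    const std::vector<int>& ncmps)    // number of components per asset
{
    long long id = 0;
    for (std::size_t a = 0; a < levels.size(); ++a)
        id = id * ncmps[a] + (levels[a] - 1);
    return id;
}

// Inverse mapping: recover the component level of every asset for a given cell id.
std::vector<int> cellLevels(long long id, const std::vector<int>& ncmps)
{
    std::vector<int> levels(ncmps.size());
    for (std::size_t a = ncmps.size(); a-- > 0; ) {
        levels[a] = static_cast<int>(id % ncmps[a]) + 1;
        id /= ncmps[a];
    }
    return levels;
}

For the three-asset example above with 2, 2 and 3 components, cell (1,1,1) maps to id 0 and cell (2,2,3) to id 11, matching the enumeration order in which the last asset's levels are cycled first.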
column decision variable single grid cell probability probability multivariate mixture density refer lhs constraint matrix rhs constraint vector vector hold component probabilities univariate marginals first components needed since probability final component within asset automatically enforced last row final constraint sum probabilities equals note lhs constraint matrix full row rank meaning include minimum constraints needed enforce marginals rows must linearly independent requirement needed future optimization uses matrix cell probabilities estimated using data via bayes decision rule see generally satisfy marginal constraints therefore estimates general maintain marginals purpose function find cell probabilities maintain marginals way close possible estimated probabilities offer methods first formulate problem assign unknown decision variable unique cell multidimensional grid represents true probability membership cell decision variables estimated using following objectives minimize maximum distance estimated unique cell probabilties decision variable represents cell minimize sum squared distances decision variables estimated unique cell probabilities classic minimax objective form min max unique cells multidimensional grid decision variable true cell probability cell corresponding estimated cell probability distance two objective select minimizes maximum distance values constraints satisfied marginal densities maintained written objective contains absolute values therefore linear however rewritten equivalent linear program example note objective rewritten min max since absolute values removed next let max objective becomes min note following inequalities must hold maximum set values therefore must members set since objective minimize must take one values bounds optimal solution using new objective added constraints problem equivalent linear program lastly important keep multivariate density parsimonious possible meaning carrying fewer unique cells multivariate mixture density desirable fewer components translates fewer parameters also run problems optimizing multivariate mixture likelihood component included generates zero likelihood value set observations time points future computation see component probabilities prefixed corresponding likelihood value objective likelihood multivariate component density zero time points decision variable effectively removed problem corresponding hessian full rank problem applying newton method example prevent components zero likelihood carried merely satisfy constraints add decision variable unique cell multidimensional grid penalizes objective function cell zero observations included solution help guarantee keep cells contain actual data points prevents likelihoods zero time points similar objective minimize sum squared distances actual estimated probabilities subject marginal constraints objective takes form min note term sum quadratic concave centered furthermore decision variable exists term sum objective known quadratic program separable definition means linear however approximated arbitrarily close linear program since define set decision variables approximate concave quadratic term sum range approximation consist line segments trace term range first set number line segments use tracing curve done via global constant dlvl defined header file current next compute values horizontal axis probability axis equidistant cover region note points current setting line segments points fixed dlvl known also change per term sum points probability axis used 
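Collecting the minimax reformulation described above in one place: with p the vector of unknown cell probabilities, p-hat the empirical (Bayes-rule) estimates, and A p = b the marginal constraints, the problem can be written as the linear program below. This is the standard rewriting of a minimax absolute-deviation objective; the feasibility-factor penalty discussed above is omitted here for brevity.

\begin{aligned}
\min_{p,\,z}\quad & z \\
\text{s.t.}\quad  & z \ge p_i - \hat{p}_i, && i = 1,\dots,N,\\
                  & z \ge \hat{p}_i - p_i, && i = 1,\dots,N,\\
                  & A p = b, \qquad p \ge 0,
\end{aligned}

so that at the optimum z equals \max_i |p_i - \hat{p}_i| and is as small as possible while the univariate marginals are maintained.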
trace concave term objective points known compute function evaluated point example considering first term sum function evaluated point values fixed constant dlvl determined estimates known change per term sum total components quantities defined connecting dots function values trace curve decision variables true probabilities unique cell decision variables alpha values term sum sum alpha variable attached segment boundary horizontal axis probability value defined using alpha variables bound value example create value using alpha values general define note alpha variables specific component sum lastly quadratic objective component estimated arbitrarily close known cell estimate constant multipliers defined stored variables objective function constraints linear alpha decision variables within component alpha variables must sum defined use variables build constraints maintain marginals linear objective linear constraints approximates quadratic separable program linear programs solved fast global solution using simplex algorithm use free library functions solve lps variable summary minimax objective total decision variables defined array vbl totcells vbl vbl hold totcells totcells probability decision variables detailed referenced column totcells vbl totcells vbl hold totcells totcells totcells feasibility factor decision variables referenced column totcells one constraint true cell probability value decision variable must total observations fall cell plus cell corresponding feasiblity factor thus cell zero actual observations assigned needs probability optimal solution feasibility factor must set positive value satisfy constraint objective add feasibility factors multiplied large constant set header file applies penalty objective cell observations included optimal solution makes occurence rare noted want avoid including grid cells zero likelihood across time points since lead hessian future optimization vbl totcells holds objective function decision variable referenced column minimum sum squared distances objective total decision variables defined array vbl totcells vbl vbl dlvl hold alpha values sum within unique cell multidimensional grid say corresponding optimal probability value derived using values combined corresponding segment endpoints vbl vbl hold alpha values sum within unique cell multidimensional grid say corresponding optimal probability value derived using values combined corresponding segment endpoints etc cont vbl vbl totcells hold alpha values sum within last unique cell multidimensional grid say corresponding optimal probability value derived using values combined corresponding segment endpoints vbl totcells vbl totcells hold single feasiblity factor assigned per cell corresponding cell probability must observations assigned cell variable observations cell required optimal solution feasibility factor must forced positive value objective function contains term feasibility factor multiplied large constant see header file serves penalty objective grid cell observations actual data included optimal solution want avoid solutions whenever possible since lead hessian matrix upcoming optimization non full rank inputs total unique cells multidimensional grid formed combining components across estimated univariate mixture density functions assets example total assets considered univariate mixture densities levels respectively complete multidimensional grid unique cells function provides methods determining cells important needed multivariate mixture density totcells total number 
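The separable-programming device described above is the classical lambda method: each concave term -(p_i - \hat{p}_i)^2 is replaced by a convex combination of its values at fixed breakpoints. A compact statement for one cell, with the number of segments written generically as S (standing in for the DLVL constant mentioned above):

Let $x_{i,0} < x_{i,1} < \dots < x_{i,S}$ be breakpoints spanning $[0,1]$ and $f_i(x) = -(x - \hat{p}_i)^2$. Introduce weights $\alpha_{i,s} \ge 0$ with
\[
p_i = \sum_{s=0}^{S} \alpha_{i,s}\, x_{i,s}, \qquad
-(p_i - \hat{p}_i)^2 \;\approx\; \sum_{s=0}^{S} \alpha_{i,s}\, f_i(x_{i,s}), \qquad
\sum_{s=0}^{S} \alpha_{i,s} = 1 .
\]
Because each $f_i$ is concave and is being maximized (equivalently, the squared distance is minimized), an optimal solution automatically places the nonzero $\alpha_{i,s}$ on adjacent breakpoints, so no special ordered-set constraints are needed and the approximated problem remains an ordinary linear program.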
assets considered problem numa double array indexed unique cell values asset values value held position univariate mixture component level asset within unique cell recall multidimensional grid formed crossing component refers back univariate mixture density asset incellary array size numa holding univariate mixture components asset index array return number components needed fit univariate mixture component range cmps double array indexed asset component asset ranges cmps corresponding univariate mixture component probability stored indexed position prbs array size totcells holds number observations time points assigned given unique cell multidimensional grid observations assigned specific components asset using bayes decision rule assigned component highest probability membership time point processed assigned component asset defines cell multidimensional grid assigned example ncellobs implies observations fall unique cell ncellobs array estimated cell probabilities totcells derived ncellobs drive optimizations cellprob empty array true probabilities estimated lps populated function array size totcells outprbs type use given function call minimax objective minimum sum squared distances ssd objective type outputs function counts number probabilities array outprbs returns value call function also populates empty array outprbs estimated true probability unique cell close estimated values using bayes decision rule satisfies marginal constraints include int solvelp const int totcells const int numa const int incellary const int cmps const long double prbs const int ncellobs const long double cellprob double outprbs const int type initialize local variables variable decision variables given depends type lprec int int totcells int dvars double double dvars objval double double string strlabel char char dvars build model derive joint density zero covariances dvars null cout error model build something wrong setup type endl exiting solvelp endl exit add labels decision variables include cell index values minimax objective include cell index alpha index minimum squared distance objective type minimax objective int totcells long long int numa long long long long incellary vlabels char vlabels vlabels label corresponding feasibility factor decision variables come probability feasibility factor pairs strlabel vlabels char vlabels vlabels vlabels char vlabels int dvars vlabels else type sum squared distances objective int totcells int build array constants piecewise linear function endpoints dlvl segments therefore endpoints change cell build array corresponding function endpoints change cell since est prob changes epnt double fpnt epnt double cellprob labels alpha using minimum squared distance objective long long long long vlabels char vlabels vlabels label feasibility factor using minimum squared distance objective long long int numa long long long long incellary vlabels totcells char vlabels totcells totcells vlabels totcells add marginal constraints cell probabilities sum probabilities attached must equal probability asset components first constraints satisfied constraint set since probabilities sum added sum constraint thus keep component true int numa int cmps int totcells incellary type minimax objective vnum vbl else type sum squared distances objective int vnum vbl vbl vnum double prbs cout error issue marginal probability constraint failed load type endl exiting solvelp endl exit add feasibility constraints cell probabilities using minimax minimum squared distance objective individual cell 
zero observations assigned force cell probability zero relax feasible solution int int totcells type vnum vbl vnum vbl else type int vnum vbl vnum vbl vbl vnum double ncellobs cout error issue feasibility factor constraint failed load type endl exiting solvelp endl exit add inequality constraints minimax objective function objective transformed using appropriate inequality constraints type int int totcells first absolute value constraint objective function pertaining cell probability vnum int dvars vbl vnum vbl vbl vnum cellprob cout error issue minimax objective function absolute value constraint failed load endl exiting solvelp endl exit second absolute value constraint objective function pertaining cell probability vnum int dvars vbl vnum vbl vbl vnum cellprob cout error issue minimax objective function absolute value constraint failed load endl exiting solvelp endl exit sum squared distances objective requires constraint decision variables sum within cell type int int totcells int vnum vbl vbl vnum cout error issue sum squared distances constraint sum decision variables within failed load endl exiting solvelp endl exit add minimization objective output entire formulation requested false type minimax objective vnum int dvars vbl int totcells vnum vbl else type sum squared distances objective int totcells int vnum vbl vnum vbl vbl vnum cout error issue objective failed load type endl exiting solvelp endl exit dbug note uncomment write full details debugging note output large cout string endl details endl string endl stdout solve retrieve results important solve optimal cout error issue solution found something gone type endl exiting solvelp endl exit get objective function value well value unknown probabilities define multivariate density output values vbl compute probabilities minimax values first total cells decision variables min ssd objective weighted sum decision variables int totcells type outprbs vbl else type outprbs int outprbs outprbs vbl epnt write estimated probabilities along empirical values corresponding distance estimated probabilities solution probabilities debugging requested dbug add correct labels using sum squared distances objective type int totcells long long int numa long long long long incellary vlabels char vlabels vlabels output estimated probabilities along actual absolute distance values cout endl objective function type value objval endl cout solution endl endl int totcells cout outprbs actual cellprob abs diff abs outprbs endl type int int cout vbl endl else type int totcells cout vbl endl issue warning cell zero observations assigned probability int totcells cellprob outprbs cout endl warning issue unique cell observations assigned probability type endl danger likelihood function using cell density could zero time points occurs stage endl optimization eliminate decision variable unique cell probability objective function step endl causing corresponding hessian singular upper left block singular border matrix solution endl change alphas simpler solution used may many unique cells also variance endl ratio constraint may large resulting spurious solutions combined across assets resulting cells obs endl endl free memory allocated labels array delete vnum delete vbl delete epnt delete fpnt int dvars delete vlabels vlabels delete vlabels count cell probability decision variables components multivariate density returned function int totcells outprbs return copyright chris rook program free software redistribute modify terms gnu general public license published 
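As a standalone illustration of the handful of lp_solve calls that a formulation like the one above rests on (make_lp, add_constraintex, set_obj_fnex, set_minim, solve, get_variables), here is a deliberately tiny toy problem, not the program's actual model. It assumes lp_solve 5.x with lp_lib.h on the include path; variables are non-negative by default in lp_solve.

#include <cstdio>
#include "lp_lib.h"

int main()
{
    // Toy problem: minimize x1 + 2*x2 subject to x1 + x2 = 1, x1, x2 >= 0.
    lprec* lp = make_lp(0, 2);              // 0 rows, 2 columns (decision variables)
    if (lp == NULL) return 1;

    int    colno[2] = {1, 2};               // lp_solve columns are 1-based
    double row[2];

    set_add_rowmode(lp, TRUE);              // faster constraint loading
    row[0] = 1.0; row[1] = 1.0;
    add_constraintex(lp, 2, row, colno, EQ, 1.0);   // x1 + x2 = 1
    set_add_rowmode(lp, FALSE);

    row[0] = 1.0; row[1] = 2.0;
    set_obj_fnex(lp, 2, row, colno);        // objective coefficients
    set_minim(lp);                          // minimization

    if (solve(lp) == OPTIMAL) {
        double vars[2];
        get_variables(lp, vars);
        std::printf("objective = %f, x1 = %f, x2 = %f\n",
                    get_objective(lp), vars[0], vars[1]);
    }
    delete_lp(lp);
    return 0;
}

The real formulations above differ only in scale: one column per probability (or alpha/feasibility) decision variable, one equality row per marginal or feasibility constraint, and the minimax or piecewise-linear objective loaded through the same set_obj_fnex call.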
free software foundation either version license option later version program distributed hope useful without warranty without even implied warranty merchantability fitness particular purpose see gnu general public license details http filename function getcmtrx summary function ensures constraint matrix used maintain marginal mixture densities full row rank using component probabilities given solution full row rank component matrix required future optimization marginal probability densities maintained via set linear constraints multidimensional grid cell probabilities since sum probabilities must equal asset require univariate mixture components asset constraints maintain marginals note constraint last univariate component asset automatically satisfied final constraint sum cell probabilities equals assets components asset univariate mixture density total rows original constraint matrix used solve lps constraints written matrix form vector contains corresponding probabilities row vector holds unique cell probabilities decision variables multidimensional grid solved rows elements since elements therefore matrix full row rank fit many elements zero effective constraint matrix applies solution constraint matrix columns correspond zeros vector removed effective constraint matrix necessarily full row rank solved consider example case assets components corresponding univariate mixture densities scenario multidimensional grid suppose optimal solution contains probabilities diagonal grid zeros positions matrix rows initially however fit decision variables columns remain therefore effective constraint matrix dimension full row rank rows needed enforce marginal constraints given current solution rows may dropped function determines rows dropped removes producing full row rank effective constraint matrix needed upcoming optimization promote parsimony fitted multivariate mixture density components zero probabilities always permanantly eliminated point optimization dropped component permitted return multivariate density inputs number rows original constraint matrix solving either minimax minimum sum squared distances ssd totrows number probabilities decision variables solving either minimax minimum sum squared distances ssd nucmps type objective sum squared distances ssd type total number assets consideration numa array hold univariate mixture components asset ncmps array unique cell ids multidimensional grid probs solution cells used structure initial multivariate mixture pdf covariances vcids double array indexed unique cell values asset values value held position univariate mixture component level asset within unique cell recall multidimensional grid formed crossing univariate univariate component refers back univariate mixture density asset incellary double array indexed asset indicator values thru component optimal univariate mixture distribution asset values thru ncmps corresponding univariate mixture component probability held array note probabilities used construct last element vector marginals maintained via prbs empty matrix hold full row rank version given solution function derives returns corresponding matrix inlhs empty vector hold corresponding probabilities new full row rank version vector derived original vector without corresponding rows dropped make full row rank inrhs outputs function returns value call derives populates empty lhs matrix rhs vector full row rank version constraint set maintains marginals constraints linear form include void getcmtrx const int totrows const int nucmps 
const int type const int numa const int ncmps const int vcids const int incellary const long double prbs eigen inlhs eigen inrhs local variables int rnk trnk eigen olhs totrows nucmps eigen orhs totrows tvec nucmps build modified constraint matrix applies solution int numa int ncmps int nucmps olhs int incellary vcids orhs prbs int nucmps olhs orhs rank constraint matrix columns removed rnk int eigen eigen olhs full row rank modified constraint matrix must rows equal rank inlhs type eigen rnk nucmps inrhs type eigen rnk constraint matrix full row rank identify totrows rnk rows removed constraint matrix make full row rank rnk totrows remrows new int int totrows store values row set row zeros int nucmps tvec olhs recheck rank changes replace row original values otherwise leave zeros store row dropped trnk int eigen eigen olhs trnk rnk int nucmps olhs else remrows populate full row rank modified constraint matrix int totrows int remrows int nucmps inlhs type inrhs type print modified constraint matrix debugging dbug cout endl modified full lhs constraint matrix endl inlhs type endl cout endl modified rhs constraint vector endl inrhs type endl int chkrnk chkrnk int eigen eigen inlhs type cout endl rank modified constraint matrix chkrnk endl free temporary memory allocations delete remrows copyright chris rook program free software redistribute modify terms gnu general public license published free software foundation either version license option later version program distributed hope useful without warranty without even implied warranty merchantability fitness particular purpose see gnu general public license details http filename function ecmealg summary extension ecme algorithm liu rubin implemented function multivariate mixture likelihood optimized respect mixing proportions covariances means variances held fixed maintain mixture marginals step optimization convex mixing proportions constrained maintain mixture marginals corresponding lagrangian formed step optimization zeros determined iteratively using newton method resulting mixture proportions unique global optimizers step likelihood parameters fixed stage step convergence achieved processing passed step likelihoood maximized respect covariances optimization constrained corresponding estimated matrices matrix per multivariate density component also convex may multiple local optimums likelihood function covariances unknown constraint convex goal step find largest local optimum given means variances mixing proportions fixed covariances unknown attempt climb top current hill local optimum using gradient hessian also searching larger hills general direction steepest ascent see marquardt considered compromise strictly applying newton method gradient ascent useful newton method overshoots single step lands infeasible region progress made step return step newly estimated covariances repeat optimization mixing proportions convergence achieved step fails improve likelihood function returned step steps iterative corresponding gradients hessians updated repeatedly single corresponding iteration extended ecme algorithm solutions found considered spurious differs search solution dealing univariate mixtures due fact variances already fixed change corresponding variance ratio constraint specified user remains force justify approach noting commercial software packages sas use based algorithm find mle univariate mixture density instead algorithm proven extended ecme method used guarantee convergence largest local optimum located nearest local optimum 
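The row-dropping idea used by getcmtrx (zero a row, recheck the rank, restore the row only if the rank fell) can be written compactly with Eigen's FullPivLU rank computation. The sketch below uses the hypothetical name reduceToFullRowRank and works on a generic constraint pair (A, b); it illustrates the technique rather than reproducing the program's interface.

#include <Eigen/Dense>
#include <vector>

// Remove linearly dependent rows of A (and the matching entries of b) so the
// remaining constraint matrix has full row rank.
void reduceToFullRowRank(const Eigen::MatrixXd& A, const Eigen::VectorXd& b,
                         Eigen::MatrixXd& Aout, Eigen::VectorXd& bout)
{
    const Eigen::Index target = Eigen::FullPivLU<Eigen::MatrixXd>(A).rank();
    Eigen::MatrixXd work = A;
    std::vector<Eigen::Index> keep;

    for (Eigen::Index r = 0; r < work.rows(); ++r) {
        Eigen::RowVectorXd saved = work.row(r);
        work.row(r).setZero();
        if (Eigen::FullPivLU<Eigen::MatrixXd>(work).rank() < target) {
            work.row(r) = saved;      // row carries independent information: keep it
            keep.push_back(r);
        }                             // otherwise leave it zeroed (it is redundant)
    }

    Aout.resize(static_cast<Eigen::Index>(keep.size()), A.cols());
    bout.resize(static_cast<Eigen::Index>(keep.size()));
    for (std::size_t i = 0; i < keep.size(); ++i) {
        Aout.row(static_cast<Eigen::Index>(i)) = A.row(keep[i]);
        bout(static_cast<Eigen::Index>(i))     = b(keep[i]);
    }
}

The greedy pass keeps exactly rank(A) rows, so the reduced system enforces the same marginal constraints while satisfying the full-row-rank requirement of the later optimization.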
vicinity informed start important note step imposes equality constraints maintain mixture marginals constraints mixing proportions probabilities explicitly imposed therefore negative probabilities may maximize step likelihood function step likelihood function treats mixing probabilities unknowns parameters means variances covariances known constants components require negative probability maximize likelihood function dropped step entire problem resized accordingly fewer components may result likelihood decreases however overall objective balance parsimony maximizing likelihood function inputs total number time points data collected double array returns asset time point indexed number assets returns collected numa number unique components multivariate mixture results either minimax minimum ssd optimizations multivariate mixture fixed means variances function optimize mixing probabilities covariance terms covariance terms begin optimization zero nucmps lhs matrix required enforce constraint marginal density asset equals fixed univariate mixture matrix built optimization resized accordingly ensure full row rank lhs matrix column component multivariate density cmtrx rhs vector required enforce constraint marginal density asset equals fixed univariate mixture vector built optimization contains marginal mixture component probabilities asset less last probability asset fixed others fixed asset cvctr array multivariate mixture probabilities returned corresponding optimization either minimax minimum ssd converted vector within program functions require values stored vector muprbs note probabilities passed array array mean vectors component multivariate mixture density multivariate density function set means first element array vector means first multivariate component etc means supplied function fixed change required maintain marginals exception components may dropped component dropped corresponding mean vector component dropped mumns array matrices component multivariate mixture density multivariate density function corresponding matrix matrix dimension numa numa diagonals matrix corresponding variances asset within component variances supplied function fixed change required maintain marginals exception components may dropped component dropped corresponding matrix component dropped muvcs array unique cell ids link component multivariate density back full factorial components full factorial components represents cell multidimensional grid formed considering combinations assets levels note full factorial would required build multivariate mixture density given marginals assumption assets mutually independent random variables ucellids string hold directory output file resides rdir outputs function updates arrays multivariate mixture probabilities muprbs mean vectors mumns matrices muvcs note mumns updated components dropped muvcs updated components dropped covariances estimated function returns total number unique multivariate mixture components final density include int ecmealg const int const long double const int numa const int nucmps const eigen cmtrx const eigen cvctr long double muprbs eigen mumns eigen muvcs int ucellids const string rdir local variables long long mhessmag int ecnvg mcnvg cnvg int itr ncovs int boost nupdts int ncormult ncores int nthrds eitrs mitrs sumval nbeats nthrds int mtch npos long double long double long double double log long double ucmps curmlt oldll long double nthrds long double ell mll lbound sumll uval cnum eigen eigen eigen eigen eigen eigen eigen eigen eigen 
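To summarize the proportion-update step described above in symbols (notation mine, matching the quantities described above: f_c(y_t) the fixed component likelihood at time t, A pi = b the marginal constraints, lambda the multipliers):

\[
\mathcal{L}(\pi,\lambda) \;=\; \sum_{t=1}^{T} \log\!\Big(\sum_{c=1}^{U} \pi_c\, f_c(y_t)\Big) \;+\; \lambda^{\mathsf T}\,(A\pi - b),
\]
and each Newton iteration on the stationarity conditions solves a bordered (KKT) system of the form
\[
\begin{pmatrix} H & A^{\mathsf T}\\ A & 0 \end{pmatrix}
\begin{pmatrix} \Delta\pi \\ \Delta\lambda \end{pmatrix}
= -\begin{pmatrix} \nabla_\pi \mathcal{L} \\ A\pi - b \end{pmatrix},
\]
where H is the Hessian of the log-likelihood in pi. Because the objective is concave and the constraints are linear, a stationary point of the Lagrangian is the global maximizer of this subproblem, which is why a nested Newton iteration can stand in for a full search in this step.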
eigen nthrds eigen eigen eigen nucmps eigen eigen eigen eigen int numa eigen eigen eigen string lblvar ndef boost boost nthrds gen long double udist eigen eigen egnslvr normslvr normslvrt ofstream fout populate array vectors returns time point indexed ease computations int fvals new long double nucmps rts numa int numa rts derive inverses corresponding determinants int ucmps vcminv numa numa vcminv sqdets vcminv populate covariance identifier matrices use step int numa int numa itr numa numa int numa int numa itr else itr initialize decision variables step probabilities set values determined solving corresponding lagrange multipliers initialized zeros int dvarse int ucmps dvarse int dvarse populate double array likelihood values timepoint component multivariate likelihood observation also stored array dnom component zero likelihood time points variable exist objective function treated constant moved right hand side constraint problem needs resized accordingly check made via function call chksum numa ucmps rts dvarse mumns vcminv sqdets picst dnom fvals cout initial value endl endl chksum ucmps const long double fvals build corresponding hessian lagrangian matrix stored hesse building hessian iterate invertible multiplying constraint lhs rhs constant value multiple full rank note matrix large small eigenvalues may may computational issues attempting invert hesse ucmps const long double fvals dnom cmtrx cvctr hesse lhs rhs build gradient lagrangian using modifed constraint required grade getgrade ucmps const long double fvals dnom lhs rhs dvarse grade initialize arrays used reused step int nthrds int long double tmpdvarsm eigen iterate using ecme algorithm convergence iterate update probabilities step maximization problem concave objective convex constraints stationary points lagrangian therefore taken global optimizers determined using newton method hessian bordered matrix invertible certain met conditions sections step constrained maximization problem multiple local optimums attempt find largest local optimum using iterative technique steps general direction steepest ascent ecme step cout endl step start ecme algorithm ecmeitr beginning cout oldll endl endl string iterating step iteration counter cout solve new component probabilities decision variables step dvarse grade dvarse dvarse check zero negative probabilities prepare next iteration int ucmps dvarse resize problem needed perform another iteration int rhs update constraint undo multiplier adjust size multivariate component probabilities set zero tmpcmtrx nlms ucmps int int lhs int int lhs dvarse tmpcmtrx lhs tmpcvctr nlms tmpcvctr rhs component dropped update mean vectors matrices unique cell ids int int lhs dvarse mumns itr muvcs itr vcminv itr itr sqdets itr vcminv itr ucellids resize internal array holds likelihood values timepoint component components changes int delete fvals fvals long double ucmps update decision variable vectors dropping relevant rows tmpdv int dvarse tmpdv delete dvarse eigen int dvarse int int lhs tmpdv dvarse int int lhs int tmpdv dvarse delete tmpdv eigen update density function values grid likelihood function values numa ucmps rts dvarse mumns vcminv sqdets picst dnom fvals check step convergence need component probabilities unchanged otherwise rebuild iterate oldll epsilon oldll int ucmps muprbs step convergence iterate ecnvg reset valid negative probabilities rebuild hessian delete lhs eigen delete rhs eigen delete hesse eigen hesse ucmps const long double fvals dnom tmpcmtrx tmpcvctr hesse lhs rhs 
rebuild gradient delete temporary memory allocations delete grade eigen grade getgrade ucmps const long double fvals dnom lhs rhs dvarse grade delete tmpcmtrx eigen delete tmpcvctr eigen ecnvg cout done endl endl step done ecme algorithm ecmeitr converged eitrs iterations cout ell endl endl ecme step int ucmps numa ncovs int dvarsm ncovs getcovs ucmps muvcs dvarsm dvarsm gradm ncovs hessm ncovs ncovs ncovs ncovs pinv ncovs ncovs int nthrds int tmpdvarsm ncovs cout endl step start ecme algorithm ecmeitr beginning cout oldll endl endl step iteration counter build gradient step getgradm rts ucmps numa const long double fvals dnom muprbs mumns muvcs vcminv gradm new build hessian step find length digits element largest magnitude long long gethessm rts ucmps numa const long double fvals dnom muprbs mumns muvcs vcminv hessm write max eigenvalue condition eigenvalues hessian derived hessm true pinv false pinv pinv false int int hessm abs abs cnum sqrt sqrt cout endl string total eigenvalues npos hessian condition cnum endl function stephessm uses hessian step direction gradient step add random constant diagonal larger random constants translating smaller steps smaller random constants translating larger steps eigen run long long mitrs threads launched cout lblvar int nthrds int mhessmag long double minhessadd mitersh tmpdvarsm tmpdvarsm stephessm boost boost boost rts boost tmpdvarsm gradm hessm dvarse boost mumns boost muvcs conditionally output line feed alignment spaces threads successfully launched dbug sleep int nthrds sumval nthrds cout string oldll endl string threads finished sleep int nthrds cout sumval nthrds pause threads finish int nthrds randomly select one top nbeats performers begin next iteration weight values favor higher values value nbeats set header file values exceed current considered beats int maxhll int nthrds values returned less existing maximum oldll cout error ecme algorithm step stepping function returned value inferior current maximum happen endl must inspect fix thread endl current maximum oldll endl maximum value returned stepping function endl exiting ecmealg endl exit find process improvements int nthrds int deal ties oldll maxhll oldll itr maxhll itr store given beat beat itr store thread index given beat sumll sumll maxhll itr sum magnitude improvements record randomly select one beats begin next step iteration nupdts uval udist gen sumll lbound sumll maxhll uval lbound mtch else sumll sumll maxhll mtch dvarsm beat else dbug cout done maxhll rrs beat endl endl randomly chosen update inverse matrices along vector corresponding determinants int ucmps setcovs muvcs dvarsm vcminv sqdets vcminv numa ucmps rts dvarse mumns vcminv sqdets picst dnom fvals check max equals beat value chosen abs maxhll epsilon maxhll cout error ecme algorithm derived equal beat chosen randomly happen must inspect fix endl value maxhll maxhll endl value endl exiting ecmealg endl exit check convergence step oldll pow epsilon oldll dvarsm mcnvg cout endl step done ecme algorithm ecmeitr converged mitrs iterations new cout mll endl endl check ecme convergence step improve step likelihood prepare another step iteration convergence eigenvalues check concavity mll ell pow epsilon ell free temporary memory allocations delete gradm eigen delete hessm eigen delete eigen delete pinv eigen delete dvarsm eigen delete eigen int nthrds delete tmpdvarsm tmpdvarsm eigen cnvg populate dnom fvals arrays numa ucmps rts dvarse mumns vcminv sqdets picst dnom fvals chksum ucmps const long double fvals build 
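The stepping scheme used above for the covariance update is Marquardt-style: a constant is added to the Hessian diagonal before solving, with larger constants giving shorter, more gradient-like steps and a zero constant giving the plain Newton step. A minimal Eigen sketch of one such damped step (dampedNewtonStep is a hypothetical helper, not the program's stephessm; the sign convention assumes a negative-definite log-likelihood Hessian near a maximum, so the damping is subtracted to keep the shifted matrix negative definite):

#include <Eigen/Dense>

// One damped ascent step for maximizing a log-likelihood: solve (mu*I - H) * delta = g.
// mu = 0 reproduces the Newton step for a negative-definite Hessian H; large mu
// shrinks the step toward a short move along the gradient g.
Eigen::VectorXd dampedNewtonStep(const Eigen::MatrixXd& H,
                                 const Eigen::VectorXd& g,
                                 double mu)
{
    const Eigen::MatrixXd M =
        mu * Eigen::MatrixXd::Identity(H.rows(), H.cols()) - H;
    return M.ldlt().solve(g);          // delta; the caller forms x_new = x + delta
}

Repeating this step over many randomly drawn damping values, as the threaded loop above does, and keeping the candidate with the largest resulting likelihood is what lets the search both climb the nearest hill and occasionally jump toward a larger one.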
hessian gradient delete lhs eigen delete rhs eigen delete hesse eigen hesse ucmps const long double fvals dnom tmpcmtrx tmpcvctr hesse lhs rhs delete grade eigen grade getgrade ucmps const long double fvals dnom lhs rhs dvarse grade delete temporary memory allocations delete tmpcmtrx eigen delete tmpcvctr eigen processor cool dbug cout endl processor cool double minutes endl sleep cdown cnvg cout ecme algorithm converged ecmeitr iterations maximum endl fout ecme algorithm converged ecmeitr iterations maximum endl delete temporary memory allocations delete dnom delete sqdets delete grade delete gradm delete rts delete vcminv delete hesse delete hessm delete delete pinv delete dvarse delete dvarsm delete lhs delete rhs delete tmpcmtrx delete tmpcvctr delete tmpdv delete delete beat delete maxhll int delete fvals fvals delete fvals int nthrds delete delete delete tmpdvarsm tmpdvarsm delete delete delete tmpdvarsm delete count return final number unique cell probabilities solution return ucmps copyright chris rook program free software redistribute modify terms gnu general public license published free software foundation either version license option later version program distributed hope useful without warranty without even implied warranty merchantability fitness particular purpose see gnu general public license details http filename function getlfvals summary function decomposes multivariate mixture likelihood grid values time vertical axis component horizontal axis cell dimensional grid likelihood value data timepoint using corresponding multivariate density function component function computes value grid stores value array supplied parameter addition values row summed using component probabilities weights value multivariate mixture likelihood given time point values derived stored single array supplied parameter lastly log values computed parameter taken summed across time points value data using supplied multivariate mixture density value call inputs total number time points data collected number assets returns numa number unique components multivariate mixture initial value either minimax minimum ssd optimizations inucmps array vector returns time point indexed vectors returns size numa assets current vector multivariate mixture probabilities initial values either minimax minimum ssd optimizations uprbs array mean vectors component multivariate mixture density multivariate density function set means first element array vector means first multivariate component etc inmns array inverse matrices component multivariate mixture density multivariate density function corresponding matrix matrix invertible dimension numa numa diagonals matrix corresponding variances asset within component invcis array square roots determinants inverse matrices parmeter term required construct multivariate normal density insqs parameter equals numa total assets application inpicst single array values sum double array parameter across components time point weighting component corresponding estimated probability value array multivariate mixture likelihood value data individual time point parameter supplied empty function denoms double array likelihood values indexed time component time point likelihood component computed stored double array reuse forms dimensional grid values size txu time points components parameter supplied empty function lfvals outputs function returns value call also populates incoming arrays denoms see parameter lfvals parameter include long double getlfvals const int const int numa const 
int inucmps const eigen const eigen uprbs const eigen inmns const eigen invcis const long double insqs const long double inpicst long double denoms long double lfvals local variables long double populate containers grid component likelihoods evaluated time point array full likelihood values evaluated time point int denoms int inucmps lfvals inmns invcis insqs inpicst denoms uprbs lfvals denoms denoms else return value return copyright chris rook program free software redistribute modify terms gnu general public license published free software foundation either version license option later version program distributed hope useful without warranty without even implied warranty merchantability fitness particular purpose see gnu general public license details http filename function getgrade summary function derives gradient ecme algorithm step optimization convex decision variables multivariate mixture component probabilities means variances covariances treated constants objective maximize corresponding function subject constraints marginal densities fixed known univariate mixtures marginal constraints enforced via linear functions decision variables component probabilities incorporating constraints objective form lagrangian stationary points lagrangian unique global optimizers constrainted convex optimization problem points found applying newton method lagrangian newton method requires gradient hessian lagrangian constructed iteration optimization problem converges fails improve function compute gradient lagrangian step gradient vector partial derivatives lagrangian number elements sum unique components multivariate mixture probabilities constraints lagrange multipliers inputs total number time points data collected number unique components multivariate mixture initial value either minimax minimum ssd optimizations inucmps double array likelihood values indexed time component time point likelihood component computed stored reuse forms grid values size txu time points components infvals single array values sum double array parameter across components time point weighting likelihood value corresponding estimated component probability therefore value array multivariate mixture likelihood value data individual time point indnoms lhs matrix needed enforce marginal mixture density constraints constraints linear component probabilities therefore represented using lhs matrix rhs vector matrix may necessary multiply sides constraint constant factor make corresponding hessian full rank computationally columns equal components rows equal constraints lagrange multipliers needed maintain marginal univariate mixtures inlhs rhs constraint vector required enforce fixed marginal density constraints using actual univariate marginal mixture probabilities constraints needed ensure given fixed mixture marginals add multivariate component probabilities equal given marginal mixture probabilities component asset vector scaled corresponding lhs matrix scaled ensure hessian lagrangian full rank computationally inrhs current values decision variables values probabilities needed given double array infvals however need current values lagrange multipliers change iteration therefore pull vector decision variables passed via parameter indvars empty gradient vector filled function vector dimension equal sum unique components lagrange multipliers lagrange multipliers equals rows lhs constraint matrix equals constraints ingrad outputs function populates empty gradient vector supplied return output function call include void 
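The likelihood grid built above rests on evaluating each component's multivariate normal density with a precomputed inverse covariance and the square root of its determinant, then summing the probability-weighted row over components and accumulating logs over time points. A minimal Eigen sketch of those two operations (mvnPdf and mixtureLogLik are hypothetical names; the program's own getlfvals additionally stores the per-cell grid for reuse):

#include <Eigen/Dense>
#include <cmath>
#include <vector>

// Multivariate normal density evaluated with a precomputed inverse covariance
// (vcinv) and the square root of its determinant (sqdet = sqrt(det(vcinv))).
double mvnPdf(const Eigen::VectorXd& y, const Eigen::VectorXd& mu,
              const Eigen::MatrixXd& vcinv, double sqdet)
{
    static const double two_pi = 2.0 * std::acos(-1.0);
    const double d = static_cast<double>(y.size());
    const Eigen::VectorXd r = y - mu;
    const double quad = r.dot(vcinv * r);
    return std::pow(two_pi, -d / 2.0) * sqdet * std::exp(-0.5 * quad);
}

// Mixture log-likelihood: sum over time points of the log of the
// probability-weighted component densities (the row sums of the t-by-c grid).
double mixtureLogLik(const std::vector<Eigen::VectorXd>& y,
                     const std::vector<double>& prob,
                     const std::vector<Eigen::VectorXd>& mu,
                     const std::vector<Eigen::MatrixXd>& vcinv,
                     const std::vector<double>& sqdet)
{
    double ll = 0.0;
    for (const Eigen::VectorXd& yt : y) {
        double denom = 0.0;
        for (std::size_t c = 0; c < prob.size(); ++c)
            denom += prob[c] * mvnPdf(yt, mu[c], vcinv[c], sqdet[c]);
        ll += std::log(denom);
    }
    return ll;
}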
getgrade const int const int inucmps const long double infvals const long double indnoms const eigen inlhs const eigen inrhs const eigen indvars eigen ingrad local variables int int gradient partials wrt probabilities int inucmps ingrad int ingrad infvals int int ingrad indvars inlhs gradient partials wrt multipliers int ingrad int inucmps ingrad double inlhs indvars ingrad ingrad inrhs copyright chris rook program free software redistribute modify terms gnu general public license published free software foundation either version license option later version program distributed hope useful without warranty without even implied warranty merchantability fitness particular purpose see gnu general public license details http filename function gethesse summary function derives hessian ecme algorithm step optimization convex decision variables multivariate mixture component probabilities means variances covariances treated constants optimization ecme step objective maximize corresponding function subject constraints marginal densities fixed known univariate mixtures marginal constraints enforced via linear functions decision variables component probabilities incorporating constraints objective form lagrangian stationary points lagrangian unique global optimizers constrained convex optimization problem points found applying newton method lagrangian newton method requires gradient hessian lagrangian constructed iteration optimization problem converges fails improve zero lagrangian function compute hessian lagrangian step hessian border matrix since derivative wrt lagrange multipliers always zero therefore block matrix zeros lower right corner border matrix invertible certain conditions block matrices border zero block conditions met optimization however may necessary inflate constraint matrix using constant larger would needed hessian using indicator variables enforce constraints inputs total number time points data collected number unique components multivariate mixture initial values either minimax minimum ssd optimizations nucmps double array likelihood values indexed time component time point likelihood component computed stored reuse forms grid values size txu time points components infvals single array values sum double array parameter across components time point weighting component corresponding estimated probability therefore value array multivariate mixture likelihood value data individual time point indnoms lhs matrix needed enforce marginal mixture density constraints constraints linear decision variables component probabilities therefore represented using lhs matrix rhs vector matrix may multiplied constant ensure hessian returned full rank computationally columns equal components rows equal constraints lagrange multipliers required ensure marginals match fixed mixtures found earlier matrix must full rank therefore multivariate components set zero check remains full rank force full rank removing rows one time code yet implemented problem encountered function getcmtrx used perform task incmtrx rhs constraint vector required enforce fixed marginal density constraints using actual marginal probabilities constraints needed ensure given fixed mixture marginals add multivariate component probabilities equal given marginal mixture probabilities component asset vector scaled corresponding lhs matrix scaled ensure hessian lagrangian invertible incvctr empty hessian matrix filled function matrix square dimension equal unique components plus lagrange multipliers lagrange multipliers equals rows 
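In the notation used above (f_c(y_t) the component-c likelihood at time t, D_t the stored per-time-point denominator, A and b the scaled constraint matrix and vector), the gradient assembled by getgrade is, up to the ordering of its entries,

\[
\frac{\partial \mathcal{L}}{\partial \pi_c}
 \;=\; \sum_{t=1}^{T} \frac{f_c(y_t)}{D_t} \;+\; \big(A^{\mathsf T}\lambda\big)_c,
 \qquad c = 1,\dots,U,
\qquad\quad
\frac{\partial \mathcal{L}}{\partial \lambda_r}
 \;=\; \big(A\pi - b\big)_r, \qquad r = 1,\dots,R ,
\]

with D_t = \sum_c \pi_c f_c(y_t). The first block matches the "partials with respect to probabilities" loop above and the second the "partials with respect to multipliers" loop; this is stated here as a reading aid rather than quoted from the program.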
lhs constraint matrix equals constraints inhess empty matrix filled updated lhs constraint matrix scaled ensure resulting hessian invertible noted hessian lagrangian border matrix block zeros lower right corner upper right corner constraint matrix upper left corner hessian original objective function without lagrange multipliers rare cases large values upper left matrix coupled upper right matrix cause matrix conditioned therefore invertible found solution scale constraint matrix large constant multiple lhs rhs constraint given large constant fixes singularity hessian note gradient also uses constraints therefore constraint matrix scaled must use scaled version constructing gradient parameter returns scaled lhs constraint matrix note scale factor returned function inlhs empty vector filled updated rhs constraint values constraint matrix scaled invertible inrhs outputs function returns multiplier used scale constraint matrix vector ensure resulting hessian computationally invertible also populates empty hessian matrix supplied along scaled lhs matrix rhs vector include long double gethesse const int const int inucmps const long double infvals const long double indnoms const eigen incmtrx const eigen incvctr eigen inhess eigen inlhs eigen inrhs local variables int rnk int long double eigen eigen ulhess inucmps inucmps hessian upper left int inucmps int inucmps inhess int inhess inhess infvals infvals indnoms inhess inhess ulhess inhess ulhess proceeding sections check hessian full rank put warning may may prevent optimization working often prevent optimization working int eigen eigen ulhess rnk inucmps dbug cout endl warning hessian matrix singular may prevent component probabilities optimized endl happen various reasons two endl likelihood single component zero time points eliminates decision variable endl upper left matrix large small elements different diagonal positions endl message gethesse endl hessian build upper right lower left lower right sections hessian iterate entire hessian full rank newton method may applied may require multiplying constraints constant lhs rhs first make sure constraint matrix full rank since components may dropped step int eigen eigen incmtrx rnk int cout error detection rank constraint matrix rank ecme step likely due endl components probabilities dropped function getcmtrx used fix endl sequentially removing linearly dependent rows constraint matrix becomes full rank endl exiting gethesse endl exit inlhs int inucmps inlhs incmtrx inrhs int inrhs incvctr hessian upper right lower left int inucmps int inhess inlhs inhess hessian lower right int int inhess inhess proceeding ensure entire hessian full rank int eigen eigen inhess another iteration needed update multiplier constraints rnk inrhs inrhs inlhs inlhs rnk mult lposval entire hessian full rank put warning exit optimization may still succeed rnk dbug cout endl warning step hessian matrix singular may prevent multivariate component probabilities optimized endl happen various reasons including endl large small eigenvalues resulting matrix endl components dropped constraint matrix longer full row rank note already checked endl message gethesse endl free temporary memory allocations delete ulhess return multiplier used correct hessian return mult copyright chris rook program free software redistribute modify terms gnu general public license published free software foundation either version license option later version program distributed hope useful without warranty without even implied warranty 
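The bordered structure that gethesse builds, and that the constraint-scaling trick is meant to keep computationally invertible, can be written in the same notation as

\[
H_{\mathrm L} \;=\;
\begin{pmatrix} H & A^{\mathsf T} \\ A & 0 \end{pmatrix},
\qquad
H_{cd} \;=\; \frac{\partial^2 \mathcal{L}}{\partial \pi_c\,\partial \pi_d}
 \;=\; -\sum_{t=1}^{T} \frac{f_c(y_t)\, f_d(y_t)}{D_t^{\,2}} .
\]

A bordered matrix of this form is nonsingular when A has full row rank and the upper-left block is definite on the null space of A, which is why the code both rescales the constraints and rechecks the rank before attempting the Newton solve; a component whose likelihood is zero at every time point zeroes an entire row and column of H and is one way the block can become singular.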
merchantability fitness particular purpose see gnu general public license details http filename function getgradm summary function derives gradient ecme algorithm step optimization evidently convex decision variables covariances general mixture density likelihoods concave functions many local optimums dealing multivariate mixture density means variances component probabilities treated constants ecme step optimization objective maximize corresponding function covariances unknown subject constraints matrices positive definite seek covariances maximize multivariate mixture function matrix component positive definite means variances component probabilities fixed known points found applying modified newton method covariances unknown newton method requires gradient hessian constructed iteration gradient vector first order partial derivatives wrt covariance term derived function hessian matrix second order partial derivatives wrt covariance terms problem total assets returns measured components multivariate mixture density total unique covariance terms require estimation clearly problem suffers curse dimensionality work best limited number assets relative number observations time points optimization problem converges fails improve method used find largest local optimum due marquardt uses hessian step general direction gradient adding constant hessian diagonals prior solving updating equation practice iterate large number random step sizes searching function maximizer maximizer found recompute gradient hessian iterate note large additive quantities added hessian diagonal translate small steps small quantities translate large steps additive factor zero translates using newton method without modification assumed overshoot local optimizer method approprate strictly applying newton method overshoots goal find largest local optimum vicinity carefully constructed starting point solution also search outside current attempt find better solution constraints resulting matrices enforced implictly step resulting matrix decomposed eigenvalues inspected none resulting matrix positive definite otherwise ridge repair immediately performed stepping continues using repaired matrix general find large regions covariance set stepping proceeds without need repairs large regions covariance set repairs needed step matrix multivariate component examined repaired necessary function ridgerpr feasible region covariance set results component matrices valid positive definite inputs total number time points data collected array vector returns time point indexed vectors returns size numa assets number unique components multivariate mixture initial value either minimax minimum ssd optimizations inucmps number assets returns collected numa double array likelihood values indexed time component time point likelihood component computed stored reuse forms grid values size txu time points components infvals single array values sum double array parameter across components time point weighting likelihood value corresponding estimated component probability therefore value array multivariate mixture likelihood value data individual time point indnoms current array multivariate mixture probabilities initial values either minimax minimum ssd optimizations inprbs array mean vectors component multivariate mixture density multivariate density function set means first element array vector means first multivariate component etc mumns array matrices component multivariate mixture density multivariate density function corresponding matrix matrix 
dimension numa numa diagonals matrix corresponding variances asset within component array matrix inverses component multivariate mixture density multivariate density function corresponding matrix matrix dimension numa numa diagonals matrix corresponding variances asset within component parameter holds corresponding array inverses matrices einv array identifier matrices covariance term matrix decomposed sum matrix diagonal elements term unique covariance constant matrix multiplied covariance term element location corresponding covariance term constant matrices contained array constant matrices identical across components reused matrices number unique covariance terms single multivariate mixture numa ina empty gradient vector filled function vector dimension equal total covariances problem problem total assets returns measured components multivariate mixture density total covariance terms ingrad outputs function populates empty gradient vector supplied return output function call include void getgradm const int eigen const int inucmps const int numa const long double infvals const long double indnoms const long double inprbs eigen mumns const eigen const eigen einv const eigen ina eigen ingrad int chk local variables int itra long double qtijk eigen eigen ejk numa numa populate gradient vector int inucmps int numa int numa getcofm numa ejk ingrad itr int qtijk einv ina itra einv ejk ingrad itr itr inprbs infvals qtijk delete temporary memory allocations delete ejk copyright chris rook program free software redistribute modify terms gnu general public license published free software foundation either version license option later version program distributed hope useful without warranty without even implied warranty merchantability fitness particular purpose see gnu general public license details http filename function gethessm summary function derives hessian matrix ecme algorithm step optimization evidently convex decision variables covariances general mixture density likelihood function strictly concave many local optimums dealing multivariate mixture density means variances component probabilities fixed ecme step optimization objective maximize corresponding function covariances unknown subject constraints resulting matrices positive definite seek covariances maximize multivariate mixture function matrix component positive definite means variances component probabilities fixed points found applying modified newton method covariances unknown newton method requires gradient hessian constructed iteration gradient vector first order partial derivatives wrt covariance term derived function getgradm hessian matrix second order partial derivatives wrt covariance terms derived function problem total assets returns measured components multivariate mixture density total unique covariance terms require estimation problem suffers curse dimensionality work best limited number assets relative observations time points optimization problem converges fails improve method used find largest local optimum due marquardt uses hessian step general direction gradient adding constant hessian diagonals prior solving updating equation practice iterate large number random step sizes searching best local global function maximizer maximizer found recompute gradient hessian iterate note large additive quantities translate small steps small quantities translate large steps additive factor zero translates using newton method without modification method appropriate strictly applying newton method overshoots goal find 
largest local optimum vicinity carefully constructed starting point solution using small step sizes also search outside current attempt find better solution using large step sizes constraints resulting matrices enforced implictly step resulting matrix decomposed eigenvalues inspected none resulting matrix positive definite otherwise ridge repair immediately performed stepping continues using repaired matrix general find large regions covariance set stepping proceeds without need repairs large regions covariance set repairs needed step matrix multivariate component examined repaired necessary function ridgerpr feasible region covariance set results component matrices valid positive definite inputs total number time points data collected array vector returns time point indexed vectors returns size numa assets number unique components multivariate mixture initial value either minimax minimum ssd optimizations inucmps number assets returns collected numa double array likelihood values indexed time component time point likelihood component computed stored reuse forms grid values size txu time points components infvals single array values sum double array parameter across components time point weighting likelihood value corresponding estimated component probability therefore value array multivariate mixture likelihood value data individual time point indnoms current array multivariate mixture probabilities initial values either minimax minimum ssd optimizations inprbs array mean vectors component multivariate mixture density multivariate density function set means first element array vector means first multivariate component etc mumns array matrices component multivariate mixture density multivariate density function corresponding matrix matrix dimension numa numa diagonals matrix corresponding variances asset within component array matrix inverses component multivariate mixture density multivariate density function corresponding matrix matrix dimension numa numa diagonals matrix corresponding variances asset within component parameter holds inverses matrices einv array identifier matrices covariance term matrix decomposed sum matrix diagonal elements term unique covariance constant matrix multiplied covariance term element location corresponding covariance term constant matrices contained array constant matrices identical across components reused matrices number unique covariance terms single multivariate mixture numa ina empty hessian matrix filled function matrix square dimension equal total covariances problem problem total assets returns measured components multivariate mixture density total covariance terms inhess internal variables note hessian element partial wrt sigma another partial wrt sigma component index partial derivative component index partial derivative paired index identifies covariance term component whereas paired index identifies covariance term component note covariance term equivalent covariance term therefore also assume itrajk index indicator matrix array covariance term partial itrars index indicator matrix array covariance term partial fti product likelihood value observations time using density component corresponding component probability partial derivative fti defined wrt covariance term component note fti constant since contain covariance term function ftp product likelihood value observations time using density component corresponding component probability qtijk extra term arises numerator differentiating density component wrt covariance term qtprs extra term 
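For concreteness, the curse-of-dimensionality remark above amounts to the following count of covariance decision variables in this CM-step; the numbers in the worked example are illustrative, not taken from the program:

\[
\#\{\text{covariance decision variables}\} \;=\; U\,\binom{A}{2} \;=\; U\,\frac{A(A-1)}{2},
\qquad \text{e.g. } A=10,\ U=4 \;\Rightarrow\; 4\cdot 45 = 180 .
\]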
arises numerator differentiating density component wrt covariance term partial derivative qtijk wrt sigma overall likelihood data points using full multivariate mixture density partial derivative wrt sigma note assets distinct covariance terms within component total distinct covariance terms outputs function populates empty hessian matrix supplied magnitude largest element returned call used help determine best step size include long double gethessm const int eigen const int inucmps const int numa const long double infvals const long double indnoms const long double inprbs const eigen mumns const eigen const eigen einv const eigen ina eigen inhess local variables int itrajk itrars long double fti ftp qtijk qtprs eigen eigen eigen eigen eigen eijk numa numa eprs numa numa ejkrs numa numa ejksr numa numa eigen numa numa populate hessian matrix int inucmps int numa int numa covariance term fixed cov hessian column entry indicator thru getcofm numa eijk build eijk int inucmps int numa int numa covariance term fixed cov unconditional quantities functions time getcofm numa eprs build ers conditional quantities functions time covariance term component getcofm numa eijk ejkrs build ejkrs conditionally getcofm numa eijk ejksr build ejksr conditionally eijk eprs ejksr ejkrs ina itrajk einv ina itrars einv einv ina itrars einv ina itrajk einv initialize element inhess iterate time dimension int derive unconditional quantities functions time infvals einv ina itrajk einv eijk infvals einv ina itrars einv eprs derive conditional quantities functions time including hessian value correlation term component qtprs qtprs partial wrt term diagonal else partial wrt term diagonal else partial wrt term diagonal inhess fti qtijk fti qtijk else inhess fti qtijk ftp qtprs populate corresponding element diagonal inhess hessian column entry indicator thru delete temporary memory allocations delete eijk delete eprs delete ejkrs delete ejksr return magnitude largest element int int inhess int int inhess abs inhess maxmag inhess return maxmag copyright chris rook program free software redistribute modify terms gnu general public license published free software foundation either version license option later version program distributed hope useful without warranty without even implied warranty merchantability fitness particular purpose see gnu general public license details http filename function stephessm summary function steps general direction steepest ascent multivariate mixture likelihood maintains marginal mixture densities using marquardt multivariate mixture likelihood fixed means variances component probabilities unknown covariances multivariate mixture therefore function unknown covariances seek maximize problem function multiple local optimums using marquardt constant term random size diagonals correpsonding hessian matrix prior solving updating equations implementing newton method adding large constant results taking small step adding small constant results taking large step method allow take small steps towards local optimum current hill simultaneously search larger hills general direction steepest ascent minimum step size set zero header file see minhessadd maximum step size set digits maximum hessian element within range step size randomly generated iteration stepping see wrapper invokes function save time total threads used equal independent processing units running application multiplied global constant ncormult also set header file thread take total mitersh random sized steps also global constant set 
header file cover step size range using total ncormult independent processing units threads first generate random value maximum digits largest hessian element generate step size randomly iteration step taken random step size hessian diagonals solving newton method updating equations decision variables unique covariances total multivariate mixture components total assets obtaining new solution corresponding matrices confirmed positive definite ridge repair immediately performed using random multiplier values rrmultmin rrmultmax specified global constants header file decision variables maximize likelihood function returned maximum value along random step size generates maximum best solution across threaded calls used current iteration ecme step stepping finishes processing returns top step step gradient hessian rebuilt another iteration ecmealg inputs input supplied function integer array element position time points data element position unique componenents current multivariate mixture solution element position current thread determined function generates threaded calls thread used within function reporting results output window example debugging element position indicator current thread launched supplied function generated function long double array elements positions input parameters elements positions output generated function returned calling function input position current optimal value attempting improve upon ecme step iteration input position step size input position starting value stepping threaded call elements positions placeholders return values maximum value found stepping iteration returned element step size multiplier generates maximum returned element array vector returns time point indexed vectors returns size numa assets current vector step decision variables covariances array elements vector indices hold current covariance estimates vector index holds returned estimates maximize function call vector position holds updated covariance estimates derived step indvars current gradient vector evaluated current values covariance estimates decision variables ingrad current hessian matrix evaluated current values covariance estimates decision variables inhess current vector multivariate mixture probabilities initial values either minimax minimum ssd optimizations uprbs array mean vectors component multivariate mixture density multivariate density function set means first element array vector means first multivariate component etc mumns array matrices component multivariate mixture density multivariate density function corresponding matrix matrix dimension numa numa diagonals matrix corresponding variances asset within component invcs outputs function updates element incoming parameter indvars array covariance estimates maximize stepping iteration addition elements positions incoming array updated maximum value step size multiplier maximizes respectively include void stephessm int long double const eigen eigen indvars const eigen ingrad const eigen inhess const eigen uprbs const eigen inmns const eigen invcs local variables eigen int rpr int inmns expon curexp long double hmult double log long double inucmps long double int long double int jmp mult gen long double udist eigen eigen thessm eigen inucmps eigen inucmps idm getidm idm size array holding component likelihood values time points int int long double inucmps size local inverse matrices int inucmps initialize covariance maximizers starting values write details debugging indvars dbug cout else dbug cout endl cout launching 
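A compact sketch of the stepping scheme being set up here, stripped of the threading and of the program's bookkeeping: damp the Hessian diagonal by a randomly sized constant, solve the Newton-type update, and keep the best candidate found. The toy concave objective, the exponent range, and the sign convention (the damping term is subtracted so that a large constant yields a small, gradient-like step) are assumptions made for illustration, not the program's actual choices.

#include <Eigen/Dense>
#include <cmath>
#include <iostream>
#include <random>

int main() {
    using Mat = Eigen::MatrixXd;
    using Vec = Eigen::VectorXd;

    const int n = 3;                       // number of covariance terms (illustrative)
    Mat H = -2.0 * Mat::Identity(n, n);    // Hessian of a concave toy objective
    Vec g(n); g << 1.0, -0.5, 0.25;        // gradient at the current point
    Vec x = Vec::Zero(n);                  // current covariance estimates

    // Toy concave quadratic standing in for the mixture log-likelihood.
    auto objective = [&](const Vec& v) { return g.dot(v) + 0.5 * v.dot(H * v); };

    std::mt19937 gen(42);
    std::uniform_real_distribution<double> expo(-6.0, 6.0);

    double bestVal = objective(x);
    Vec bestX = x;
    for (int iter = 0; iter < 200; ++iter) {
        // Randomly sized damping constant spanning many orders of magnitude.
        double lambda = std::pow(10.0, expo(gen));
        Mat Hd = H;
        Hd.diagonal().array() -= lambda;              // large lambda -> small, gradient-like step
        Vec candidate = x + Hd.fullPivLu().solve(-g); // Newton-type update with damped Hessian
        if (objective(candidate) > bestVal) {
            bestVal = objective(candidate);
            bestX = candidate;
        }
    }
    std::cout << "best objective found: " << bestVal
              << " at x = " << bestX.transpose() << "\n";
    return 0;
}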
thread endl set multiple matrix repairs applies steps mult rrmultmin udist gen rrmultmax rrmultmin iterate using hessian solve new covariances int mitersh hessian element maximum length equal hmaxlen digits randomly select value use max stepping expon int udist gen double hmaxlen jmp pow expon hmult minhessadd udist gen jmp udist gen hmult indicate value means exponent used backward stepping hmult idm indvars udist gen ingrad indvars indvars check new covariance estimates yield matrices repair broken update corresponding vector current decision variables int inucmps setcovs indvars ridgerpr mult int inucmps uprbs inmns picst rpr getcovs inucmps indvars check existing maximum larger update current maximum covariance array curmaxll indvars return maximum value along mutliplier generated write details debugging dbug else dbug cout done thread started strt maximum curmaxll endl delete temporary memory allocations delete idm delete delete delete delete int int delete delete copyright chris rook program free software redistribute modify terms gnu general public license published free software foundation either version license option later version program distributed hope useful without warranty without even implied warranty merchantability fitness particular purpose see gnu general public license details http filename function ridgerpr summary function accepts matrix input determines whether performs ridge repair make matrix positive definite determine whether matrix positive definite eigenvalue decomposition performed eigenvalues matrix positive definite necessary sufficient condition matrix valid positive definite perform ridge repair diagonal elements multiplied constant value since diagonals variances implies increase variances automatically reduce size covariances relative variances also reduce magnitude correlations covariances correlations multiplied standard deviations cov rho std std std std increase cov remains constant correlations decrease magnitude constant multiplier large enough drive correlations near zero point covariances extremely small relative variances resulting matrix approaches diagonal matrix positive definite point use small multiplier increase iteratively repaired matrix becomes positive definite scale back variances undisturbed elements smaller relative diagonal elements matrix positive definite matrix multiplied constant also positive definite easily proven definition positive definite initial multiplier randomly generated calling function bounded global constants rrmultmin rrmultmax set header file function iterates multiplying diagonals rrmult checking positive definiteness iteration iteration index matrix badly broken example correlation term quantities may require large number iterations repair therefore speed processing increase multiplier factor iterations iterations rrmult multiplied iterations etc note matrix supplied function modifiable value updated place care taken ensure diagonals disturbed returned inputs single matrix supplied function using inputs array matrices index identify one checking definiteness repairing necessary index identifies matrix ucell array matrices current multivariate mixture solution multiple used add ridge mult outputs function returns integer value returned matrix repaired returned need repair include int ridgerpr const int ucell eigen long double mult local variables eigen int long double sclfctr det eigen eigen egnslvr eigen newvcm int ucell int ucell matrix required repair ucell false int int ucell pdmineval det detminval 
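The ridge repair just described can be sketched as follows; this is an illustration of the idea rather than the ridgerpr routine itself, and the multiplier, iteration cap, and eigenvalue tolerance are assumed values. Inflating the diagonal by a factor and then dividing the whole matrix by that factor leaves the variances unchanged while shrinking the covariances, which eventually yields a positive definite matrix.

#include <Eigen/Dense>
#include <iostream>

// Returns true when all eigenvalues exceed a small tolerance.
bool isPositiveDefinite(const Eigen::MatrixXd& M, double tol = 1e-12) {
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> es(M);
    return es.eigenvalues().minCoeff() > tol;
}

// Shrink covariances relative to the (fixed) variances until the matrix is
// positive definite; the multiplier 1.05 and the iteration cap are assumptions.
Eigen::MatrixXd ridgeRepair(const Eigen::MatrixXd& S, double mult = 1.05) {
    if (isPositiveDefinite(S)) return S;
    Eigen::VectorXd origDiag = S.diagonal();
    double factor = mult;
    Eigen::MatrixXd repaired = S;
    for (int iter = 0; iter < 1000 && !isPositiveDefinite(repaired); ++iter) {
        repaired = S;
        repaired.diagonal() = origDiag * factor; // inflate the variances ...
        repaired /= factor;                      // ... then rescale the whole matrix,
        repaired.diagonal() = origDiag;          // restoring the original variances exactly
        factor *= mult;                          // push harder on the next pass if needed
    }
    return repaired;
}

int main() {
    Eigen::MatrixXd S(2, 2);
    S << 1.0, 1.4,            // |covariance| exceeds sqrt of the variance product: not PD
         1.4, 1.0;
    Eigen::MatrixXd R = ridgeRepair(S);
    std::cout << R << "\npositive definite after repair: "
              << isPositiveDefinite(R) << "\n";
    return 0;
}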
repair necessary matrix positive definite ridge repair performed ucell repair matrix int increase variance factor rescale double int int ucell int int ucell newvcm ucell check updated matrix newvcm false int int ucell pdmineval det detminval increase scale factor multiple iterations int replace original matrix repaired version ucell matrix repaired return otherwise return return retvar copyright chris rook program free software redistribute modify terms gnu general public license published free software foundation either version license option later version program distributed hope useful without warranty without even implied warranty merchantability fitness particular purpose see gnu general public license details http filename function wrtdens summary function writes structure final multivariate density along actual values contained structure output parts structure sum multivariate normal densities weighted corresponding component probability using parameters details parameter values component definitions include actual component probability mean vector matrix unique cell component originates respect full factorial combinations defined fitting univariate mixture densities finally inverse matrix printed rank determinant function used write supplied density either standard output using cout last parameter file using ofstream fout definition last parameter inputs specify type starting point transitioning univariate marginal densities multivariate mixture pdf application offer transition methods minimax minimum sum squared distances ssd solved constrained linear programs lps constraints maintain univariate marginal mixture densities string variable takes one two values minimax minimum sum squared distances ssd display purposes informs user transition method used given multivariate density function written typ number unique components multivariate mixture results either minimax minimum ssd optimization starting points nucmps array unique cell ids link component multivariate density back full factorial components full factorial components represents cell multidimensional grid formed considering combinations assets levels note full factorial would required build multivariate mixture density given marginals assumption assets mutuallly independent random variables rvs ucellids array final multivariate mixture component probabilities muprbs array mean vectors component multivariate mixture density multivariate density function set means first element array vector means first multivariate component etc mumns array matrices component multivariate mixture density multivariate density function corresponding matrix matrix dimension numa numa numa total assets diagonals matrix corresponding variances asset within component muvcs output destination variable reference use either cout display screen valid ofstream output object ovar outputs density supplied function written output desination supplied parts first structure density written using parameters followed detailed definition parameters value returned call include void wrtdens const string typ const int nucmps const int ucells const long double muprbs const eigen mumns const eigen muvcs ostream ovar local variables long long long long nucmps string int minimum sum squared distances ssd typ start writing structure multivariate density without actual details values means variances covariances component probabilities ovar endl string endl structure multivariate density function supplied assets using initial typ objective given endl string endl endl int 
nucmps ovar setfill setw setfill setw setfill setw ovar endl write details component means variances covariances component probabilities ovar endl endl string endl values multivariate normal pdfs mean vector matrix endl string endl int nucmps ovar endl string endl component unique cell ucells endl string endl setfill setw muprbs endl setfill setw endl mumns endl endl muvcs endl setfill setw endl muvcs endl rank int eigen eigen muvcs determinant muvcs endl | 5 |
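Written in standard notation, the object that the routine above prints is a U-component multivariate normal mixture over the A assets, with component probabilities summing to one and each covariance matrix positive definite:

\[
f(x)\;=\;\sum_{u=1}^{U}\pi_u\,
\frac{\exp\!\big(-\tfrac12\,(x-\mu_u)^{\mathsf T}\Sigma_u^{-1}(x-\mu_u)\big)}
{(2\pi)^{A/2}\,\lvert \Sigma_u\rvert^{1/2}},
\qquad \sum_{u=1}^{U}\pi_u=1,\quad \Sigma_u \succ 0 .
\]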
almost hyperbolic groups almost finitely presented subgroups feb robert kropholler february abstract construct new examples cat groups containing non finitely presented subgroups type cat groups contain copies also give construction groups type free abelian subgroups rank greater introduction subgroups cat groups shown exhibit interesting finiteness properties examples groups satisfying well first examples groups type finitely presented answering question brown definition group satisfies property classifying space kpg finite skeleton definition group type partial resolution trivial module finitely generated projective module easily seen equivalent finite generation equivalent finite presentability groups kernels maps right angled artin group finiteness properties kernel depend solely defining flag complex right angled artin group construction groups type finitely presented contained right angled artin groups free abelian subgroups rank since constructions groups subgroups type see many cases groups subgroups groups type cases maximal rank free abelian subgroup least construct first examples groups type containing free abelian subgroups rank containing subgroup type theorem exists non positively curved space pxq contains subgroups isomorphic contains subgroup type groups constructed type contain free abelian subgroup rank brady asked question whether exist groups type contain notes known examples contain able find examples without reduce bound fraction theorem every positive integer exists group type contains abelian subgroups rank greater future would like extend theorems giving examples hyperbolic groups subgroups author would like thank federico vigolo kindly helping draw several figures contained within author would also like thank martin bridson gareth wilkes reading earlier drafts paper providing helpful constructive comments preliminaries cube complexes cube complex constructed taking collection disjoint cubes gluing together isometries faces standard way cube complexes endowed metrics well known criterion characterises cube complexes locally cat precise definition cube complex given def follows definition cube complex quotient disjoint union cubes equivalence relation restrictions natural projection required satisfy every map injective isometry face onto face pxq pxq definition metric space curved metric locally cat gromov insight allows easily check whether complex nonpositively curved lemma gromov complex curved link vertex cat space cube complex link vertex spherical complex built speherical simplices see gromov realised spherical complex cat flag definition complex flag complex simplicial every set pairwise adjacent vertices spans simplex flag complexes completely determined thus arrive following combinatorial condition cube complexes lemma gromov cube complex curved link every vertex flag complex wish limit rank free abelian subgroups fact limit largest dimension isometrically embedded flat following theorem shows cat world maximal dimension flat maximal rank free abelian subgroup theorem let compact non positively curved cube complex universal cover pxq subgroup isometrically embedded copy moreover quotient iprn finally would like know cube complex hyperbolic bridson shows obstruction containing isometrically embedded flat theorem bridson theorem let compact curved cube complex universal cover hyperbolic exists isometric embedding right angled artin groups right angled artin groups centre lot recent study particularly interesting subgroup structure particular subgroups 
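Since the displayed resolution did not survive extraction, the FP_n condition referred to above is, in its standard form: a group G is of type FP_n when the trivial ZG-module Z admits an exact partial resolution of length n by finitely generated projective ZG-modules,

\[
P_n \longrightarrow P_{n-1} \longrightarrow \cdots \longrightarrow P_1 \longrightarrow P_0
\longrightarrow \mathbb{Z} \longrightarrow 0,
\qquad P_i\ \text{finitely generated projective } \mathbb{Z}G\text{-modules}.
\]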
interesting connections finiteness properties groups shown completeness recall basic theory right angled artin groups definition given flag complex define associated right angled artin group raag group groups curved cube complexes classifying spaces unions tori definition given flag complex salvetti complex defined follows vertex let copy circle cubulated vertex simplex associated torus natural inclusion define equivalence relation generated inclusions map sending circle respective circle extending linearly cubes map inclusion cubical complexes standard fact fundamental group nonpositively curved cube complex proofs following found lemma lemma curved new classifying space construct new classifying space also curved cube complex amenable taking branched covers build classifying space case flag complex satisfies following condition definition simplicial complex structure contained join several discrete sets case say bipartite case tripartite finite simplicial complex given structure subcomplex simplex define cube complex complex follows let vertex vki let copy cubulated two vertices labelled two edges evki definition given ppiq say form vkill remark every simplex simplex unique possibly empty vkill associate following space irj product torus cube let jpppiq given simplices natural inclusion equivalence relation generated via inclusions need prove two key lemmata fundamental group curved lemma proof apply kampen theorem repeatedly lemma curved proof vertices given two vertices cellular isomorphism sends thus need check link one vertex check link let let vertex edge two distinct vertices connected one following conditions holds edge edges come following subcomplexes tells want prove fact flag complex given set vkill pairwise adjacent vertices want show vkill also split two sets vkjj since vertices pairwise adjacent simplex spanning see subcomplex contains fills required simplex noted natural projections injective closed cube interior remark complex requires choice structure given two structures complexes homotopy equivalent isomorphic cube complexes shown figure together examples construction remark let cage graph two vertices edge element complex constructed embeds fact come useful later fact case embedded copy finiteness properties subgroups raags require one theorem finiteness properties subgroups raags homomorphism defined putting integer label vertex sending corresponding generator label definition let homomorphism denote full subcomplex spanned vertices label let full subcomplex spanned vertices theorem theorem let homomorphism following equivalent kernel type respectively every possibly empty dead simplex complex homologically respectively simply connected figure examples labels vertices exhibit structure last example shaded regions identified branched covers cube complexes take branched covers cube complexes get rid high dimensional flats techniques use developed idea take appropriate subset intersects high dimensional flats definition let curved cube complex say branching locus satisfies following conditions locally convex cubical subcomplex lkpc connected cubes first condition required prove curvature preserved taking branched covers second reformulation classical requirement branching locus codimension theory branched covers manifolds ensuring trivial branched covering definition branched cover branching locus result following process take finite covering lift piecewise euclidean metric locally consider induced path metric take metric completion require key results allow conclude 
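For completeness, the presentation behind the right-angled Artin group definition given above (its relations were lost in extraction) is the standard one:

\[
A_L \;=\; \big\langle\, v \in L^{(0)} \;\big|\; [v,w]=1 \text{ whenever } \{v,w\} \text{ is an edge of } L \,\big\rangle .
\]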
process natural resulting complex still curved cube complex lemma brady lemma natural surjection piecewise euclidean cube complex lemma brady lemma finite graph nonpositively curved hyperbolisation dimension section warm key theorems related dimension lower dimensional case carries lot ideas used later throughout section bipartite graph say let let graph two vertices labelled directed noted edges labelled remark complex constructed section subcomplex theorem let bipartite graph let associated raag let classifying space constructed section branched cover hyperbolic fundamental group figure depiction deformation retraction fact many branched covers delivering result course proof pick prime varies different hyperbolic branched covers obtained proof branching locus one vertices given two vertices homeomorphism sends therefore matter vertex pick branching locus choose vertex deformation retracts onto graph seen follows start torus cubulated figure torus remove center vertex complement deformation retracts onto graph depicted figure identify edges via labels result argument shows free group erators denote generators deformation described get map loop length corresponds torus figure deformation retraction gets sent corresponding commutator let prime symmetric group letters let element conjugates generator define cover using map taking cover corresponding stabiliser note commutator means loops length link connected preimage cover take completion resulting complex natural map link vertex maps contain cycles length prove resulting complex hyperbolic universal cover curved cube complex prove know hyperbolic show isometric embeddings theorem embedding contain least one square however square contains one vertex lift link vertex loops length however flat plane isometrically embedded would loop length link every vertex plane contradiction completes proof use theorem along morse theoretic ideas section find examples hyperbolic groups finitely generated subgroups finitely presentable proposition let complete bipartite graph sets fix satisfying hypotheses let branched cover constructed finitely generated subgroup finitely presentable proof case edges put orientation edge edges oriented towards vertex edges oriented away vertex cubulating one vertex one oriented edge define maps edge orientation define precomposing branched covering map lifting universal covers obtain morse function ascending descending links morse functions preimages ascending descending links morse function ascending descending links joins ascending descending links copies ascending descending links copies two possibilities look vertex map ascending descending links remain unchanged still copies vertex question maps study ascending link case identical ascending link loop length taking branched cover cause loop length lengthen preimage still copy follows kernel finitely generated finitely presentable sizeable graphs used give examples hyperbolic groups subgroups type outline procedure producing examples sizeable graphs proposition let sets partitions let complete bipartite graph let associated right angled artin group classifying space branched covering constructed theorem let vertex mapping lkpv sizeable proof link vertex cover graph complete bipartite graph sets define defining similarly graph following properties bipartite cover bipartite graph cycles length since branching process designed remove check last property let set vertices mapping complement bipartite structure define similarly must prove connected covered finitely many loops 
length taking branched cover connected preimage intersection still non empty resulting union connected almost hyperbolisation dimension notation given tripartite complex let lij full subcomplex spanned vertices main theorem section following theorem let tripartite flag complex associated raag classifying space constructed section exists branched cover pxq contains subgroups isomorphic proof branching locus graph vertices edges directed three maps restrictions projections maps section give three maps qij primes picked process taking branched cover theorem let combine permutation representations projection maps get map defines cover taking subgroup corresponding stabiliser complete cover get branched cover let maps retractions see consider natural map figure loop corresponding commutator follows surjective consider link vertex branched cover restrict attention vertex mapping cases similar consider image link three maps image map link sent surjectively onto link section know deformation retracts onto graph loops length sent commutators form map commutator sent must consider image maps maps send link disjoint union contractible subsets maps send image fundamental group link identity see cover corresponding stabiliser preimage one loops length depicted figure components component loop length prove isometrically embedded planes dimension combined theorem complete proof theorem since resulting cube complex cubes dimension see dimension isometrically embedded flat plane copy isometrically embedded would contain least one cube would fact cubical embedding flat vertex contained flat link would contain subcomplex isomorphic octahedron let lift branching locus see intersects cube figure intersect vertex let vertex lkpx tripartite structure octahedron complex tripartite structure form one copies contained vertices corresponding vertices form loop length bipartite graph defined edges however constructed branched figure intersection pattern cube cover graph cycles length morse theory morse theory defined general setting affine cell complexes instance shall need curved cube complexes remainder section let cat cube complex let group acts freely cellularly properly cocompactly let homomorphism let act translations recall characteristic map cube definition say function morse function satisfies following conditions every cube dimension map extends affine map constant image discrete pxq consider level sets function denote follows definition closed subset denote preimage also use denote preimage kernel acts cube complex manner preserving level set moreover acts properly cocompactly level sets use topological properties level sets gain information finiteness properties group need examine vary pass larger level sets theorem lemma closed intervals contains vertices inclusion homotopy equivalence contains vertices topological properties different difference encoded ascending descending links definition ascending link vertex tlkpw pwq minimum lkpv descending link vertex tlkpw pwq maximum lkpv theorem lemma let morse function suppose connected closed min min resp max max contains one point homotopy equivalent space obtained coning descending resp ascending links prq deduce lot topology level sets know change pass larger intervals following corollary corollary let ascending descending link homologically inclusion induces isomorphism surjective ascending descending links connected inclusion induces surjection ascending descending links simply connected inclusion induces isomorphism knowing direct limit system 
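Restating the Morse function definition used in this section, since its conditions were garbled in extraction (this is the standard Bestvina–Brady form): a map $f\colon X \to \mathbb{R}$ on an affine cell complex $X$ is a Morse function if, for every cell $C$ of $X$ with characteristic map $\chi_C$, the composition $f\circ\chi_C$ extends to an affine map $\mathbb{R}^{\dim C}\to\mathbb{R}$ and is constant only when $\dim C = 0$, and the image $f\big(X^{(0)}\big)$ of the vertex set is discrete in $\mathbb{R}$.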
contractible space allows compute finiteness properties kernel theorem theorem let morse function let ascending descending links simply connected finitely presented type would also like conditions allow deduce satisfy certain finiteness properties well known result direction proposition brown let group acting freely properly cellularly cocompactly cell complex assume finitely generated type result used prove certain group type however theorems links satisfy assumptions theorem require following theorem kropholler theorem let morse function let suppose vertices reduced homology pvq pvq dimensions assume vertex possibly type type finally prove key theorem regarding ascending descending links theorem let cat cube complex group acting freely properly cocompactly surjective homomorphism morse function morse function prove staement ascending links proof descending links proof really relies following key lemma lemma let cat cube complex morse function full subcomplex lkpv spanned vertices first sight appear improvement however allows simple calculation ascending descending links link vertex known proof let pairwise adjacent vertices proving simplex prove claim let edge corresponding must prove minimum restricted note since extends affine map foliated level sets ptq level sets corresponds linear subspace dimension containing subcubes dimension intersected cube exactly two subspaces intersection single vertex one corresponds minimum maximum one must vertex mapping minimum proof theorem define morse function satisfies conditions morse function respect map link vertex easy see ascending link full subcomplex spanned equal groups type question brady asks whether exist groups type contain notes known examples contain able find examples without drastically reduce rank free abelian subgroup shown following theorem every positive integer exists group type contains abelian subgroups rank greater proof general construction require pgi hyperbolic group cat cube complex free cocompact action surjective homomorphism morse function also require ascending descending links least one existence morse function would show theorem subgroup type namely let free group rank tree cayley graph respect generators exponent sum homomorphism respect map linear edges whose restriction vertices let group classifying space homomorphism morse function proposition let group classifying space homomorphism morse function could also use examples groups important point note ascending descending links pgi isomorphic let integer let let pgl theorem ascending descending links mod define let residue mod consider morse function cube complex ascending descending links copies cases type theorem group contains free abelian subgroups rank greater since hyperbolic group contains copy subgroups type finitely presented apply branched covering technique carefully chosen flag complexes prove following theorem exists non positively curved space pxq contains subgroups isomorphic contains subgroup type using following steps start connected tripartite flag complex plq perfect group link every vertex connected point take auxiliary complex rplq build complex section define function lifts morse function universal covers examine ascending descending links morse function take branched covering section get complex associated morse function examine ascending descending links morse function show kernel type prove kernel finitely presented complex rplq construction complex required key point complex satisfies proposition required make sure fundamental group links 
changed branching process firstly prove none assumptions first step restrictive realise finitely presented group fundamental group finite connected simplicial complex also well known barycentric subdivision simplicial complex flag put obvious tripartite structure barycentric subdivision labelling vertices dimension corresponding cell realise group fundamental group connected tripartite flag complex given connected tripartite flag complex homotopy equivalent complex form link every vertex connected point see first note vertex link single point contract edge without changing homotopy type complex next note vertex disconnected link perform following procedure pick two vertices two different components link add extra vertex connect also adding two triangles result reduced number components link without adding extra components link link connected also changed homotopy type since added contractible space glued along contractible subspace repeating procedure make sure link every vertex connected definition simplicial complex octahedralisation splq defined follows vertex let copy every simplex take natural map splq equivalence relations generated inclusions complex splq also seen link vertex salvetti complex raag defined structure natural structure splq let contained splq contained spvn remark map defined tvu extends retraction splq deformation retraction particular plq psplqq proved flag complex splq flag complex lemma assume connected simplicial complex lkpv connected equal point vertices plq psplqq proof let full subcomplex splq spanned set let defined similarly let interior star interior star clear splq contained also see union open simplices contained neither using fact homotopy equivalent similarly homotopy equivalent considering sequence see psplqq isomorphic must prove connected let points always connect open edges contained label edges let end point end point define vertices similarly let pvx sequence vertices corresponding geodesic vertices corresponding vertices adjacent respectively split cases whether vertices geodesic define path four cases let let let let describe get sequence edges corresponding sequence edges give path thus completing proof lemma figure key idea lemma path link vertex viewed sequence edges adjacent edges relabel path let path link let path link done since link every vertex connected sequence edges defines sequence edges require key idea encapsulated figure curved arcs correspond paths define complex rplq let tripartite flag complex plq perfect link every vertex connected point label sets vertices tripartite structure construct splq tripartite flag complex plq perfect add extra vertices type respectively connect vertices type define rplq flag completion resulting complex take simplicial complex let rplq construct cubical complex section morse function noted remark view subcomplex graph two vertices edges labelled vertices well one extra edge labelled edge runs define morse function product putting orientation edge follows edge corresponding vertex splq orient towards vertices orient towards put orientation map graph orientation extend linearly across cubes restricting map get map lifting universal covers get morse function let vertices type splq set let vertex splq splq table ascending descending links morse function ascending descending links morse function given table notation given simplicial complex subset vertices denotes full subcomplex spanned vertices proposition given complex form yvj ordering set vjs stpvm stpvl connected pstpvm stpvl qqq proof case vjs 
vertices connected simply connected either ordering case vjs let subgraph lpvi connected since connected link every vertex connected point thus assume stpwl stpwm noting stpv splqq cpsplkpv lqqq see stpv splqq stpwt splqq spstpv lqxstpw lqq thus ordering see stpvl splqq stpvm splqqq contains least two points noting stpvl splqqxp stpvm splqqq stpvl splqq stpvm splqq see connected simply connected remark proof gluing contractible complexes along connected complexes shows complexes statement proposition simply connected remark stage proof stpvl splqq stpvm splqqq could covered cycles length since join discrete set copy almost hyperbolisation morse function use almost hyperbolisation technique section get branched cover recall natural length preserving map define function lifting universal covers get morse function follows fundamental group contain copies follows worth noting almost hyperbolisation procedure ensured loops length link vertex connected preimage ascending descending links recall distinguish types vertices label follows vertices type map vertices type map vertices type map vertices type map vertex ascending descending link preimage ascending descending link corresponding vertex type vertices disjoint branching locus small neighbourhood lifts ascending descending links isomorphic corresponding vertex claim vertices type ascending descending links simply connected prove case vertex type ascending link cases similar vertex type lift may assume maps let consider preimage lkpx envisage start removing vertices taking covering remaining space coming derivative map add back vertices remark cover stpvq runs vertices noted remark construct cover way stpvm stpvl connected covered loops length intersection specifically loops length stpvm stpvl procedure passing branched covering associated vertex derivative map blkpvq lkpv lkpbpvq cycles connected preimage map blkpvq therefore preimage stpvm stpvl connected upon taking completion replace vertices corresponds coning lifts links thus see pxq made sequence contractible spaces glued along connected subspaces simply connected proof kernel finitely presented prove finitely presented need following lemma lemma pxi compact intervals case proof purposes proof let rplq theorem know kernel finitely presented since kernel acts cocompactly level set see simply connected let loop larger interval trivial let assume assume latter sequence vertices pvi integer trivial pvi trivial pvi since adding pxm changes fundamental group becomes trivxj also contained pxm ial find loop also know contained since pvi pvj whenever restriction affine map cube one maximum one minimum constant subcube since adding pxm changes see pxm vertex mapping bounds disc intersect follows disc lifts branched covering coming following commutative diagram boundary lifted disc call loop bounded disc could map via would imply bounds disc thus pxj since pxi pxj surjective deduce theorem pxi theorem finitely presentable proof assume finitely presented acts cocompactly add finitely many quotient gain fundamental group taking universal cover space obtained way arrive finitely many attached simply connected words finitely many loops generate pxi direct limit space cat particular contractible pass larger interval loops generate pxi trivial words map pxi pxj trivial also surjective theorem assumed ascending descending links connected thus pxj trivial however know case lemma references agol virtual haken conjecture doc appendix agol daniel groves jason manning bestvina brady morse theory finiteness 
properties groups invent brady branched coverings cubical complexes subgroups hyperbolic groups lond math bridson existence flat planes spaces nonpositive curvature proceedings american mathematical society january bridson haefliger metric spaces curvature volume grundlehren der mathematischen wissenschaften fundamental principles mathematical sciences berlin brown cohomology groups volume graduate texts mathematics springer new york new york gromov hyperbolic groups chern kaplansky moore singer gersten editors essays group theory volume pages springer new york haglund wise special cube complexes geom funct kropholler hyperbolic groups almost finitely presented subgroups preparation ian leary uncountably many groups type math december arxiv yash lodha hyperbolic group finitely presented subgroup type page london mathematical society lecture note series cambridge university press wise structure groups quasiconvex hierarchy | 4 |
ijacsa international journal advanced computer science applications vol automated classification hand movement eeg signals using advanced feature extraction machine learning mohammad alomari aya samaha khaled alkamha applied science university amman jordan paper propose automated computer platform purpose classifying electroencephalography eeg signals associated left right hand movements using hybrid system uses advanced feature extraction techniques machine learning algorithms known eeg represents brain activity electrical voltage fluctuations along scalp interface bci device enables use brain neural activity communicate others control machines artificial limbs robots without direct physical movements research work aspired find best feature extraction method enables differentiation left right executed fist movements various classification algorithms eeg dataset used research created contributed physionet developers instrumentation system data preprocessed using eeglab matlab toolbox artifacts removal done using aar data epoched basis synchronization cortical potentials mrcp features rhythms isolated analysis delta rhythms isolated mrcp analysis independent component analysis ica spatial filter applied related channels noise reduction isolation artifactually neutrally generated eeg sources final feature vector included erd ers mrcp features addition mean power energy activations resulting independent components ics epoched feature datasets datasets inputted two machinelearning algorithms neural networks nns support vector machines svms intensive experiments carried optimum classification performances obtained using svm respectively research shows method feature extraction holds promise classification various pairs motor movements used bci context mentally control computer machine bci ica mrcp machine learning svm introduction importance understanding brain waves increasing ongoing growth interface bci field computerized systems becoming one main tools making people lives easier bci brainmachine interface bmi become attractive field research applications bci device enables use brain neural activity communicate others control machines artificial limbs robots without direct physical movements term electroencephalography eeg process measuring brain neural activity electrical voltage fluctuations along scalp results current flows brain neurons typical eeg test electrodes fixed scalp monitor record brain electrical activity bci measures eeg signals associated user activity applies different signal processing algorithms purpose translating recorded signals control commands different applications important application bci helping disabled individuals offering new way communication external environment many bci applications described including controlling devices like video games personal computers using thoughts translation bci highly interdisciplinary research topic combines medicine neurology psychology rehabilitation engineering humancomputer interaction hci signal processing machine learning strength bci applications lies way translate neural patterns extracted eeg machine commands improvement interpretation eeg signals become goal many researchers hence research work explores possibility eeg classification left right hand movements offline manner enormously smooth path leading online classification reading executed movements leading technically call reading minds work introduce automated computer system uses advanced feature extraction techniques identify brain activity patterns especially 
left right hand movements system uses machine learning algorithms extract knowledge embedded recorded patterns provides required decision rules translating thoughts commands seen fig article organized follows brief review related research work provided section section iii dataset used study described automated feature extraction process described section generation datasets practical implementation system evaluation discussed section conclusions suggested future work provided section ijacsa international journal advanced computer science applications vol literature review idea bci originally proposed jaques vidal proved signals recorded brain activity could used effectively represent user intent authors recorded eeg signals three subjects imagining either right left hand movement based visual cue stimulus able classify eeg signals right left hand movements using neural network classifier accuracy concluded accuracy improve increasing number sessions vector consisting patterns beta rhythms coefficients autoregressive model artificial neural networks anns applied two kinds testing datasets average recognition rate achieved strength bci applications depends lies way translate neural patterns extracted eeg machine commands improvement interpretation eeg signals become goal many researchers hence research work explores possibility eeg classification left right hand movements offline manner enormously smooth path leading online classification reading executed movements leading technically call reading minds iii physionet eeg data description dataset eeg dataset used research created contributed physionet developers instrumentation system dataset publically available http fig feature extraction translation machine commands author used features produced motor imagery control robot arm features band power specific frequency bands alpha beta mapped right left limb movements addition used similar features event related desynchronization synchronization comparing signal energy specific frequency bands respect mentally relaxed state shown combination cortical potentials mrcp improves eeg classification offers independent complimentary information hybrid bci control strategy presented authors expanded control functions potential based bci virtual devices related sensorimotor rhythms navigate virtual environment imagined hand movements translated movement commands virtual apartment extremely high testing accuracy results reached bci system presented translation imagined hands foot movements commands operates wheelchair work uses many spatial patterns erd rhythms along cortex resulting classification accuracy online offline tests respectively authors proposed bci system controls hand prosthesis paralyzed people movement thoughts left right hands reported accuracy single trial hand movement classification reported authors analyzed executed imagined hand movement eeg signals created feature dataset consists eeg records different durations one two minutes per record obtained healthy subjects subjects asked perform different tasks eeg signals recorded electrodes along surface scalp subject performed experimental runs baseline runs eyes open baseline runs eyes closed three runs four following tasks left right side screen shows target subject keeps opening closing corresponding fist target disappears relaxes left right side screen shows target subject imagines opening closing corresponding fist target disappears relaxes top bottom screen target appears either subject keeps opening closing either fists case feet 
case target disappears relaxes top bottom screen target appears either subject imagines opening closing either fists case feet case target disappears relaxes eeg signals recorded according international system excluding electrodes seen fig ijacsa international journal advanced computer science applications vol subset used current work dataset selected three runs first task described opening closing fist based target appears left right side screen runs include eeg data executed hand movements created eeg data subset corresponding first six subjects including three runs executed movement specifically per subject total records automated analysis eeg signals feature extraction channel selection according many eeg channels appeared represent redundant information shown neural activity correlated executed left right hand movements almost exclusively contained within channels eeg channels fig means need analyze channels data hand eight electrode locations commonly used mrcp analysis covering regions frontal central sites fcz channels used independent component analysis ica discussed later current section fig fig schematic diagram proposed system filtering eeg signals known noisy nonstationary filtering data important step get rid unnecessary information raw signals eeglab interactive matlab toolbox used filter eeg signals band pass filter applied remove direct current shifts minimize presence filtering artifacts epoch boundaries notch filter also applied remove line noise automatic artifact removal aar eeg data significance usually mixed huge amounts useless data produced physiological artifacts masks eeg signals artifacts include eye muscle movements constitute challenge field bci research aar automatically removes artifacts eeg data based blind source separation various algorithms aar toolbox implemented eeglab matlab used process eeg data subset two stages electrooculography eog removal using blind source separation bss algorithm electromyography emg removal using algorithm fig electrodes international system eeg epoch extraction splitting aar process continuous eeg data epoched extracting data epochs time locked specific event types ijacsa international journal advanced computer science applications vol sensory inputs motor outputs processed beta rhythms said synchronized rhythms electrophysiological features associated brain normal motor output channels preparing movement executing movement desynchronization beta rhythms occurs referred erd extracted seconds onset movement depicted fig later rhythms synchronize within seconds movement referred ers hand delta rhythms extracted motor cortex within stage referred mrcp slow less mrcp associated negativity occurs seconds onset movement experiments extracted events type left hand type right hand different epoch limits types analysis erd analysis epoch limits seconds ers analysis epoch limits seconds mrcp analysis epoch limits seconds run mrcp left right hand movements subject practical implementation results feature vectors construction numerical representation eeg datasets analyzed described previous section activation vectors calculated resulted epochs datasets multiplication ica weights ica sphere dataset subtracting mean raw data multiplication results mean power energy activations calculated construct feature vectors subject single run feature vectors extracted power features mean features energy features type feature side target resulting feature matrix constructed features represented numerical format suitable use machine learning 
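A minimal sketch of the epoch extraction step just described, written here in C++ with Eigen rather than the EEGLAB/MATLAB tooling actually used in the paper; the sampling rate, channel count, event sample indices, and window limits below are placeholder assumptions and are not the paper's own epoch limits.

#include <Eigen/Dense>
#include <iostream>
#include <vector>

// Cut fixed-length windows out of a continuous (channels x samples) recording,
// time-locked to event onsets given as sample indices.
std::vector<Eigen::MatrixXd> extractEpochs(const Eigen::MatrixXd& eeg,
                                           const std::vector<int>& eventSamples,
                                           double tMin, double tMax, double fs) {
    const int offset = static_cast<int>(tMin * fs);
    const int length = static_cast<int>((tMax - tMin) * fs);
    std::vector<Eigen::MatrixXd> epochs;
    for (int onset : eventSamples) {
        const int start = onset + offset;
        if (start < 0 || start + length > eeg.cols()) continue; // skip clipped epochs
        epochs.push_back(eeg.middleCols(start, length));
    }
    return epochs;
}

int main() {
    const double fs = 160.0;                                    // assumed sampling rate
    Eigen::MatrixXd eeg = Eigen::MatrixXd::Random(9, 160 * 60); // 9 channels, 1 minute (made up)
    std::vector<int> onsets = {800, 2400, 4000};                // hypothetical event samples
    auto pre  = extractEpochs(eeg, onsets, -2.0, 0.0, fs);      // window before movement onset
    auto post = extractEpochs(eeg, onsets,  0.5, 2.5, fs);      // window after movement onset
    std::cout << pre.size() << " pre-onset epochs of " << pre.front().cols()
              << " samples, " << post.size() << " post-onset epochs\n";
    return 0;
}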
algorithms every column features matrices normalized datasets could inputted learning algorithms described next subsection machine learning algorithms work neural networks nns support vector machines svms algorithms optimized purpose classifying eeg signals right left hand movements detailed description learning algorithms found matlab neural networks toolbox used experiments number input features features determined number input nodes number different target functions output left right determined number output nodes training handled aid learning algorithm fig epoch extraction mrcp independent component analysis ica aar process ica used parse underlying electrocortical sources eeg signals affected artifacts data decomposition using ica changes basis linearly data collected single scalp channels spatially transformed virtual channel basis row eeg data original scalp channel data represents time course accumulated differences source projections single data channel one reference channels eeglab used run ica described epoched datasets left right erd ers mrcp channels fcz rhythm isolation short iir band pass filter applied epoched datasets experiment purpose isolating rhythms another short iir lowpass filter applied mrcp epoched datasets isolating delta rhythms result files svm experiments carried using mysvm software svm performed different kernels reported provide similar results similar applications anovakernel svm used work optimisation results experiments samples randomly selected used training remaining testing repeated times time datasets randomly mixed experiment number hidden nodes varied svm degree gamma parameters varied mean accuracy calculated ten pairs features used inputs svm symbolized follows power mean energy ijacsa international journal advanced computer science applications vol sample type results experiment summarized table table features accuracy results experiment svm hidden layers accuracy degree gamma clear testing results svm outperforms experiments svm topology degree gamma provides accuracy tested power energy type inputs experiment hidden layers provide accuracy features used results clearly show use advanced feature extraction techniques provides good clear properties translated using machine learning machine commands next best svm performance achieved using energy type features general increase classification performance use discriminative features total energy compared power mean inputs conclusions future research paper focuses classification eeg signals right left fist movements based specific set features good results obtained using nns svms showing offline discrimination right left movement executed hand movements comparable leading bci research methodology best somewhat simplified efficient one satisfies needs researchers field neuroscience near future aim develop implement system online applications health systems computer games addition datasets analyzed better knowledgeable extraction accurate decision rules acknowledgment authors would like acknowledge financial support received applied science university helped accomplishing work article references donoghue connecting cortex machines recent advances brain interfaces nature neuroscience supplement vol levine huggins bement kushwaha schuh passaro rohde ross identification electrocorticogram patterns basis direct brain interface journal clinical neurophysiology vol vallabhaneni wang interface neural engineering springer wolpaw birbaumer mcfarland pfurtscheller vaughan interfaces communication control 
clinical neurophysiology vol niedermeyer silva electroencephalography basic principles clinical applications related fields lippincott williams wilkins sleight pillai mohan classification executed imagined motor movement eeg signals ann arbor university michigan graimann pfurtscheller allison interfaces gentle introduction interfaces springer berlin heidelberg selim wahed kadah machine learning methodologies interface systems biomedical engineering conference cibec cairo grabianowski interfaces work http smith salvendy krauledat dornhege curio blankertz machine learning applications braincomputer interfacing human interface management information methods techniques tools information design vol springer berlin heidelberg vidal toward direct communication annual review biophysics bioengineering vol pfurtscheller neuper flotzinger pregenzer eegbased discrimination imagination right left hand movement electroencephalography clinical neurophysiology vol sepulveda control robot navigation advances robot navigation barrera intech mohamed towards improved eeg interpretation sensorimotor bci control prosthetic orthotic hand faculty engineering master science engineering johannesburg universityof witwatersrand luo yang zhuang zheng chen hybrid interface control strategy virtual environment journal zhejiang university science vol wang hong gao gao implementation braincomputer interface based three states motor imagery annual international conference ieee engineering medicine biology society guger harkam hertnaes pfurtscheller prosthetic control computer interface bci aaate european conference advancement assistive technology germany kim hwang cho han single trial discrimination right left hand movement eeg signal proceedings annual international conference ieee engineering medicine biology society cancun mexico goldberger amaral glass hausdorff ivanov mark mietus moody peng stanley physiobank physiotoolkit physionet components new research resource complex physiologic signals circulation vol schalk mcfarland hinterberger birbaumer wolpaw interface bci system ieee transactions biomedical engineering vol deecke weinberg brickett magnetic fields human brain accompanying voluntary movements bereitschaftsmagnetfeld experimental brain research vol neuper pfurtscheller evidence distinct beta resonance frequencies human eeg related specific sensorimotor cortical areas clinical neurophysiology vol ijacsa international journal advanced computer science applications vol delorme makeig eeglab open source toolbox analysis eeg dynamics journal neuroscience methods vol bartels automatic artifact removal eeg mixed approach based double blind source separation support vector machine annual international conference ieee engineering medicine biology society embc automatic artifact removal aar toolbox matlab transform methods electroencephalography eeg http joyce gorodnitsky kutas automatic removal eye movement blink artifacts eeg data using blind component separation psychophysiology vol bashashati fatourechi ward birch survey signal processing algorithms interfaces based electrical brain signals journal neural engineering vol vuckovic sepulveda delta band contribution cue based single trial classification real imaginary wrist movement medical biological engineering computing vol dremstrup farina discrimination type speed wrist movements eeg recordings clinical neurophysiology vol gwin ferris eeg independent component analysis mixture models distinguish knee contractions ankle contractions annual international 
conference ieee engineering medicine biology society embc boston usa makeig bell jung sejnowski independent component analysis electroencephalographic data advances neural information processing systems vol delorme makeig single subject data processing tutorial decomposing data using ica eeglab tutorial http qahwaji colak ipson machine learningbased investigation associations cmes filaments solar physics vol qahwaji colak ipson automated machine learning based prediction cmes based flare associations sol phys vol qahwaji colak automatic solar flare prediction using machine learning sunspot associations solar vol qahwaji colak ipson using real gentle modest adaboost learning algorithms investigate computerised associations coronal mass ejections filaments mosharaka international conference communications computers applications mosharaka researches studies amman jordan fahlmann lebiere learning architecture advances neural information processing systems denver colorado university dortmund lehrstuhl informatik | 9 |
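The classification stage described above — mean, power and energy features computed per epoch from the ICA activations, then fed to a neural network or an SVM, with the random train/test split repeated ten times — can be sketched roughly as follows. This is a minimal illustration only, not the authors' code: the array layout, the split ratio and the use of scikit-learn's SVC with an RBF kernel (as a stand-in for the ANOVA-kernel mySVM configuration reported in the paper) are assumptions introduced here.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def epoch_features(activations):
    # activations: array of shape (n_epochs, n_channels, n_samples),
    # e.g. the ICA activations of the ERD/ERS/MRCP epochs described above.
    power = (activations ** 2).mean(axis=2)    # mean power per channel
    mean = activations.mean(axis=2)            # mean amplitude per channel
    energy = (activations ** 2).sum(axis=2)    # total energy per channel
    return np.hstack([power, mean, energy])    # one feature vector per epoch

def mean_accuracy(activations, labels, repeats=10, test_size=0.3, seed=0):
    # Repeat a random split (the ratio here is an assumption) and average the
    # test accuracy, mirroring the repeated-mixing evaluation in the text.
    X, y = epoch_features(activations), np.asarray(labels)
    scores = []
    for r in range(repeats):
        Xtr, Xte, ytr, yte = train_test_split(
            X, y, test_size=test_size, random_state=seed + r, stratify=y)
        clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # RBF used in place of the ANOVA kernel
        clf.fit(Xtr, ytr)
        scores.append(clf.score(Xte, yte))
    return float(np.mean(scores))

The same loop applies unchanged to the neural-network variant by substituting a different classifier for SVC; only the feature construction is specific to the pipeline described above.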
mar concentration inequality excess risk regression random design heteroscedastic noise adrien saumard bretagne loire march abstract prove new general concentration inequality excess risk regression random design heteroscedastic noise specific structure required model except existence suitable function controls local suprema empirical process far case linear contrast estimation tackled literature level generality model solve case quadratic contrast separating behavior linearized empirical process empirical process driven squares functions models keywords regression excess risk empirical process concentration inequality margin relation introduction excess risk fundamental quantity theory statistical learning consequently general theory rates convergence developed nineties early however recently identified theoretical descriptions learning procedures need finer controls brought classical upper bounds excess risk case derivation concentration inequalities excess risk new exiting axis research particular importance obtaining satisfying oracle inequalities various contexts especially linked high dimension field model selection indeed remarked concentration inequalities allow discuss optimality model selection procedures precisely concentration inequalities excess risk excess empirical risk central tools access optimal constants oracle inequalities describing model selection accuracy results put evidence optimality slope heuristics generally selection procedures based estimation minimal penalty statistical frameworks linked regularized quadratic estimation similar assumptions also possible discuss optimality resampling type procedures high dimension convex methods allow design compute efficient estimators reason chatterjee recently focused estimation mean high dimensional gaussian vector convex constraints getting concentration inequality excess risk projected estimator chatterjee proved universal admissibility estimator concentration inequality sharpened extended excess risk estimators minimizing penalized convex criteria also well known see instance weakness theory regularized estimators sparsity context classical oracle inequalities describe performance estimators amount regularization actually depends confidence level considered oracle inequality correspond practice kind estimators whose regularization parameter usually fixed using procedure recently bellec tsybakov building established satisfying oracle inequalities describing performance regularized estimators lasso group lasso slope confidence level independent regularization parameter particular oracle inequalities integrated central tool obtain bound concentration inequality excess risk estimator hand paper extend technology developed order establish new concentration inequality excess risk regression random design heteroscedastic noise appealing since cover regression fixed design homoscedastic gaussian noise assume law design known order perform linearized regression see section strategy follows first remark empirical process interest splits two parts linear process quadratic one prove linear process achieves second order margin condition defined put meaningful conditions quadratic process order handle techniques empirical process theory talagrand type concentration inequalities contraction arguments core approach paper organized follows regression framework well properties linked margin relations described section state main result section proofs deferred section heteroscedastic regression random design setting let sample taking 
values measurable space typically subset assume following relation holds regression function heteroscedastic noise level take closed convex model common distribution set arg min common distribution pairs contrast defined also denote function called regression function onto model indeed denote quadratic norm holds min consider estimator defined arg min empirical measure associated sample want assess concentration quantity called excess risk estimator around single deterministic point end easy see remark section van geer wainwright following representation formula holds excess risk terms empirical process arg min max shown various settings include linearized regression quantity actually concentrates around following point arg min relation pointed projection regression function order prove concentration inequalities excess risk need check following relation also called quadratic curvature condition exists constant variance respect classical relation statistical learning called margin relation consists assuming holds replaced image target relation satisfied regression whenever response variable uniformly bounded assume belongs thus may different condition exists condition deduce condition exists sup conditions deduce image model also uniformly bounded exists sup precisely convenient following proposition shows relation satisfied regression setting whenever response variable bounded model convex uniformly bounded also found proposition proposition model convex conditions hold exists constant furthermore convenient major gain brought proposition classical margin relation bias model quantity implicitly contained excess risk appearing classical margin relation pushed away inequality proposition thus refinement classical notion margin relation stated contrast easy see extended general situations contrast convex regular sense see proposition section completeness proof proposition found section second order quadratic margin condition first notice arguments empirical process interest decomposed linear quadratic part holds contrast expansion around projection regression function onto associate two empirical processes call respectively linear quadratic empirical process precisely interested local maxima max max follows directly show excess risk concentrates around defined rather around point defined arg min holds around relation type second order margin relation introduced proved following lemma proof available section lemma holds also fundamental difference second order margin relation stated require lemma conditions linear part empirical process empirical process origin takes arguments contrasted functions indeed seems latter empirical process second order margin relation hold hard check regression general difficulty indeed forced van geer wainwright section work linearized regression context quite severe restriction distribution design known statistitian contrary main result stated section stated general regression situation distribution design unknown noise level heteroscedastic main result stating new concentration inequality describe required assumptions order state next condition let denote max condition sequence strictly increasing function function strictly convex condition take exists constant sup notice conditions hold use classical symmetrization contraction arguments positive constant depends set max max also able state main result theorem holds hence moreover inequality theorem new concentration inequality related regression random design heteroscedastic noise convex uniformly bounded 
model particular extends results related linearized regression simplified framework regression classical general framework described section proof theorem detailed section following corollary provides generic example entering assumptions theorem related linear aggregation via empirical risk minimization define span linear span generated orthonormal dictionary take unit ball centered projection onto assume sup note inequality relating quadratic norm functions linear span dictionary classical estimation satisfied usual functional bases fourier basis wavelets piecewise polynomials regular partition including histograms see instance particular corollary extends concentration inequality recently obtained excess risk erm linear aggregation problem dictionary hand fourier dictionary proofs proofs related section proof proposition take one hand hand latter inequality corresponds fact convex projection onto scalar product functions nonpositive combining gives result proof lemma inequality derives taking expectation sides concerning proof easily seen function concave indeed take arg max triangular inequality gives concavity deduce function convex implies proofs related section proof theorem prove concentration right arguments type deviations left take set intervals also set holds chosen later last inequality used lemma furthermore setting union bound gives index furthermore holds probability first inequality comes lemma lemma putting previous estimates get probability require using assumptions equivalent require whenever last display true also require finish proof fix particular conditions become constant depending since proof corollary assumption max used two times inequality hence furthermore assumption sup convenient thus apply theorem consequently condition turns condition satisfied whenever following theorem see theorem inequalities direct applications bousquet inequalities deduced klein rio theorem conditions satisfied set sup sup holds using proposition conditions simplify bounds given theorem follows lemma conditions satisfied notations theorem also proof constant defined proposition furthermore using fact get conclusion easy obtain using references arlot bach calibration linear estimators minimal penalties bengio schuurmans lafferty williams culotta editors advances neural information processing systems pages arlot lerasle choice density estimation mach learn appear arlot massart calibration penalties regression mach learn electronic arlot improved penalization february barron massart risk bounds model selection via penalization probab theory related fields bellec tsybakov towards study least squares estimators convex penalty arxiv preprint bellec tsybakov slope meets lasso improved oracle bounds optimality ann appear massart minimal penalties gaussian model selection probab theory related fields boucheron massart wilks phenomenon probab theory related fields baudry maugis michel slope heuristics overview implementation stat bousquet bennett concentration inequality application suprema empirical processes math acad sci paris bickel ritov tsybakov simultaneous analysis lasso dantzig selector ann bellec tsybakov bounds prediction error penalized least squares estimators convex penalty vladimir panov editor modern problems stochastic analysis statistics selected contributions honor valentin konakov springer appear celisse optimal density estimation ann chatterjee new perspective least squares convex constraint koltchinskii oracle inequalities empirical risk minimization sparse recovery 
problems volume lecture notes mathematics springer heidelberg lectures probability summer school held probability summer school ann klein rio concentration around mean maxima empirical processes ann interplay concentration complexity geometry learning theory applications high dimensional data analysis habilitation diriger des recherches december lerasle optimal model selection density estimation stationary data various mixing conditions ann massart concentration inequalities model selection volume lecture notes mathematics springer berlin lectures summer school probability theory held july foreword jean picard muro van geer concentration behavior penalized least squares estimator arxiv preprint appear statistica neerlandica navarro saumard slope heuristics model selection heteroscedastic regression using strongly localized bases esaim probab saumard regular contrast estimation slope heuristics phd thesis rennes october https saumard optimal upper lower bounds true empirical excess risks heteroscedastic regression electron adrien saumard optimality empirical risk minimization linear aggregation bernoulli van geer wainwright concentration regularized empirical risk minimization sankhya | 10 |
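To fix notation for the statements above, the least-squares contrast, the empirical risk minimizer over the convex model M, the excess risk at the projection f* of the regression function onto M, and the quadratic margin lower bound can be written as follows. The display is a reconstruction in standard notation rather than a verbatim extract; the constants and the exact two-sided form of the margin relation are those of the article's proposition and are not reproduced here.

% Reconstruction in standard notation; P_n denotes the empirical measure of the sample.
\[
\gamma(f)(x,y) = \bigl(y - f(x)\bigr)^2, \qquad
\hat f \in \operatorname*{arg\,min}_{f \in M} P_n\,\gamma(f), \qquad
f^* \in \operatorname*{arg\,min}_{f \in M} P\,\gamma(f).
\]
\[
P\,\gamma(f) - P\,\gamma(f^*) \;\ge\; \bigl\| f - f^* \bigr\|_{L_2(P^X)}^{2}
\qquad \text{for every } f \in M,
\]
the inequality being the part of the margin relation that follows directly from the convexity of M together with the characterisation of f* as the L_2(P^X) projection of the regression function onto M.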
geodesic ray bundles buildings mar abstract let building identified davis realisation paper provide visual boundary description geodesic ray bundle geo namely union combinatorial geodesic rays corresponding infinite minimal galleries chamber graph starting pointing towards locally finite hyperbolic show symmetric difference geo geo always finite gives positive answer question huang sabok shinko setting buildings combining results construction bourdon obtain examples hyperbolic groups kazhdan property gromov boundary hyperfinite introduction paper motivated question huang sabok shinko question asking whether proper cocompact hyperbolic space symmetric difference two geodesic ray bundles pointing direction always finite see precise definitions motivated study borel equivalence relations action hyperbolic group gromov boundary authors give positive answer question cat cube complex deduce hyperbolic cubulated group namely acts properly cocompactly cat cube complex gromov boundary hyperfinite induces hyperfinite equivalence relation corollary turns answer question full generality constructed purpose paper give positive answer question hyperbolic locally finite building underline class groups acting properly cocompactly hyperbolic locally finite buildings includes groups kazhdan property thus significantly different class cubulated hyperbolic groups considered see fixed point theorem give precise statement main result classical result davis building realised complete cat metric space viewed subcomplex barycentric subdivision standard geometric realisation let denote set barycenters chambers chamber graph boundary set equivalence classes asymptotic geodesic rays see section precise definitions denote geo union combinatorial geodesic rays barycenter chamber postdoctoral researcher marquis infinite minimal gallery starting pointing towards sense contained tubular neighbourhood geodesic ray towards sets geo called geodesic ray bundles paper give description geodesic ray bundles arbitrary buildings see section proposition building gromov hyperbolic locally finite deduce description following theorem theorem let locally finite hyperbolic building let let symmetric difference geo geo finite immediate consequence theorem theorem deduce following corollary corollary let group acting cocompactly locally finite hyperbolic building assume acts freely chambers natural action gromov boundary hyperfinite see also bourdon constructs family groups property acting cocompactly hyperbolic building groups defined fundamental groups complexes groups standard reference topic follows straightaway form complexes groups involved acts freely set chambers locally finite another example group explicit short presentation also recently appeared particular corollary yields examples hyperbolic groups property whose boundary action hyperfinite corollary exist infinite hyperbolic groups property gromov boundary hyperfinite note group property acts cat cube complex global fixed point particular theorem covers situations covered see also last paragraph introduction acknowledgement would like thank caprace bringing question attention suggesting explore context buildings would also like thank anonymous referee precious comments preliminaries cat gromov hyperbolic spaces standard reference paragraph let complete cat namely complete geodesic metric space every triangle least thin corresponding triangle euclidean space side lengths sense two points distance one another points corresponding respectively euclidean distance given two 
points unique geodesic segment denote geodesic ray based isometry two geodesic rays called asymptotic geodesic ray bundles buildings equivalently identifying image asymptotic bounded hausdorff distance one another resp contained tubular neighbourhood resp recall tubular neighbourhood subset boundary denoted set equivalence classes geodesic rays two geodesic rays equivalent asymptotic say geodesic ray points towards unique geodesic ray starting pointing towards denote space called gromov hyperbolic triangle sense side contained two also called hyperbolic spaces thought fattened versions trees behavior somehow opposite euclidean space cat space proper every closed ball compact cocompact compact subset isom hyperbolic contain subspace isometric euclidean plane notion gromov boundary hyperbolic space context cat spaces coincides boundary defined endowed cone topology see buildings standard reference paragraph let building viewed simplicial complex see chapter let denote set chambers maximal simplices panel codimension simplex two chambers adjacent share common panel gallery two chambers sequence chambers distinct adjacent integer called length gallery minimal length called minimal gallery length denoted dch map dch metric called chamber distance infinite sequence chambers called minimal gallery minimal gallery contained apartment let apartment let distinct adjacent chambers chamber equal chamber distance yields partition set chambers closer subcomplexes underlying chamber sets called roots intersection called wall separating wall delimiting say two subsets separated gallery resp contained apartment said cross wall wall separating resp gallery minimal crosses wall moreover set walls crossed minimal gallery depends independent choice apartment simplicial map called retraction onto centered following properties marquis identity restriction apartment containing isomorphism inverse particular preserves minimal galleries moreover increase distance dch dch set panels denoted star panel denoted set chambers containing panel chamber unique chamber minimising gallery distance called projection denoted following gate property dch dch dch building called locally finite finite set chambers davis realisation building standard reference paragraph chapter see also let building admits cat called davis realisation complete cat space viewed subcomplex barycentric subdivision standard geometric realisation contains barycenter chamber panel sequel often identify davis realisation related notions apartment chamber panel gallery wall realisation viewed closed subspaces set denotes barycenter chamber apartment also set combinatorial path piecewise geodesic path union geodesic segments xci gallery write xci thus combinatorial paths bijection galleries combinatorial geodesic combinatorial path corresponding minimal gallery one defines similarly infinite combinatorial paths combinatorial geodesic rays abbreviated cgr replacing galleries infinite galleries combinatorial geodesic ray combinatorial geodesic ray starting bounded hausdorff distance geodesic ray pointing towards denote cgr set cgr combinatorial geodesic combinatorial geodesic resp ray resp denote combinatorial path obtained concatenation geodesic segment resp geodesic ray contained minimal gallery hence also apartment particular covered boundaries apartments conversely uniqueness geodesic rays implies apartment course combinatorial geodesic ray also contained apartment every apartment retraction induces retraction properties geodesic ray bundles buildings 
ones described moreover equality belongs closed chamber let apartment important properties walls found see also wall intersects geodesic resp geodesic ray one point entirely contains geodesic resp geodesic ray particular convex subset two connected components open corresponding components convex saw combinatorial path xci resp xci contained combinatorial geodesic resp cgr crosses wall note proper cat space locally finite building called hyperbolic hyperbolic sense equipped cat metric equivalently hyperbolic hyperbolic resp apartment readily follows properties retractions onto apartments note moussong gave characterisation hyperbolicity terms type see theorem hyperbolic hji affine coxeter system whenever pair disjoint subsets infinite commute fact need hyperbolic buildings however following lemma assume building hyperbolic constant cgr contained proof proposition combinatorial geodesics cat metric lemma follows theorem also basic useful fact combinatorial geodesic rays lemma let let cgr exists cgr combinatorial geodesic proof reasoning inductively may assume adjacent cgr claim clear otherwise combinatorial path combinatorial geodesic dch let combinatorial geodesic cgr done otherwise combinatorial path combinatorial geodesic dch let combinatorial geodesic claim cgr yielding lemma indeed otherwise combinatorial path combinatorial geodesic hence dch dch dch contradiction combinatorial bordification building section recall notion combinatorial bordification building introduced relate notions introduced section marquis let building recall panel projection map associating chamber unique chamber closest defines injective map endow product topology star discrete set chambers minimal combinatorial bordification defined closure since injective may identify subset thus makes sense say sequence chambers converges reduced single apartment notion convergence transparent converges every wall sequence eventually remains side hand back general one identify apartment subset consisting limits sequences chambers fact see proposition apartment let let sequence chambers converging define combinatorial sector based pointing towards conv conv denotes union minimal galleries indeed depends choice sequence converging contained apartment note also contained example let building type apartments euclidean planes tesselated congruent equilateral triangles apartment bordification consists lines points isolated points see example seen follows let direction sense contained tubular neighbourhood wall cgr contained sequence barycenters chambers converges unique sector shown figure singular contained tubular neighbourhood wall set obtained limit cgr vertices simplicial line infinity see dashed line figure combinatorial sectors line represented figure geodesic ray bundles buildings figure direction see combinatorial sectors look like relate notions introduced paragraph identify davis realisation avoid cumbersome notations also identify chambers barycenters thus also identifies notions minimal resp infinite gallery combinatorial geodesic resp ray apartment let denote set walls containing boundary contains geodesic ray towards also let set apartments next define equivalence relation follows distinct adjacent chambers write apartment containing wall separating belong also write becomes symmetric reflexive relation let transitive closure let subcomplex obtained union chambers note start making useful observations relation lemma let assume exists apartment containing exists apartment containing wall separating belong proof 
implication clear conversely let apartment containing wall separating belong assume contradiction apartment containing wall separating belongs let barycenter common panel definition buildings simplicial isomorphism fixing marquis pointwise since assumption deduce hence contradiction next lemma introduces important terminology notations call cgr cgr straight infinite gallery corresponding contains geodesic ray lemma let following assertions hold let cgr sequence chambers converges denote limit say converges let let cgr cgr contained apartment eventually lie side given wall cgr straight contained every apartment moreover independent choice straight cgr proof since cgr contained apartment readily follows description convergence show eventually lie different sides wall cgrs separated contained tubular neighbourhood must contained tubular neighbourhood claimed let cgr straight let thus also contains since contained wall deduce must contain infinitely many chambers hence also convexity moreover since intersect wall cgr cross wall hence lemma finally cgr straight contained common apartment discussion hence next give alternative description sets proposition let proof let show assume first reasoning inductively length gallery loss generality assuming let cgr straight lemma combinatorial geodesic cgr let apartment containing let cgr straight lemma claim cross wall imply cross wall hence lemma desired indeed separates apartment containing wall belongs increase distance hence geodesic ray bundles buildings contained tubular neighbourhood separates since crossed must separate adjacent chambers containing delimited find exercise apartment containing separates contradicting hypothesis conversely assume let show let cgr cgr straight lemma combinatorial geodesic cgr let apartment containing lemma note walls separating belong otherwise cgrs cross wall would separated wall contradicting assumption hence lemma yielding claim conclude round observations relation following consequences proposition lemma let following assertions hold let apartment containing let walls separating belong let cgr let apartment containing converges walls crossed belong proof implication follows lemma conversely assume proposition straight cgr cgr lemma contained separates also separates hence contradiction readilfy follows get better understanding combinatorial sectors first show given element one choose sequence chambers converging nice controlled way nice mean chosen cgr controlled mean may impose restrictions lemma let straight cgr proof let apartment let sequence chambers converging since space proper sequence geodesic segments subconverges geodesic ray words extracting subsequence may assume contained claim exists finite subset neighbourhood entirely contained one delimited indeed wall intersecting say contains geodesic ray see theorem hence marquis figure singular direction subray hand since locally finite ball radius intersects walls particular walls intersecting whence claim recall wall sequence eventually remains side particular entirely contained associated hence walls separating lie let cgr straight thus lemma know since wall chambers lie side conclude proof lemma desired proposition let cgr starting converging proof let cgr converging prove inclusion show chamber belongs conv every clear conversely let lemmas exists cgr starting converging let conv since replacing portion combinatorial geodesic passing still yields cgr lemma follows next wish prove refinement proposition relating combinatorial bordification visual 
boundary geodesic ray bundles buildings define transversal graph direction graph vertex set adjacent connected edge exist adjacent chambers elements also called chambers define notions galleries chamber distance note lemma form example context example assume consists single apartment singular direction simplicial line dashed line figure stripes obtained convex hull two adjacent walls namely walls direction another description terms cgr proposition let cgr proof inclusion clear conversely cgr lemma hence lemma lemma let cgr sequence eventually constant words proof let apartment containing thus assume contradiction eventually constant thus infinitely many walls crossed see lemma since contained wall intersect wall hand contained let intersects walls let walls intersect combinatorial geodesic contained walls must intersect geodesic segment yielding desired contradiction formulate announced refinement proposition theorem let cgr converging proof inclusion follows proposition converse inclusion proved exactly proof proposition existence cgr converging following proposition marquis given next wish show combinatorial sector minimal direction sense contained every combinatorial sector see proposition end first need precise version lemma improving control cgr converging given lemma let cgr cgr converging proof let cgr passing let resp combinatorial geodesic resp cgr contained let apartment containing let straight cgr converging particular cross wall claim cgr yielding lemma otherwise wall crossed since crossed separates cgr contained since bounded hausdorff distance one another implies hence cross contradiction proposition let proof note second equality holds proposition let lies cgr theorem hence also cgr converging lemma since proposition theorem conversely certainly theorem lemma remains show let apartment containing thus note first contains indeed let cgr straight sso converges lemma conv claimed let cgr passing converging see theorem let show also cgr converging hence theorem desired write combinatorial geodesic cgr lemma cgr converging claim still cgr desired otherwise wall crossed since crossed separates cgr contained since bounded hausdorff distance one another implies cross contradiction conclude section give consequence hyperbolicity building terms sets lemma assume building hyperbolic bounded set chambers particular moreover locally finite finite geodesic ray bundles buildings proof let lemma note exist constants dch see proposition also let closed chamber barycenter contained neighbourhood see fix let claim cgr crosses walls apartment containing hence imply chambers gallery distance hence lemma follow proposition let thus cgr let apartment containing let straight cgr lemma know contained one another assume contradiction combinatorial geodesic crosses walls let thus dch dch contradiction remark note although lemma sufficient purpose hard see using moussong characterisation hyperbolicity coxeter groups see theorem converse also holds building hyperbolic transversal graph bounded geodesic ray bundles buildings throughout section let building identified davis realisation section keep notations introduced sections also fix denote geo ray bundle geo lies cgr description combinatorial sectors provided section yields following description ray bundles proposition geo proof inclusion clear theorem conversely geo converges proposition theorem yielding converse inclusion first establish theorem inside apartment apartment set geoa lies cgr lemma let geoa geo marquis proof inclusion clear converse 
inclusion show lies cgr also lies cgr may take preserves combinatorial geodesic rays increase distance lemma let let geoa geoa geoa proof reasoning inductively dch may assume adjacent let wall separating let geoa let show geoa lemma find cgr going converging write cgr contained let also cgr going let cgr contained finally let combinatorial geodesic claim cgr yielding lemma otherwise wall crossed separate cgr hence cgr cgr cross separates cgr contained since bounded hausdorff distance deduce hence cross contradiction lemma assume hyperbolic let let symmetric difference geoa geoa finite proof reasoning inductively dch loss generality assuming adjacent assume contradiction infinite sequence geoa geoa choose cgr passing note denotes combinatorial geodesic contained disjoint geoa geoa geoa geoa lemma contradiction since locally finite sequence subconverges cgr disjoint geoa hand since hyperbolic lemma yields cgr lemma implies geoa large enough contradiction turn proof theorem building rest section assume hyperbolic locally finite finite lemma lemma let let apartment containing let infinite subset infinite proof since finite infinite subset let lemma know geoa finite hence proposition infinite subset contained proposition implies since desired lemma let geo finite geodesic ray bundles buildings proof assume contradiction exists infinite sequence proposition note amounts say nonempty readily follows proposition together lemma let apartment containing lemma know geoa finite may thus assume taking subsequence geoa hence proposition may assume extracting subsequence since proposition proposition implies yielding desired contradiction geo proposition theorem let assume hyperbolic locally finite geo geo finite proof lemma know finite proposition prove geo finite assume contradiction infinite sequence geo lemma extracting subsequence may assume contradicts lemma appendix transversal buildings let building let section construction transversal building direction given however pointed referee premises construction incorrect correct construction one given called transversal graph direction although need fact proof theorem one show transversal graph indeed chamber graph building therefore deserves name transversal building direction since fact used papers devote present appendix proof follow approach buildings opposed simplicial approach standard reference topic let type let view reflection group acting let reflection subgroup generated reflections across walls classical result deodhar coxeter group moreover polyhedral structure induced walls identified coxeter complex precisely fundamental chamber chamber whose walls associated reflections coincides intersection containing whose wall belong fundamental chamber coxeter complex associated coxeter system set reflections across walls delimit lemma group depends choice apartment proof lemma proof lemma remains valid context marquis lemma let let exists contained apartment proof let let cgr straight lemma cgr combinatorial geodesic let apartment containing claim follows replacing theorem transversal graph direction graph chambers building type proof define weyl distance function follows let lemma write set chambers coxeter complex weyl distance function complex see note definition independent choice apartment see lemma proof chambers simplify notations also simply write check building type remains check axioms definition axioms clearly satisfied satisfied building apartment check let show length function respect generating set choose adjacent chambers let also 
apartment see lemma let wall containing since separated wall apartment containing wall belongs image retraction fixes geodesic ray pointwise hand exercise apartment containing delimited contains hence hand letting denote unique chamber contained hence case desired geodesic ray bundles buildings references peter abramenko kenneth brown buildings graduate texts mathematics vol springer new york theory applications martin bridson haefliger metric spaces curvature grundlehren der mathematischen wissenschaften fundamental principles mathematical sciences vol berlin marc bourdon sur les immeubles fuchsiens leur type ergodic theory dynam systems caprace presentation infinite hyperbolic kazhdan group preprint caprace jean combinatorial compactifications buildings ann inst fourier grenoble michael davis buildings cat geometry cohomology group theory durham london math soc lecture note vol cambridge univ press cambridge vinay deodhar note subgroups generated reflections coxeter groups arch math basel jingyin huang marcin sabok forte shinko hyperfiniteness boundary actions cubulated hyperbolic groups preprint gabor moussong hyperbolic coxeter groups thesis ohio state university noskov asymptotic behavior word metrics coxeter groups doc math graham niblo lawrence reeves groups acting cat cube complexes geom topol guennadi noskov vinberg strong tits alternative subgroups coxeter groups lie theory jacek infinite groups generated involutions kazhdan property forum math nicholas touikan geodesic ray bundles hyperbolic groups preprint ucl belgium address | 4 |
jan product lines sven christian armin christian department informatics mathematics university passau apel groesslinger lengauer school computer science university magdeburg kaestner technical report number department informatics mathematics university passau germany june product lines sven christian armin christian department informatics mathematics university passau apel groesslinger lengauer school computer science university magdeburg kaestner abstract product line family programs share common set features feature implements stakeholder requirement represents design decision configuration option added program involves introduction new structures classes methods refinement existing ones extending methods decomposition programs generated solely basis user selection features composition corresponding feature code key challenge product line engineering guarantee correctness entire product line member programs generated different combinations features number valid feature combinations grows progressively number features feasible check individual programs feasible approach type system check entire code base product line developed type system basis formal model language demonstrate type system ensures every valid program product line type system complete introduction programming fop aims modularization programs terms features feature implements stakeholder requirement typically increment program functionality contemporary programming languages tools ahead xak caesarj featurehouse provide variety mechanisms support specification modularization composition features key idea feature implemented distinct code unit called feature module added base program introduces new structures classes methods refines existing ones extending methods program decomposed features called henceforth typically decomposition orthogonal functional decomposition multitude modularization composition mechanisms developed order allow programmers decompose program along multiple dimensions languages tools provide significant subset mechanisms beside decomposition programs features concept feature useful distinguishing different related programs thus forming software product line typically programs common domain share set features also differ features example suppose email client mobile devices supports protocols imap another client supports mime ssl encryption decomposition two programs features imap mime ssl programs share code feature since mobile devices limited resources unnecessary features removed decomposition programs generated solely basis user selection features composition corresponding feature modules course combinations features legal result correct programs feature model describes features composed combinations programs valid consists ordered set features set constraints feature combinations example email client may different rendering engines html text mozilla engine safari engine one time set feature modules along wit feature model called product line important question correctness programs particular product lines general guaranteed first problem contemporary languages tools usually involve code generation step composition code transformed representation previous work addressed problem modeling mechanisms directly formal syntax semantics core language called feature featherweight java ffj type system ffj ensures composition feature modules paper address second problem correctness entire product line guaranteed naive approach would valid programs product line using type checker like one ffj however approach 
scale already implemented optional features variant generated every person planet noticing problem czarnecki pietroszek thaker suggested development type system checks entire code base product line instead individual programs scenario type checker must analyze feature modules product line basis feature model show information type checker ensure every valid program variant generated specifically make following contributions provide condensed version ffj many respects elegant concise predecessor develop formal type system uses information features constraints feature combinations order product line without generating every program prove correctness proving every program generated product line long feature selection satisfies constraints product line furthermore prove completeness proving typedness programs product line guarantees product line welltyped whole offer implementation ffj including proposed type system downloaded evaluation experiments language typing mechanisms work differs many respects previous related work see section comprehensive discussion notably thaker implemented type system product lines conducted several case studies take work formalization correctness completeness proof furthermore work differs many respects previous work modeling related programming mechanisms notably model mechanisms directly ffj syntax semantics without transformation representation stay close syntax contemporary languages tools see section begin brief introduction ffj programs ffj section introduce language ffj originally ffj designed featureoriented programs extend ffj section support product lines support representation multiple alternative program variants time overview ffj ffj lightweight language inspired featherweight java aimed minimality design ffj ffj provides basic constructs like classes fields methods inheritance new constructs capturing core mechanisms programming far ffj type system supported development product lines feature modules written ffj interpreted single program change section ffj program consists set classes refinements refinement extends class introduced previously class refinement associated feature say feature introduces class applies refinement class technically mapping features belong established different ways extending language modules representing features grouping classes refinements belong feature packages directories like class declares superclass may class object refinements defined using keyword refines semantics refinement applied class refinement members added merged member refined class way refinement add new fields methods class override existing methods declared overrides left side figure show excerpt code basic email client called mail lient top feature called ssl bottom ffj feature ssl adds class ssl lines email client code base refines class trans order encrypt outgoing messages lines effect refinement trans adds new field key line overrides method send class trans lines feature mail lient class msg extends object string serialize class trans extends object bool send msg emailclient object feature ssl class ssl extends object trans trans bool send msg refines class trans key key overrides bool send msg return new ssl ssl trans refines trans inherits class refinement msg refinement chain ssl feature fig email client supporting ssl encryption typically programmer applies multiple refinements class composing sequence features called refinement chain refinement applied immediately another refinement chain called predecessor order refinements refinement 
chain determined composition order right side figure depict refinement inheritance relationships email example fields unique within scope class inheritance hierarchy refinement chain refinement subclass allowed add field already defined predecessor refinement chain superclass example refinement trans would allowed add field key since key introduced refinement feature ssl already methods different refinement subclass may add new methods overloading prohibited override existing methods order distinguish two cases ffj expects programmer declare whether method overrides existing method using modifier overrides example refinement trans feature ssl overrides method send introduced feature ail subclasses similar distinction method introduction overriding allows type system check whether introduced method inadvertently replaces occludes existing method name whether every overriding method proper method overridden apart modifier overrides method ffj similar method method body expression prefixed return sequence statements due functional nature ffj furthermore overloading methods introducing methods equal names different argument types allowed ffj shown figure refinement chains grow left right inheritance hierarchies top bottom looking method body ffj traverses combined inheritance refinement hierarchy object selects body method declaration method refinement compatible kind lookup necessary since model features directly ffj instead generating evaluating code first ffj calculus looks method declaration refinement chain object class starting last refinement back class declaration first body matching method declaration returned method found class refinement chain declaration methods superclass superclass superclass etc searched specific refinement class declaration field lookup works similarly except entire inheritance refinement hierarchy searched fields accumulated list figure illustrate processes method body field lookup schematically object ref ref ref ref ref ref classn ref ref ref fig order method body field lookup ffj syntax ffj detail let explain notational conventions abbreviate lists obvious ways shorthand shorthand shorthand shorthand shorthand note depending context blanks commas semicolons separate elements list context make clear separator meant symbol denotes empty list lists field declarations method declarations parameter names must contain duplicates use metavariables class names field names method names feature names denoted greek letters figure depict syntax ffj extended ffj program consists set class refinement declarations class declaration declares class name inherits superclass consists list fields list method refinement declaration consists list fields list method declarations terms variable field access method invocation new object creation cast class declarations class extends refinement declarations refines class method declarations overrides return values new object creation fig syntax ffj extended bnf method expects list arguments declares body returns single expression type using modifier overrides method declares intends override another method name signature want distinguish methods override others methods override others call former method introductions latter method refinements finally five forms terms variable field access method invocation object creation type cast taken without change values object creations whose arguments values well ffj class table declarations classes refinements looked via class table compiler fills class table parser pass contrast class 
refinement declarations identified names additionally names enclosing features example order retrieve declaration class trans introduced feature ail example figure write order retrieve refinement class trans applied feature ssl write call qualified type class feature ffj class refinement declarations unique respect qualified types property ensured following sanity conditions feature allowed concept class constructor unnecessary ffj omittance simplifies syntax semantics type rules significantly without loss generality introduce class refinement twice inside single feature module refine class feature introduced common sanity conditions languages tools impose sanity conditions class table inheritance relation class refines class every qualified type dom feature base plays role features object plays classes symbol denoting empty feature lookups terminate dom every class name appearing anywhere dom least one feature inheritance relation contains cycles incl refinement ffj information refinement chain class retrieved using refinement table compiler fills refinement table parser pass yields list features either introduce refine class leftmost element result list feature introduces class left right features listed refine class order composition example figure trans yields list mail lient ssl single sanity condition refinement table every type dom features introduce refine class figure show two functions navigation refinement chain rely function last returns class name qualified type refers feature applies final refinement class class refined refers feature introduces class function pred returns qualified type another qualified type refers feature introduces refines class immediate predecessor refinement chain predecessor returned navigating along refinement chain last pred fig refinement ffj pred subtyping ffj figure show subtype relation ffj subtype relation defined one rule reflexivity transitivity one rule relating type class type immediate superclass necessary define subtyping qualified types classes refinements declare superclasses single declaration per class subtyping class extends fig subtyping ffj auxiliary definitions ffj figure show auxiliary definitions ffj function fields searches refinement chain right left accumulates fields list using comma concatenation operator predecessor refinement chain reached class declaration refinement chain superclass searched see figure reached empty list returned denoted function mbody looks specific refined body method body consists formal parameters method actual term representing content search like fields first refinement chain searched right left superclasses refinement chains searched illustrated figure note overrides means given method declaration may may modifier way able define uniform rules method introduction method refinement function mtype yields signature declaration method lookup like mbody predicate introduce used check whether class introduced multiple features whether field method introduced multiple times class precisely states case classes whether introduced feature whether method field introduced predecessors superclasses evaluate check case classes whether yields class declaration feature different case methods whether mtype yields signature case fields whether defined list fields returned fields predicate refine states whether given refinement proper class declared previously refinement chain predicate override states whether method introduced predecessor whether previous declaration given signature fields field lookup fields 
class extends fields fields last refines class fields fields pred mbody method body lookup overrides return class extends mbody defined class extends mbody mbody last overrides return refines class mbody defined refines class mbody mbody pred mtype method type lookup return class extends defined class extends mtype mtype last mtype return refines class defined refines class mtype mtype pred mtype introduce valid class introduction class introduce introduce valid field introduction fields introduce introduce valid method introduction dom mtype introduce refine valid class refinement class refine override valid method overriding mtype override fig auxiliary definitions ffj evaluation ffj programs ffj program consists class table term evaluated using evaluation rules shown figure evaluation terminates value term form new reached note use direct semantics class refinement field method lookup mechanisms incorporate refinements class searched fields methods alternative discussed section would flattening semantics merge class preprocessing step refinements single declaration fields last new roj mbody last nvk new new new new ast ield nvk ecv nvk new new ewa ast fig evaluation ffj programs using subtype relation auxiliary functions fields mbody evaluation ffj fairly simple first three rules interesting remaining rules congruence rules rule roj describes projection field instantiated class projected field evaluates value passed argument instantiation function fields used look fields given class receives last argument since want search entire refinement chain class right left figure rule roj nvk evaluates method invocation replacing invocation method body formal parameters method substituted body refinement table relevant evaluation arguments invocation value method invoked substituted function mbody called last refinement class order search refinement chain right left return specific method body figure rule ast evaluates upcast simply removing cast course premise must cast really upcast downcast incorrect cast type checking ffj programs type relation ffj consists type rules terms rules classes refinements methods shown figures term typing fields last mtype last fields last new ield nvk ast stupid warning ast ast fig term typing ffj term typing rules term typing judgment triple consisting typing context term type see figure rule checks whether free variable contained typing context rule ield checks whether field access specifically checks whether declared type whether type equals type entire term rule nvk checks whether method invocation end checks whether arguments invocation subtypes types method typing introduce last class extends return class extends override last overrides return refines class introduce pred return refines class override pred overrides return class typing introduce last introduce class extends refinement typing introduce pred refine refines class fig rules ffj formal parameters whether return type equals type entire term rule checks whether object creation new checks whether arguments instantiation subtypes types fields whether equals type entire term rules ast ast ast check whether casts rule checked whether type term cast subtype supertype unrelated type type whether equals type entire rules figure show ffj rules classes refinements methods typing judgments classes refinements binary relations class refinement declaration feature written rule classes checks whether methods context class qualified type moreover checks whether none fields class declaration introduced multiple 
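To see the evaluation rules and the lookup functions working together, here is a small worked reduction, continuing the hypothetical SSL sketch given earlier (so the field key, the helper encrypt, and the bodies are assumptions of that sketch, not taken from the original figures; the exact notation of mbody is also only approximated).

    fields(last(Trans))       =  Key key                                   (accumulated over the whole refinement chain)
    mbody(send, last(Trans))  =  ( msg ,  this.encrypt(msg, this.key) )    (found in the SSL refinement, which is searched first)

    new Trans(k).send(m)
        -->  new Trans(k).encrypt(m, new Trans(k).key)     (method invocation: substitute m for msg and the receiver for this)
        -->  new Trans(k).encrypt(m, k)                     (field projection: key is the only field of Trans in this sketch)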
times combined inheritance refinement hierarchy whether feature introduces class using introduce rule refinements analogous except rule checks whether corresponding class introduced using refine typing judgment methods binary relation method declaration qualified type declares method written four different rules methods top bottom figure override another method declared classes override another method declared classes override another method declared refinements override another method declared refinements four rules check whether type method body subtype declared return type method declaration methods introduced checked whether method identical name introduced superclass rule predecessor refinement chain rule methods override methods checked whether method identical name signature exists superclass rule predecessor refinement chain rule ffj programs finally ffj program consisting term class table refinement table term checked using ffj term typing rules classes refinements stored class table checked using ffj rules class refinement tables ensured corresponding sanity conditions rule ast needed small step semantics ffj order able formulate prove type preservation property ffj programs whose type derivation contains rule premise stupid warning appears derivation considered type soundness ffj type system ffj sound prove using standard theorems preservation progress heorem preservation heorem progress suppose term includes new subterm fields last includes new subterm mbody last provide proofs two theorems appendix product lines ffjpl section goal define type system product lines type system checks whether valid combinations features yield programs scenario features question may optional mutually exclusive different combinations possible form different programs since may plenty valid combinations type checking individually usually feasible order provide type system product lines need information combinations features valid features mandatory optional mutually exclusive need adapt subtype type rules ffj check lead terms type system guarantees every program derived product line ffj program ffj together type system checking featureoriented product lines henceforth called ffjpl overview product lines product line made set feature modules feature model feature modules contains features implementation feature model describes feature modules combined contrast featureoriented programs section typically features optional mutually exclusive also relations disjunction negation implication possible broken mandatory optional mutually exclusive features generally derivation step user selects valid subset features subsequently program derived case derivation means assembling corresponding feature modules given set features figure illustrate process program derivation typically wide variety programs derived product line challenge define type system guarantees basis feature modules feature model valid programs program derived product line sure evaluate using standard evaluation rules ffj see section product line programs feature modules program program derivation user feature selection program program feature model fig process deriving programs product line managing variability feature models aim developing product line manage variability set programs developed particular domain facilitate reuse feature implementations among programs domain feature model captures variability explicitly implicitly defining ordered set features product line legal feature combinations feature order essential field method lookup 
see section different approaches product line engineering use different representations feature models define legal feature combinations simplest approach enumerate legal feature combinations practice commonly different flavors tree structures used sometimes combination additional propositional constraints define legal combinations illustrated figure purpose actual representation legal feature combinations relevant ffjpl use feature model check whether feature specific program elements present certain circumstances design decision ffjpl abstract concrete representation underlying feature model rather provide interface feature model benefits need struggle details formalization feature models well understood researchers outside scope paper able support different kinds feature model representations tree structures grammars propositional formulas interface feature model simply set functions predicates use ask questions like may may feature present together feature program element present every variant also feature present program element always reachable feature challenges type checking let explain challenges type checking extending email example shown figure suppose basic email client refined process incoming text messages feature ext lines optionally enabled process html messages using either mozilla rendering engine feature ozilla lines safari rendering engine feature afari lines end features ozilla afari override method render class display line order invoke respective rendering engines field renderer lines instead text printing function line feature ext refines class trans unit receive msg msg return something new display msg class display unit render msg msg display message text format feature ozilla refines class display mozillarenderer renderer overrides unit render msg render html message using mozilla engine feature afari refines class display safarirenderer renderer overrides unit render msg render html message using safari engine fig email client using mozilla safari rendering engines first thing observe features ozilla afari rely class display method render introduced feature ext order guarantee every derived program type system checks whether display render always reachable features ozilla afari whether every program variant contains ozilla afari also feature ext present second thing observe features ozilla afari add field renderer display lines different types ffj program feature modules would program field renderer introduced twice however figure intended represent single program product line features ozilla afari mutually exclusive defined product line feature model stated earlier type system take fact account let summarize key challenges type checking product lines global class table contains classes refinements features product line even features optional mutually exclusive present derived programs single class introduced multiple features long features mutually exclusive also case multiple introductions methods fields may even different types presence types fields methods depends presence features introduce reference elements feature type field projection method invocation valid referenced element always reachable referring feature every variant contains referring feature like references extension program element class method refinement valid extended program element always reachable feature applies refinement refinements classes methods necessarily form linear refinement chains may alternative refinements single class method exclude one another explained collecting 
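Spelled out for reference, the extended email client of the figure above reads as follows; the feature names Text, Mozilla, and Safari are those that appear in the feature model shown later, and the method bodies are elided as comments because only their intent is given in the figure.

    feature Text
    refines class Trans {
      Unit receive(Msg msg) { /* process an incoming message and hand it to a new Display for rendering */ }
    }
    class Display extends Object {
      Unit render(Msg msg) { /* display the message in text format */ }
    }

    feature Mozilla
    refines class Display {
      MozillaRenderer renderer;
      overrides Unit render(Msg msg) { /* render an HTML message using the Mozilla engine, via the field renderer */ }
    }

    feature Safari
    refines class Display {
      SafariRenderer renderer;
      overrides Unit render(Msg msg) { /* render an HTML message using the Safari engine, via the field renderer */ }
    }

Note that the two refinements both introduce a field named renderer, with different types, and both override render; this is acceptable only because Mozilla and Safari are declared mutually exclusive in the feature model.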
information feature modules type checking ffjpl compiler collects various information feature modules product line actual type checking performed compiler fills three tables information class table introduction table refinement table class table ffjpl like one ffj satisfy sanity conditions except may multiple declarations class field method long defined mutually exclusive features may cycles inheritance hierarchy cycles set classes reachable given feature introduction table maps type list mutually exclusive features introduce type features returned listed order prescribed feature model example figure call display would return list consisting single feature ext likewise introduction table maps field method names combination declaring classes features example call would return list ozilla afari sanity conditions introduction table straightforward every type dom features introduce class every field contained class dom features introduce field every method contained class dom features introduce method much like ffj ffjpl refinement table call yields list features either introduce refine class different introduction table returns features introduce class features returned listed order prescribed feature model sanity condition ffjpl refinement table identical one ffj namely every type dom features introduce refine class feature model interface said ffjpl abstract concrete representation feature model define instead interface consisting proper functions predicates two kinds questions want ask feature model explain next first would like know features never present together features sometimes present together features always present together end define two predicates never sometimes function always predicate never indicates feature never reachable context valid program variant features feature present together predicate sometimes indicates feature sometimes present features present variants features feature present together variants present together function always used evaluate whether feature always present context either alone within group alternative features three cases feature always present context always returns feature always feature always present would together certain group mutually exclusive features one group always present always returns features group always feature present neither alone together mutually exclusive features always returns empty list always predicates function provide information need know features relationships used especially field method lookup second would like know whether specific program element always present given set features present necessary ensure references program elements always valid dangling need two sources information first need know features introduce program element question determined using introduction table second need know combinations features legal determined using feature model field renderer example introduction table would yield features ozilla afari feature model follows ozilla fari mutually exclusive never ozilla afari happen none two features present invalidate reference field type system needs know situation end introduce predicate validref expresses program element always reachable set features example validref holds type always reachable context validref holds field class always reachable context validref holds method class always reachable context applying validref list program elements means conjunction predicates every list element taken finally write validref mean program element always reachable context subset features 
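The feature-model interface described above can be pictured as the following Java sketch. It is only an illustration of the operations the type system relies on; the prototype discussed below realizes them in Haskell on top of a SAT solver, and the exact signatures, the Feature and Element types, and the Set/List choices are assumptions made here for readability.

    import java.util.List;
    import java.util.Set;

    interface Feature { /* marker type for feature names */ }
    interface Element { /* marker type for class, field, and method declarations */ }

    // Hypothetical rendering of the feature-model interface used by the FFJ_PL type system.
    interface FeatureModel {
      // Feature f can never be present when all features of the context are present.
      boolean never(Feature f, Set<Feature> context);

      // There is at least one valid variant in which f and the whole context are present together.
      boolean sometimes(Feature f, Set<Feature> context);

      // Returns the singleton list containing f if f itself is always present in the given context;
      // returns f's group of mutually exclusive features if some member of that group is always
      // present in the context; returns the empty list if neither guarantee holds.
      List<Feature> always(Feature f, Set<Feature> context);

      // The given program element (a class, field, or method declaration) is reachable
      // in every valid variant that contains all features of the context.
      boolean validRef(Element element, Set<Feature> context);
    }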
product line prototype implemented functions predicates using sat solver reasons propositional formulas representing constraints legal feature combinations see section proposed batory czarnecki pietroszek refinement ffjpl figure show functions last pred navigation along refinement chain two functions identical ones ffj figure however ffjpl may alternative declarations class refinement chain refinement declarations may even precede class declarations long declaring features mutually exclusive let illustrate refinement ffjpl means example shown figure class introduced features feature refines class introduced feature feature refines class introduced feature feature never present feature present vice versa call would return list call last would return qualified type call pred would return qualified type navigating along refinement chain last pred pred fig refinement ffjpl mutually exclusive fig multiple alternative refinements subtyping ffjpl subtype relation complicated ffjpl ffj reason class may multiple declarations different features declaring possibly different superclasses illustrated figure checking whether class subtype another class need check whether subtype relation holds alternative inheritance paths may reached given context example foobar subtype barfoo barfoo superclass foobar every program variant since always foobar subtype foo bar cases program variant exists foobar indirect subclass class question foo bar barfoo barfoo mutually exclusive one always present together foobar fig multiple inheritance chains presence alternative features figure show subtype relation ffjpl subtype relation read follows context type subtype type type subtype type every variant also features present first rule figure covers reflexivity terminates recursion inheritance hierarchy second rule states class subtype class least one declaration always present tested validref every declarations may present together tested sometimes declares type supertype subtype context must direct indirect supertype variants features present additionally supertype must always reachable context traversing inheritance hierarchy step context extended feature introduces current class question extended interestingly second rule subsumes two ffj rules transitivity direct superclass declaration declarations may declare directly superclass declarations may declare another superclass turn subtype rule must applicable cases simultaneously subtyping validref class extends sometimes validref fig subtyping ffjpl applied example figure foobar foobar reflexivity rule also foobar barfoo foobar reachable feature every feature introduces foobar namely contains corresponding class declaration declares barfoo foobar superclass barfoo always reachable however foobar foo foobar bar foobar immediate superclass barfoo always subtype foo respectively bar auxiliary definitions ffjpl extending ffj toward ffjpl makes necessary add modify auxiliary functions complex changes concern field method lookup mechanisms field lookup auxiliary function fields collects fields class including fields superclasses refinements since alternative class refinement declarations may introduce alternative fields field identical alternative types fields may return different fields different feature selections since want valid variants field returns multiple field lists list lists cover possible feature selections inner list contains field declarations collected alternative path combined inheritance refinement hierarchy legibility separate inner lists using delimiter 
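For concreteness, the situation sketched in the two figures referenced above (a class introduced twice with different superclasses in mutually exclusive features, together with alternative field introductions) can be written as the following FFJ_PL fragment. The feature names A and B and the field declarations are hypothetical; they only serve to make the FooBar, Foo, Bar, BarFoo example and the subsequent field lookup discussion concrete.

    class BarFoo extends Object { }           // introduced by a feature that is present in every relevant variant

    feature A                                  // A and B are mutually exclusive in the feature model
    class Foo    extends BarFoo { }
    class FooBar extends Foo    { Msg a; }     // hypothetical field

    feature B
    class Bar    extends BarFoo { }
    class FooBar extends Bar    { Msg b; }     // hypothetical field

In a context in which one of A and B must be present, FooBar is a subtype of BarFoo along both alternative inheritance paths, whereas FooBar is a subtype of Foo, or of Bar, only in some variants; likewise, a field lookup on FooBar yields separate inner lists for the two alternatives.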
example looking fields class foobar context feature figure yields list features mutually exclusive one present variant also present readability use metavariables referring inner field lists abbreviate list lists fields analogously shorthand fnm function fields receives qualified type context selected features want possible field lists context empty want field lists subset feature selections fields referenced term specific feature module use context specify one features know must selected basic idea ffjpl field lookup traverse combined inheritance refinement hierarchy much like ffj four situations handled differently field lookup returns empty list reaches field lookup ignores fields introduced features never present given context field lookup collects fields introduced features always present given context references fields always valid field lookup collects fields introduced features may present given context always present case special marker added fields question guarantee reference field safe given type system decide based marker whether situation may provoke error type system ignores marker looking duplicate fields reports error type checking object creations special situation occurs field lookup identifies group alternative features group feature optional excludes every feature group least one feature group always present given context field lookup identifies group alternative features split result list list containing fields feature group fields original list fields field lookup fields never fields fields pred sometimes always class extends fields append fields last sometimes always refines class fields append fields pred sometimes always class extends fields append fields last sometimes always refines class fields append fields pred sometimes always fields fields fields fig field lookup ffjpl order distinguish different cases use predicates functions defined section especially never sometimes always definition note marker generated type checking include syntax ffj tion fields shown figure follows intuition described reached recursion terminates feature never reachable given context fields ignores feature resumes previous one feature mandatory always present given context fields question added alternative result list created rule feature optional fields question annotated marker added alternative result list feature part alternative group features immediately decide proceed split result list multiple lists means multiple recursive invocations fields add one alternative features context passed invocation fields mtype method type lookup mtype sometimes class extends mtype mtype pred mtype last sometimes refines class mtype mtype pred defined never class extends mtype mtype pred mtype last defined never refines class mtype mtype pred fig method lookup ffjpl method type lookup like field lookup method lookup take alternative definitions methods account lookup mechanism simpler fields order signatures found combined inheritance refinement hierarchy irrelevant type checking hence function mtype yields simple list signatures given method name example calling mtype context figure yields list function append adds inner list list field lists given field implementation straightforward omitted brevity figure show definition function mtype empty list returned class sometimes reachable introduces method question signature added result list possible predecessors refinement chain using pred possible subclasses searched using last likewise refinement sometimes reachable introduces method name 
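Tying the lookup functions back to the email example: under the declarations of Text, Mozilla, and Safari sketched earlier, and assuming Display declares no further members, the method type lookup would behave roughly as follows. This is an illustration of the mechanism, not a result stated in the original figures, and the arrow notation for signatures is ours.

    mtype(render, last(Display), {Text})
        =  Msg -> Unit ,  Msg -> Unit ,  Msg -> Unit
           // one signature from the introduction of render in feature Text and one from each of the
           // alternative overrides in Mozilla and Safari; both refinements are only 'sometimes'
           // reachable in this context, and duplicate signatures are not removed

A field lookup on Display in the same context would, analogously, split its result into inner lists, one per alternative, so that the two declarations of renderer (of types MozillaRenderer and SafariRenderer) never appear in the same inner list.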
searched signature added result list possible predecessors refinement chain searched using pred class refinement declare corresponding method class never reachable search proceeds possible superclasses predecessors current definition function mtype returns possibly many duplicate signatures straightforward optimization would remove duplicates using result list omitted simplicity introduce valid class introduction class extends sometimes introduce introduce valid field introduction fields introduce introduce valid method introduction mtype introduce refine valid class refinement validref refine valid method overriding override validref mtype override fig valid introduction refinement overriding ffjpl valid introduction refinement overriding figure show predicates checking validity introduction refinement overriding ffjpl predicate introduce indicates whether class qualified type introduced feature may present context likewise introduce holds method field introduced qualified type including possible predecessors superclasses may present given context end checks either whether mtype yields empty list whether contained every inner list returned fields given refinement predicate refine indicates whether proper class always reachable given context declared previously refinement chain write validref order state declaration class introduced set features subset features product line namely features precede feature introduces class predicate override indicates whether declaration method introduced always reachable feature introduced feature refines whether every possible declaration predecessor signature type relation ffjpl type relation ffjpl consists type rules terms rules classes refinements methods shown figure figure term typing rules term typing judgment ffjpl quadruple consisting typing context term list types feature contains term see figure term multiple types product line may multiple declarations classes fields methods list contains possible types term rule standard refer feature model yields list consisting type variable question rule ieldpl checks whether field access every possible variant also present based possible types term field accessed rule checks whether always reachable using validref note key mechanism ffjpl type system ensures field accessed definitely present every valid program variant field access occurs without generating variants furthermore possible fields possible types assembled nested list denotes declaration field call fields last shorthand fields last fields last individual result lists concatenated finally list possible types cnm field becomes list types overall field access note result list may contain duplicates could eliminated optimization purposes rule nvkpl checks whether method invocation every possible variant also present based possible types term method invoked rule checks whether always reachable term typing validref fields last cnm validref mtype last bnm validref fields last new ieldpl validref validref stupid warning fig term typing ffjpl nvkpl ewpl astpl astpl method typing introduce last validref class extends return validref override last class extends overrides return validref introduce pred refines class return override pred validref refines class overrides return class typing validref introduce last validref introduce class extends refinement typing validref introduce pred refine refines class fig rules ffjpl using validref field access check essential ensures generated programs methods invoked also present furthermore possible signatures 
possible types assembled nested list checked possible lists argument types method invocation subtypes possible lists parameter types method implies lengths two lists must equal method invocation multiple types assembled list contains result types method determined mtype field access duplicates eliminated optimization purposes rule ewpl checks whether object creation new every possible variant also present specifically checks whether declaration class always reachable furthermore possible field combinations assembled nested list checked whether possible combinations argument types passed object creation subtypes types possible field combinations implies number arguments types must equal number field types fields result list must annotated marker since optional fields may present every variant references may become invalid see field lookup object creation single type rules astpl astpl check whether casts every possible variant also present done checking whether type term cast always reachable whether type subtype supertype unrelated type possible types term single rule astpl downcasts list possible types may contain subtypes simultaneously type list leads stupid case flag stupid warning cast yields list containing single type rules figure show rules classes refinements methods like ffj typing judgment classes refinements binary relation class refinement declaration feature rule classes checks whether methods context class qualified type moreover checks whether class declaration unique scope enclosing feature whether feature may present together feature introduces class identical name using introduce furthermore checks whether superclass field types always reachable using validref finally checks whether none fields class declaration introduced using introduce rule refinements analogous except rule checks least one class declaration reachable refined introduced refinement using refine typing judgment methods binary relation method declaration qualified type declares method like ffj four different rules methods top bottom figure override another method declared classes treatment semiformal simplifies rule override another method declared classes override another method declared refinements override another method declared refinements four rules check whether possible types method body subtypes declared return type method whether argument types always reachable enclosing feature using validref methods introduced checked using introduce whether method identical name introduced possible superclass rule possible predecessor refinement chain rule methods override methods checked using override whether method identical name signature exists possible superclass rule possible predecessor refinement chain rule ffjpl product lines ffjpl product line consisting term class table introduction table refinement table term checked using ffjpl term typing rules classes refinements stored class table checked using ffjpl rules class introduction refinement tables ensured corresponding sanity conditions type safety ffjpl type checking ffjpl based information contained class table introduction table refinement table feature model first three filled compiler parsed code base product line feature model supplied directly user tool compiler determines class refinement declarations belong features classes refinements class table checked using rules turn use rules methods term typing rules method bodies several rules use introduction refinement tables order map types fields methods features feature model navigate along 
refinement chains check presence program elements type safety mean context product line product line never evaluated rather different programs derived evaluated hence property interested programs derived welltyped product line turn furthermore would like sure ffjpl product lines ffj programs derived formulate two properties two theorems correctness ffjpl completeness ffjpl correctness heorem correctness ffjpl given ffjpl product line including term class introduction refinement tables feature model every program derived valid feature selection ffj program figure derive valid function derive collects feature modules product line according user selection feature modules removed derived program derivation step class table contains classes refinements stemming selected feature modules define valid feature selection list features whose combination contradict constraints implied feature model proof idea show type derivation tree ffjpl product line superimposition multiple type derivation slices usual type derivation proceeds root initial type rule checks term classes refinements class table leaves type rules premise type derivation tree time term multiple types method different alternative return types caused multiple mutually exclusive method declarations type derivation splits multiple branches branch refer positions type derivation tree split multiple subtrees order type check multiple mutually exclusive term definitions subtree root type derivation tree along branches toward leaf type derivation slice slice corresponds type derivation program let illustrate concept type derivation slice simplified example suppose application arbitrary type rule term somewhere type derivation term multiple types due different alternative definitions subterms simplicity assume single subterm like case field access overall term multiple types depending types rule easily extended multiple subterms adding predicate per subterm type rule ensures possible variants basis variants subterm furthermore type rule checks whether predicate predicate holds variant subterm possible types written predicate possible types overall term follow way possible types subterm predicate validref used check whether referenced elements types present valid variants including different combinations optional features general case written follows predicate predicate always predicate different uses predicate premise ffjpl type rule correspond branches type derivation denote alternative definitions subterms hence premise ffjpl type rule conjunction different premises cover different alternative definitions subterms term proof strategy follows assuming ffjpl type system ensures slice valid ffj type derivation see lemma appendix valid feature selection corresponds single slice since alternative features removed see lemma appendix program corresponds valid feature selection guaranteed note multiple valid feature selections may correspond slice presence optional features follows every valid feature selection derive wellformed ffj program since type derivation valid whose evaluation satisfies properties progress preservation see appendix appendix describe proof theorem detail completeness heorem completeness ffjpl given ffjpl product line including term class introduction refinement tables feature model given valid feature selections yield ffj programs according theorem product line according rules ffjpl valid derive proof idea examine three basic cases generalize subsequently mandatory features mandatory features except single optional feature 
mandatory features except two mutually exclusive features cases formulated combinations three basic cases end divide possible relations features three disjoint sets feature reachable another feature variants feature reachable another feature variants two features mutually exclusive three possible relations prove three basic cases isolation subsequently construct general case phrased combination three basic cases description general case reduction finish proof theorem appendix describe proof theorem detail implementation discussion implemented ffj ffjpl haskell including program evaluation type checking product lines ffjpl compiler expects set feature modules feature model together represent product line feature module represented directory files found inside feature module directory assigned belong enclosing feature ffjpl compiler stores information type checking file may contain multiple classes class refinements figure show snapshot test environment based eclipse haskell use eclipse interpret compile ffj ffjpl type systems interpreters specifically figure shows directory structure email system file contains user feature selection feature model product line fig snapshot test environment haskell implementation feature model product line represented propositional formula following approach batory czarnecki pietroszek propositional formulas effective way representing relationships features specifying feature implies presence absence features machine checking whether feature selection valid example implemented predicate sometimes follows sometimes satisfiable feature model propositional formula feature variables satisfiable satisfiability solver likewise implemented predicate always basis logical reasoning propositional formulas always satisfiable detailed explanation propositional formulas relate feature models feature selections refer interest work batory figure show textual specification feature model email system passed directly ffjpl compiler http features emailclient imap mime ssl text mozilla safari model emailclient implies imap imap implies emailclient implies emailclient mime implies emailclient ssl implies emailclient text implies imap mozilla implies imap safari implies imap mozilla implies safari safari implies mozilla fig feature model email client product line first section features file representing feature model defines ordered set names features product line second section model defines constraints features presence derived programs example email client supports either protocols imap furthermore every feature requires presence base feature mail lient feature ext requires either presence imap ozilla afari finally feature ozilla requires absence feature afari vice versa basis feature modules feature model ffjpl type system checks entire product line identifies valid program variants still contain type errors sat solver used check whether elements never sometimes always reachable error found product line rejected program guaranteed derived basis user feature selection program evaluated using standard evaluation rules ffj also implemented haskell contrast previous work type checking product lines type system provides detailed error messages possible due finegrained checks level individual term typing rules example field access succeeds program variants fact reported user error message point erroneous field access previously proposed type systems compose code features product line extract single propositional formula checked satisfiability formula satisfiable type error occurred 
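Two pieces of the implementation discussion above are worth restating in a readable form. First, the encoding of the predicate sometimes reads roughly as: sometimes(f, {g1, ..., gn}) holds iff the formula FM ∧ f ∧ g1 ∧ ... ∧ gn is satisfiable, where FM is the propositional formula representing the feature model. Second, the feature-model file of the email client shown above can be read as follows; the last two constraints are rendered with negations, which the surrounding text (Mozilla requires the absence of Safari and vice versa) indicates but which is not literally visible in the listing as reproduced, and at least one further protocol feature mentioned in the prose (either of the protocols alongside IMAP) is not reconstructed here.

    features
      EmailClient  IMAP  MIME  SSL  Text  Mozilla  Safari

    model
      EmailClient implies IMAP
      IMAP        implies EmailClient
      MIME        implies EmailClient
      SSL         implies EmailClient
      Text        implies IMAP
      Mozilla     implies IMAP
      Safari      implies IMAP
      Mozilla     implies not Safari
      Safari      implies not Mozilla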
possible identify location caused error least without information see section detailed discussion related approaches made several tests experiments haskell implementation however tests feasible two reasons first previous work already demonstrated product lines require proper type systems type checking entire product lines feasible useful second like ffj core language java programs compiled relative simplicity suited formal definition proof language properties case type system correctness completeness core language never suited development programs examples test programs similar size complexity examples pierce type checking test programs required acceptable amounts time order magnitude milliseconds per product line claim able handle product lines ffjpl rather would require expansion type system full java including support features provided ahead featurehouse enticing goal one future especially java informal language specification pages work lays foundation implementing type systems provides evidence core mechanisms type sound type systems product lines implemented correctly completely still would like make predictions scalability approach novelty type system incorporates alternative features consequently alternative definitions classes fields methods leads type derivation tree possibly multiple branches denoting alternative term types hence performing type derivation product line many alternative features may consume significant amount computation time memory seems overhead price allowing alternative implementation program parts nevertheless approach minimizes overhead caused alternative features compared naive approach naive approach possible programs derived type checked subsequently approach type check entire code base product line branch type derivation terms really multiple alternative types level entire program variants done naive approach experience product lines shows usually many alternative features product line mostly optional features example berkeley product line edition lines code feature modules two pairs alternative graph product line feature modules three pairs alternative observation alternative features encountered alter types multiple definitions fields methods equal types example gpl berkeley contain alternative definitions methods identical signatures type checking product lines approach type derivation would almost branches naive approach still many program variants exist due optional features hence approach preferable example product line features variants constant approach type system would check feature modules branches type derivation solving simple sat problems see naive approach type system would check least feature modules commonly product lines higher degree variability even variants benefit approach becomes even significant believe benefit make difference real world product line engineering point almost typing rules contain calls sat solver results possibly many invocations sat solver type checking time determining satisfiability propositional formula general problem however shown structures propositional formulas occurring software product lines simple enough scale satisfiability solving thousands features furthermore experiments observed many calls sat solver redundant easy see thinking type checking product lines presence single types members checked many type rules implemented caching mechanism decrease number calls sat solver minimum finally implementation haskell helped lot evaluation correctness type rules serve researchers reproduce evaluate work 
experiment language mechanisms implementations ffj ffjpl along test programs downloaded related work divide discussions related work two parts implementation formal models type systems programs feature oriented product lines programs ffj inspired several languages tools notably featurehouse prehofer java extension key aim separate implementation software artifacts classes methods definition features classes refinements annotated declared belong feature statement program text defines explicitly connection code features instead mapping software artifacts features established via containment hierarchies basically directories containing software artifacts advantage approach feature implementation include beside classes form java files also supporting documents documentation form html files grammar specifications form javacc files build scripts deployment descriptors form xml files end feature composition merges classes refinements also artifacts html xml files respective refinements another class programming languages provide mechanisms definition extension classes class hierarchies includes contextl scala difference languages provide explicit language constructs aggregating classes belong feature family classes classboxes layers implies software artifacts included feature however ffj still models subset languages particular class refinement similarly related work formalization key concepts underlying featureoriented programming disassociated concept feature level http code especially calculi mixins traits family polymorphism virtual classes types open classes dependent classes nested inheritance either support refinement single classes expect classes form semantically coherent unit belong feature located physical module defined host programming language example virtual class definition inner class enclosing object classbox package aggregates set related classes thus ffj differs previous approaches relies contextual information collected compiler features composition order mapping code features different line research aims reasoning features calculus gdeep closely related ffj since provides type system languages idea recursive process merging software artifacts composing hierarchically structured features similar different host languages java xml calculus describes formally feature composition performed type constraints satisfied contrast ffj aspire languageindependent although key concepts certainly used different languages advantage ffj type system used check whether terms host language java violate principles feature orientation whether methods refer classes added features due language independence gdeep enough information perform checks product lines work type checking product lines motivated work thaker suggested development type system featureoriented product lines check individual programs individual feature implementations implemented incomplete type system number case studies real product lines found numerous hidden errors using type rules nevertheless implementation type system sense described informally provide correctness completeness proof type system inspired work able provide formalization proof type safety parallel line work delaware developed formal model language called lightweight feature java lfj type system product lines work also influenced practical work thaker surprising closest however numerous differences first formal model language based lightweight java featherweight java expressive also complex decided simpler variant omitting constructors mutable state second 
delaware model featureoriented mechanisms class method refinements directly semantics type rules language instead introduce transformation step lfj code compiled code flatten refinement chains single classes proceeding likewise would generate first program ffj product line type check program consists possible features product line subsequently refrained transformation step order model semantics mechanisms directly terms dedicated field method lookup mechanisms well special rules method class refinements lagorio shown flattening semantics direct semantics equivalent advantage direct semantics allows type checking error reporting finer grain lfj feature modules composed single propositional formula generated tested satisfiability formula satisfiable difficult identify precisely point failure ffjpl individual type rules consult feature model point directly point failure advantage approach leaves open feature composition performed currently feature composition modeled static process done compilation approach becomes possible model dynamic feature composition run time making class feature tables feature model dynamic allowing change computation lfj possible hutchins shown feature composition performed interpreter partial evaluation used parts composition static however delaware developed machinechecked model type system formalized theorem prover coq proof haskell implementation ffj ffjpl calculi tested thoroughly even previously work thaker czarnecki presented automatic verification procedure ensuring uml model template instances generated valid feature selection type check product lines consist java programs uml models use ocl object constraint language constraints express implement type system model composition sense aim similar ffjpl limited model artifacts although proposed generalize work programming languages implemented tool called cide allows developer decompose software system features via annotations contrast languages tools link code features established via annotations user selects set features code annotated features using background colors present selection removed developed formal calculus set type rules ensure welltyped programs generated valid feature selection example method declaration removed remaining code must contain calls method cide type rules related type rules ffjpl far mutually exclusive features supported cide sense ffjpl cide represent two sides coin former aims composition feature modules latter annotation code conclusion product line imposes severe challenges type checking naive approach checking individual programs product line feasible combinatorial explosion program variants hence practical option check entire code base product line including features based information feature combinations valid ensure possible derive valid program variant contains type errors developed type system based formal model featureoriented language called feature featherweight java ffj distinguishing property work modeled semantics type rules core mechanisms directly without compiling code representation java code direct semantics allows reason core mechanisms terms generated code advantage error reporting time feature composition may vary compile time run time demonstrated proved based valid feature selection type system ensures every program product line type system complete implementation ffj including type system product lines indicates feasibility approach serve testbed experimenting mechanisms acknowledgment work funded part german research foundation dfg project number 
references ancona lagorio zucca java extension mixins acm transactions programming languages systems toplas anfurrutia trujillo refining xml artifacts proceedings international conference web engineering icwe volume lncs pages apel towards development ubiquitous middleware product lines software engineering middleware volume lncs pages springerverlag apel hutchins overview gdeep calculus technical report department informatics mathematics university passau apel janda trujillo model superimposition software product lines proceedings international conference model transformation icmt volume lncs pages apel lengauer feature composition functional programming proceedings international conference software composition volume lncs pages apel lengauer overview feature featherweight java technical report department informatics mathematics university passau apel lengauer feature featherweight java calculus featureoriented programming stepwise refinement proceedings international conference generative programming component engineering gpce pages acm press apel lengauer featurehouse automated software composition proceedings international conference software engineering icse pages ieee press apel leich saake symbiosis featureoriented programming proceedings international conference generative programming component engineering gpce volume lncs pages apel leich saake aspectual feature modules ieee transactions software engineering tse batory feature models grammars propositional formulas proceedings international software product line conference splc volume lncs pages batory sarvela rauschmayer scaling refinement ieee transactions software engineering tse bergel ducasse nierstrasz controlling scope change java proceedings international conference programming systems languages applications oopsla pages acm press bertot casteran interactive theorem proving program development coq art calculus inductive constructions texts theoretical computer science eatcs series bono patel shmatikov core calculus classes mixins proceedings european conference programming ecoop volume lncs pages bracha cook inheritance proceedings european conference programming ecoop international conference objectoriented programming systems languages applications oopsla pages acm press clarke drossopoulou noble wrigstad tribe simple virtual class calculus proceedings international conference software development aosd pages acm press clements northrop software product lines practices patterns addisonwesley clifton millstein leavens chambers multijava design rationale compiler implementation applications acm transactions programming languages systems toplas czarnecki eisenecker generative programming methods tools applications czarnecki pietroszek verifying model templates wellformedness ocl constraints proceedings international conference generative programming component engineering gpce pages acm press delaware cook batory model safe composition proceedings international workshop foundations languages foal pages acm press ducasse nierstrasz wuyts black traits mechanism finegrained reuse acm transactions programming languages systems toplas ernst ostermann cook virtual class calculus proceedings international symposium principles programming languages popl pages acm press flatt krishnamurthi felleisen classes mixins proceedings international symposium principles programming languages popl pages acm press gasiunas mezini ostermann dependent classes proceedings international conference programming systems languages applications oopsla 
pages acm press gosling joy steele bracha java language specification java series edition hirschfeld costanza nierstrasz programming journal object technology jot hutchins eliminating distinctions class using prototypes model virtual classes proceedings international conference programming systems languages applications oopsla pages acm press hutchins pure subtype systems type theory extensible software phd thesis school informatics university edinburgh igarashi pierce wadler featherweight java minimal core calculus java acm transactions programming languages systems toplas igarashi saito viroli lightweight family polymorphism proceedings asian symposium programming languages systems aplas volume lncs pages kamina tamai mcjava design implementation java proceedings asian symposium programming languages systems aplas volume lncs pages kang cohen hess novak peterson domain analysis foda feasibility study technical report software engineering institute carnegie mellon university apel software product lines formal approach proceedings international conference automated software engineering ase pages ieee press apel batory case study implementing features using aspectj proceedings international software product line conference splc pages ieee press apel kuhlemann granularity software product lines proceedings international conference software engineering icse pages acm press apel trujillo kuhlemann batory guaranteeing syntactic correctness product line variants approach proceedings international conference objects models components patterns tools europe volume lnbi pages lagorio servetto zucca featherweight jigsaw minimal core calculus modular composition classes proceedings european conference objectoriented programming ecoop lncs liquori spiwack feathertrait modest extension featherweight java acm transactions programming languages systems toplas batory standard problem evaluating methodologies proceedings international conference generative componentbased software engineering gcse volume lncs pages batory cook evaluating support features advanced modularization technologies proceedings european conference objectoriented programming ecoop volume lncs pages batory lengauer disciplined approach aspect composition proceedings international symposium partial evaluation semanticsbased program manipulation pepm pages acm press madsen virtual classes powerful mechanism objectoriented programming proceedings international conference programming systems languages applications oopsla pages acm press masuhara kiczales modeling crosscutting mechanisms proceedings european conference programming ecoop volume lncs pages mendonca wasowski czarnecki analysis feature models easy proceedings international software product line conference splc software engineering institute carnegie mellon university mezini ostermann variability management programming aspects proceedings international symposium foundations software engineering fse pages acm press murphy lai walker robillard separating features source code exploratory study proceedings international conference software engineering icse pages ieee press nystrom chong myers scalable extensibility via nested inheritance proceedings international conference programming systems languages applications oopsla pages acm press odersky cremet zenger nominal theory objects dependent types proceedings european conference programming ecoop volume lncs pages odersky zenger scalable component abstractions proceedings international conference programming systems languages 
applications oopsla pages acm press ostermann dynamically composable collaborations delegation layers proceedings european conference programming ecoop volume lncs pages pierce types programming languages mit press prehofer programming fresh look objects proceedings european conference programming ecoop volume lncs pages reenskaug andersen berre hurlen landmark lehne nordhagen oftedal skaar stenslet oorass seamless support creation maintenance systems journal programming joop siegmund sunkle apel leich saake sql carte toward data management datenbanksysteme business technologie und web fachtagung des datenbanken und informationssysteme volume lni pages gesellschaft informatik siegmund saake apel code generation support static dynamic composition software product lines proceedings international conference generative programming component engineering gpce pages acm press siegmund schirmeier sincero apel leich spinczyk saake data management solutions embedded systems proceedings edbt workshop software engineering data management setmdm pages acm press siegmund heidenreich apel saake bridging gap variability client application database schema datenbanksysteme business technologie und web fachtagung des datenbanken und informationssysteme volume lni pages gesellschaft informatik smaragdakis batory mixin layers implementation technique refinements designs acm transactions software engineering methodology tosem sewell parkinson java module system core design semantic definition proceedings international conference programming systems languages applications oopsla pages acm press tarr ossher harrison sutton degrees separation multidimensional separation concerns proceedings international conference software engineering icse pages ieee press thaker batory kitchin cook safe composition product lines proceedings international conference generative programming component engineering gpce pages acm press vanhilst notkin using role components implement designs proceedings international conference programming systems languages applications oopsla pages acm press wright felleisen syntactic approach type soundness information computation type soundness proof ffj giving main proof state proof required lemmas emma mtype last mtype last proof straightforward induction derivation two cases first method defined declaration refinement class mtype last mtype last class extends follows definition mtype searches refinement chain right left declared refinement chain second defined declaration refinement class mtype last also mtype last class extends case covered rules methods use predicate override ensure properly overridden signatures overridden overriding declaration equal introduced twice overloading allowed ffj emma term substitution preserves typing proof induction derivation ase result trivial since hand since letting finishes case ase ield fields last induction hypothesis easy check fields last fields last therefore ield fact refinements class may add new fields cause problems contains fields including refinements add ase nvk mtype last induction hypothesis lemma mtype last moreover transitivity therefore nvk key subclasses refinements may override methods rules methods ensure method type altered overloading ffj ase new fields last induction hypothesis transitivity therefore rule new although refinements class may add new fields rule ensures arguments object creation match overall fields including refinements number types number arguments equals number fields function fields returns ase ast induction 
hypothesis transitivity yields ast ase ast note abbreviation means occurrences variables term substituted corresponsing terms induction hypothesis ast ast respectively stupid warning ast ase ast induction hypothesis means ffj class one superclass either contradicts induction hypothesis stupid warning ast emma weakening proof straightforward induction proof ffj similar proof emma mtype last mbody last proof induction derivation mbody last base case defined specific refinement easy since defined last class table implies must derived rules methods induction step also straightforward defined last mbody searches refinement chain right left found superclass refinement chain searched two subcases first defined declaration refinement case similar base case second defined superclass one refinements case class table implies must derived wellformedness rules methods finishes case note lemma holds method refinements change types arguments result method overloading allowed points always class introduced refined heorem preservation proof induction derivation case analysis final rule ase roj new fields last shape see final rule derivation must ield premise new similarly last rule derivation new must premises particular finishes case since ase nvk new mbody last new final rules derivation must nvk premises new mtype last lemma lemma lemma new transitivity obtain letting completes case ase ast new new proof new must end ast since ending tsc ast ast would contradict assumption premises ast give new finishing case cases congruence rules easy show case ast ase ast three subcases according last typing rule used ubcase ast induction hypothesis transitivity therefore ast additional stupid warning ubcase ast induction hypothesis ast ast without additional stupid warning hand stupid warning ast ubcase ast induction hypothesis also therefore stupid warning since therefore stupid war ning ast additional stupid warning subcase analogous case ast proof lemma heorem progress suppose term includes new subterm fields last includes new subterm mbody last proof new subterm subterm easy check fields last appears fact refinements may add fields defined already invalidate conclusion note every field class including superclasses refinements must proper argument similarly new subterm also easy show mbody last fact mtype last conclusion holds ffj since method refinement must signature method refined overloading allowed heorem type soundness ffj normal form either value term containing new proof immediate theorem nothing changes proof theorem ffj compared type soundness proof ffjpl section provide proof sketches theorems correctness ffjpl completeness ffjpl formalization would desirable stopped point often case formal systems formal precision legibility decided development proof strategies best fit purposes correctness heorem correctness ffjpl given ffjpl product line including term class introduction refinement tables feature model every program derived valid feature selection ffj program figure derive valid proof strategy follows assuming ffjpl type system ensures slice valid ffj type derivation lemma valid feature selection corresponds single slice lemma follows corresponding program prove theorem develop two required lemmas cover two assumptions proof strategy emma given ffjpl product line every slice product line type derivation corresponds set valid type derivation ffj proof proof sketch given ffjpl product line corresponding type derivation consists possibly multiple slices basic case easy simple derivation without 
branches due mutually exclusive features optional features may present case term single type one would also determined ffj furthermore ffjpl guarantees referenced types methods fields present valid variants using predicate validref let illustrate rule ieldpl rules analogous validref fields last cnm ieldpl basic case branches type derivation thus term single type reason fields returns simple list fields contains declaration field finally ieldpl checks whether declaration present valid variants using validref hence basic case ffjpl derivation ends rule ieldpl equivalent set corresponding ffj derivations contain alternative optional features thus single type fields returns simple list fields contains declaration declaration present reason ffjpl derivation without mutually exclusive features single slice corresponds multiple ffj derivations ffjpl derivation may contain optional features whose different combinations correspond different ffj derivations using predicate validref type rules ffjpl ensure possible combinations optional features welltyped case multiple slices ffjpl derivation term may multiple types type rules ffjpl make sure every possible shape given term possible type term leads branch derivation tree premise ieldpl checks whether possible shapes given term taking conjunction branches derivation hence ieldpl successful individual branch holds slice corresponds ffj program ensuring presence optional features relevant subterms referenced elements present valid variants slice covers set ffj derivations correspond different combinations optional features like basic case example field projection subterm multiple types types fields yields possible combinations fields declared variants types checked whether type subterm combination fields contains proper declaration field different types become possible types overall field projection term like basic case checked whether every possible type present valid variants using validref slice corresponds valid ffj derivation whole set derivations covering different combinations optional features emma given ffjpl product line valid feature selection corresponds single slice corresponding type derivation proof proof sketch definition valid feature selection contain mutually exclusive features considering single valid feature selection term single type type derivation overall product line contains branches corresponding alternative types terms successive removal mutually exclusive features removes branches single branch remains consequently valid feature selection corresponds single slice proof proof sketch theorem correctness ffjpl fact ffjpl type system ensures slice valid ffj type derivation lemma valid feature selection corresponds single slice lemma implies program corresponds valid feature selection completeness heorem completeness ffjpl given ffjpl product line including term class introduction refinement tables feature model given valid feature selections yield ffj programs according theorem product line according rules ffjpl valid derive proof proof sketch theorem completeness ffjpl three basic cases mandatory features mandatory features except single optional feature mandatory features except two mutually exclusive features proving theorem first basic case trivial since mandatory features exist single ffj program derived product line ffj program product line elements always reachable term single type fact type rules ffjpl ffj become equivalent case second basic case two ffj programs derived product line one including one excluding optional 
feature difference two programs content optional feature feature add new classes refine existing classes new methods fields refine existing methods overriding two programs overall product line well since reachability checks succeed every type rule ffjpl otherwise least one two programs would since case reachability checks difference ffjpl ffj type rules first case term single type since mutually exclusive features fact two programs implies elements reachable type derivations two ffj programs thus reachability checks ffjpl derivation succeed every case product line question third basic case two ffj programs derived product line one including first alternative including second alternative feature question difference two programs one hand program elements one feature introduces present hand alternative definitions similar elements like two alternative definitions single class first kind difference already covered second basic case alternative definitions program element second kind difference context enclosing ffj programs ffjpl lead two new branches derivation tree handled separately conjunction premises must hold since corresponding ffj type rule element succeeds ffj programs conjunction ffjpl type rule always holds product line question finally remains show cases combinations mandatory optional alternative features reduced combinations three basic cases proves theorem end divide possible relations features three disjoint sets feature reachable another feature variants feature reachable another feature variants two features mutually exclusive three possible relations construct general case reduced combination three basic cases assume feature mandatory respect set features optional respect set features alternative set features use arrows illustrate three basic cases pairwise relation element list reduced arrow diagram created every feature product line reason three kinds relations orthogonal relations relevant type checking hence general case covers possible relations features combinations features description general case reduction finish proof theorem ffjpl type system complete | 6 |
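In standard Featherweight-Java-style notation (typing judgments $\Gamma \vdash e : C$, subtyping $<:$, and the lookup functions fields and mbody taken along the refinement chain via last), the substitution, weakening, preservation, and progress statements sketched above are usually stated as follows; this is a hedged reconstruction under assumed notation, not the authors' exact wording.

\textbf{Substitution.} If $\Gamma,\ \bar{x}:\bar{B} \vdash e : D$ and $\Gamma \vdash \bar{d} : \bar{A}$ with $\bar{A} <: \bar{B}$, then $\Gamma \vdash [\bar{d}/\bar{x}]\,e : C$ for some $C <: D$.

\textbf{Weakening.} If $\Gamma \vdash e : C$, then $\Gamma,\ x:B \vdash e : C$.

\textbf{Preservation.} If $\Gamma \vdash e : C$ and $e \rightarrow e'$, then $\Gamma \vdash e' : C'$ for some $C' <: C$.

\textbf{Progress.} If $e$ is well typed and contains a subterm $(\mathtt{new}\ C(\bar{e})).f_i$, then $f_i \in \mathit{fields}(\mathit{last}\ C)$; if $e$ contains a subterm $(\mathtt{new}\ C(\bar{e})).m(\bar{d})$, then $\mathit{mbody}(m,\ \mathit{last}\ C)$ is defined.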
aug spanning simplicial complexes multigraphs imran ahmed shahid muhmood abstract multigraph nonsimple graph permitted multiple edges edges end nodes introduce concept spanning simplicial complexes multigraphs provides generalization spanning simplicial complexes associated simple graphs give first characterization spanning trees multigraph edges including multiple edges within outside cycle length determine facet ideal spanning simplicial complex primary decomposition euler characteristic topological homotopic invariant classify surfaces finally device formula euler characteristic spanning simplicial complex key words multigraph spanning simplicial complex euler characteristic mathematics subject classification primary secondary introduction let multigraph vertex set spanning tree multigraph subtree contains every vertex represent collection spanning trees multigraph facets spanning simplicial complex exactly edge set possible spanning trees multigraph therefore spanning simplicial complex multigraph defined hfk gives generalization spanning simplicial complex associated simple graph spanning simplicial complex simple connected finite graph firstly introduced anwar raza kashif many authors discussed algebraic combinatorial properties spanning simplicial complexes various classes simple connected finite graphs see instance let simplicial complex dimension denote number simplicial complex euler characteristic given ahmed muhmood topological homotopic invariant classify surfaces see multigraph connected graph edges including multiple edges within outside cycle length aim give algebraic topological characterizations spanning simplicial complex lemma give characterization spanning trees multigraph edges including multiple edges cycle length proposition determine facet ideal spanning simplicial complex primary decomposition theorem give formula euler characteristic spanning simplicial complex basic setup simplicial complex collection subsets satisfying following properties every subset belong including empty set elements called faces dimension face defined written dim number vertices vertices edges dimensional faces respectively whereas dim maximal faces inclusion said facets dimension denoted dim defined dim max dim set facets simplicial complex said pure facets dimension subset said vertex cover intersection every said minimal vertex cover proper subset vertex cover definition let multigraph vertex set spanning tree multigraph subtree contains every vertex definition let multigraph vertex set edgeset let possible spanning trees define simplicial complex facets exactly elements call spanning simplicial complex given hfk spanning simplicial complexes multigraphs definition multigraph connected graph edges including multiple edges within outside cycle length let simplicial complex dimension chain complex given free abelian group rank boundary homomorphism defined course ker groups simplicial icycles simplicial respectively therefore rank rank rank one easily see rank due exact sequence moreover rank rank rank therefore characteristic expressed pdthe euler rank rank rank rank changing index summation last sum using fact rank rank get rank rank rank rank rank thus euler characteristic given rank betti number see topological characterizations let multigraph edges including multiple edges within outside cycle length fix labeling edge set follows eiti multiple edges edge cycle single edges cycle ejtj multiple edges edge outside cycle moreover single edges appeared outside cycle ahmed muhmood give 
first characterization lemma let multigraph edges including multiple edges cycle length edge set given twiw subset twiw belong ewiw appeared twiw proof cutting method spanning trees obtained removing exactly edges multiple edge addition edge resulting cycle need removed therefore spanning trees form twiw ewiw appeared twiw following result give primary decomposition facet ideal proposition let spanning simplicial complex unir cyclic multigraph edges including multiple edges within outside cycle length xiti xiti xbtb xjtj number multiple edges appeared edge cycle number multiple edges appeared outside cycle proof let facet ideal spanning simplicial complex proposition minimal prime ideals facet ideal correspondence minimal vertex covers simplicial complex therefore order find primary decomposition sufficient find minimal vertex facet ideal covers edge cycle multigraph belong multiple edge therefore clear definition minimal vertex cover minimal vertex cover moreover spanning tree obtained removing exactly edges multiple edge spanning simplicial complexes multigraphs addition edge resulting cycle illustrate result following cases case atleast one multiple edge appeared cycle remove one complete multiple edge one single edge cycle get spanning tree therefore xiti minimal vertex cover spanning simplicial complex intersection spanning trees moreover two single edges removed cycle get spanning tree consequently minimal vertex cover intersection spanning trees case atleast two multiple edges appeared cycle two complete multiple edges removed cycle get spanning tree consequently xiti xbtb minimal vertex cover intersection spanning trees case atleast one multiple edge appeared outside cycle one complete multiple edge outside cycle removed get spanning tree xjtj minimal vertex cover intersection spanning trees completes proof give formula euler characteristic theorem let spanning simplicial complex multigraph edges including multiple edges cycle length dim euler characteristic given number multiple edges appeared within outside cycle respectively proof let edge set multigraph edges including multiple edges cycle length number multiple edges appeared within outside cycle respectively ahmed muhmood dimension one easily see facet twiw see lemma definition number subsets elements containing cycle multiple edges number subsets containing cycle containing multiple edge within cycle subsets containing cycle multiple edges outside cycle containing multiple edge within cycle subsets containing cycle multiple edges outside cycle containing multiple edge within cycle continuing similar manner number subsets containing cycle two edges multiple edge outside cycle containing multiple edge within cycle given choices two edges multiple edge outside cycle therefore obtain number subsets elements containing cycle possible choices two edges multiple edge outside cycle containing multiple edges within cycle use inclusion exclusion principal obtain number subsets containing cycle containing multiple edges number subsets elements containing cycle containing multiple edges within cycle number subsets elements containing cycle multiple edges outside cycle spanning simplicial complexes multigraphs containing multiple edges within cycle number subsets elements containing cycle multiple edges outside cycle containing multiple edges within cycle number subsets elements containing cycle two edges multiple edge cycle containing multiple edges within cycle therefore compute number subsets elements number subsets elements 
containing cycle containing multiple edges number subsets elements containing multiple edges number subsets elements containing multiple edges number subsets elements containing two edges multiple edge ahmed muhmood figure example let edge set cyclic multigraph edges including multiple edges cycle length shown figure method obtain definition number subsets elements containing cycle multiple edges since subsets containing one element implies subsets containing two elements containing cycle multiple edges know spanning trees facets spanning simplicial complex therefore thus compute euler characteristic using theorem observe dimension substituting values theorem alternatively compute compute betti numbers facet ideal given spanning simplicial complexes multigraphs consider chain complex ker homology groups given therefore betti number given rank rank ker rank compute rank nullity matrix order boundary homomorphism expressed boundary homomorphism written using matlab compute rank nullity rank nullity therefore betti numbers given rank ker rank ker rank rank ker rank alternatively euler characteristic given references anwar raza kashif spanning simplicial complexes graphs algebra colloquium faridi facet ideal simplicial complex manuscripta harary graph theory reading hatcher algebraic topology cambridge university press kashif anwar raza algebraic study spanning simplicial complex graphs ars combinatoria pan spanning simplicial complexes graphs common vertex international electronic journal algebra ahmed muhmood rotman introduction algebraic topology new york villarreal monomial algebras dekker new york zhu shi spanning simplicial complexes graphs common edge international electronic journal algebra comsats institute information technology lahore pakistan address drimranahmed comsats institute information technology lahore pakistan address shahid nankana | 0 |
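The Euler characteristic computation above uses two standard identities whose formulas do not survive the extraction; in the usual notation, with $f_i$ the number of $i$-dimensional faces of $\Delta$, $C_i$ the simplicial chain groups, $Z_i$ the cycles, $B_i$ the boundaries, and $b_i = \operatorname{rank} H_i(\Delta)$ the Betti numbers, they read as follows (standard facts, stated here as a hedged reconstruction rather than the paper's exact formulas):

\chi(\Delta) \;=\; \sum_{i=0}^{\dim \Delta} (-1)^{i} f_i
\qquad \text{and} \qquad
\chi(\Delta) \;=\; \sum_{i=0}^{\dim \Delta} (-1)^{i} b_i ,

where the second identity follows from $f_i = \operatorname{rank} C_i = \operatorname{rank} Z_i + \operatorname{rank} B_{i-1}$ and $b_i = \operatorname{rank} Z_i - \operatorname{rank} B_i$, so the boundary ranks cancel in the alternating sum. The paper's closed formula for $\chi$ of the spanning simplicial complex of a multigraph, in terms of the numbers of multiple edges inside and outside the cycle, involves binomial coefficients that cannot be recovered from the extracted text and is therefore not reproduced here.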
dec conjecture abed abedelfatah abstract conjectured eisenbud green harris homogeneous ideal containing regular sequence degrees deg homogeneous ideal containing hilbert function paper prove conjecture splits linear factors introduction let polynomial ring field ring graded deg proved graded ideal exists lex ideal hilbert function every hilbert function attained lex ideal let monomial ideal natural ask result clements proved every hilbert function xann attained lex ideal case result obtained earlier katona kruskal another generalizations macaulay theorem found let regular sequence deg deg well known result says hilbert function xann see exercise natural ask happens homogeneous ideal containing regular sequence fixed degrees question bring conjecture denoted egh conjecture egh homogeneous ideal containing regular sequence degrees deg hilbert function ideal containing xann original conjecture see conjecture equivalent conjecture case see proposition egh conjecture known true cases conjecture proven case caviglia maclagan proven egh conjecture true richert says egh conjecture degree holds result published herzog popescu proved field characteristic zero minimally generated generic quadratic forms egh conjecture degree holds cooper done work geometric direction studies egh conjecture cases key words phrases hilbert function egh conjecture regular sequence let regular sequence splits linear factors let since must independent follows map defined graded isomorphism hilbert function preserved map may assume section give background information egh conjecture section study dimension growth ideals containing regular sequence section prove egh conjecture splits linear factors answers question chen asked egh conjecture holds see example background proper ideal called graded homogeneous system homogeneous generators let homogeneous ideal hilbert function sequence dimk dimk simplicity sometimes denote dimension space instead dimk space denote space spanned throughout paper subset denote mon set monomials let mon support polynomial set supp mon monomial called define lex order mon setting lex either deg deg deg deg first index recall definitions lex ideal ideal definition graded ideal called monomial system monomial generators monomial ideal called lex whenever lex monomials degree monomial ideal exists lex ideal xann example ideal ideal lex ideal theorem obtain graded ideal containing xann ideal hilbert function unique macaulay expansion let sqq eisenbud respect set green harris made following conjecture conjecture graded ideal contains regular sequence maximal length conjecture conjecture true ideal contains squares variables follows theorem see following proposition prove equivalence conjecture egh conjecture degree first need following definition definition let monomial ideal monomial vector space called lexsegment generated biggest monomials respect lex order example lex ideal lexsegment lexsegment space monomial ideal lexsegment see proposition proposition let regular sequence degrees following equivalent graded ideal containing graded ideal containing graded ideal containing proof first prove implies let graded ideal containing follows graded ideal containing theorem follows prove implies let graded ideal containing set every let space spanned first monomials lex order let need show ideal let proposition obtain hypothesis obtain implies since lexsegments follows implies graded ideal clearly following lemma helps study egh conjecture component homogeneous ideal lemma let graded ideal containing 
regular sequence degrees deg following equivalent exists graded ideal containing xann every exists graded ideal containing xann proof clearly implies show implies every exists ideal containing xann theorem may assume ideal let component since dim dim dim follows thus ideal clearly use following lemma regular sequences see chapter lemma let sequence homogeneous polynomials deg regular sequence xann regular sequence following condition holds regular sequence permutation regular sequence dimension growth ideals containing reducible regular sequence let regular sequence set let vector space spanned monomials vector space spanned section prove dim dim also compute dim space generated biggest lex order monomials matrix denote submatrix formed rows columns begin following lemma characterize structure lemma example let sequence homogeneous polynomials aij aij matrix aij regular sequence det proof assume regular prove det induction starting let assume let note regular modulo ideal regular modulo lemma regular sequence regular sequence inductive step obtain det remains show det permutability property regular sequences homogeneous polynomials obtain regular sequence independent assume det prove regular sequence induction starting let inductive step sequence regular regulae sequence remains show regular sequence since det follows map defined isomorphism inductive step regular sequence regular sequence desired special structure regular sequence implies following lemma conjecture lemma let regular sequence homogeneous polynomials aij aij homogeneous polynomial mod deg deg combination monomials proof since deg sufficient prove lemma monomial degree prove induction deg lemma true deg since aii let monomial degree matrix aij inductive step may assume xgi monomial lemma det exist scalars mod follows xgi xgi let xgi note combination monomials degree since obtain mod proof lemma obtain following remark let lemma monomial combination monomials example assume case matrix defined lemma since det regular sequence set let since mod mod mod also see mod remark lemma true arbitrary regular sequence example consider sequence note regular sequence regular sequences regular sequences regular elements respectively regular sequence let easy show mod exist zero equation implies contradiction result lemma obtain following lemma lemma set monomials form proof denote set monomials lemma shows generated let assume since follows polynomial lemma mod since follows contradiction suppose almost assume let monomial minimal degree ring contradiction lemma let lemma monomial every monomials degrees proof let assume since follows thus implies ring contradiction lemma follows belong space hand dim first show assume exist assume may assume combination monomials also obtain combination monomials implies hence obtain desired equality remains show let every let awj let hypothesis obtain conjecture ring implies obtain thus remark part lemma true replace homogeneous polynomials combination monomials example let suppose computation shows case homogeneous polynomial part lemma dimension always bounded degree result following proposition proposition let lemma homogeneous polynomial degree proof prove induction let prove induction starting let assume exists combination monomials mod clearly let mon let ring contradiction particular exists variable two cases case ring let basis ring inductive step obtain since follows since follows therefore contradiction case ring mod since unique combination monomials mod obtain pxi clearly xfi since follows 
ring xfi contradiction let basis inductive step obtain xfi implies therefore prove main results section theorem let lemma assume monomial degree dim dim proof may assume prove induction dim dim dim dim dim dim dim dim let set lemma inductive step dim dim dim dim dim dim dim dim dim dim dim dim dim dim dim dim dim dim dim proposition let lemma space spanned biggest lex order monomials dim max proof claim prove claim induction let inductive step obtain equal lemma proved claim let therefore conjecture main result section prove egh conjecture true splits linear factors begin following lemma lemma let ideal generated regular sequence deg assume proof first prove let note ideals respectively generated note also regular sequences part lemma obtain regular sequence part lemma obtain prove let assume since follows since regular sequence follows ring implies conversely ideal generated similarly ideal generated lemma follows theorem assume egh conjecture holds graded ideal containing regular sequence degrees deg hilbert function graded ideal containing xann proof check property lemma let need find graded ideal containing xann let ideal generated renaming linear polynomials may assume without loss generality considering short exact sequences see equal let let note set isomorphic hypothesis ideal containing hilbert function let ideal containing claim component ideal proof claim assume part lemma obtain assumption obtain means since ideals follows let part lemma obtain assumption obtain similarly conclude proving claim conjecture let mon zxin mon define ideal generated since xsn xai follows xann claim monomial degree proof claim exists monomial monomial assume zxin assume let max xjn may assume previous claim obtain since deg follows xvr xrn hence proved claim conclude number monomials degree equal since follows particular corollary graded ideal containing regular sequence deg splits linear factors hilbert function graded ideal containing xann since egh conjecture holds obtain following corollary let graded ideal containing regular sequence deg splits linear factors hilbert function graded ideal containing xann egh conjecture equivalent following conjecture conjecture homogeneous ideal containing regular sequence degrees deg hilbert function ideal containing regular sequence degrees deg splits linear factors example let since det follows regular sequence assume example construct ideal hilbert function using hilbert functions computation shows hilbert sequence respectively denote polynomial ring let note ideals see mon also let ideal generated mon mon clear since follows also thus example let since regular sequence follows regular sequence assume computation shows also construct ideal hilbert function using hilbert functions denote ideals respectively let easy calculation shows ideal let see ideal let conjecture also ideal let ideal generated mon mon mon ideal generated computation shows references abedelfatah rings journal algebra aramova herzog hibi gotzmann theorems exterior algebras combinatorics journal algebra caviglia maclagan cases conjecture mathematical research letters chen special cases conjecture clements generalization combinatorial theorem macaulay journal combinatorial theory cooper growth conditions family ideals containing regular sequences journal pure applied algebra cooper conjecture ideals points eisenbud green harris higher castelnuovo theory herzog hibi monomial ideals volume springer verlag herzog popescu hilbert functions generic forms compositio mathematica katona theorem 
finite sets theory graphs pages kruskal number simplices complex mathematical optimization techniques page macaulay properties enumeration theory modular systems proceedings london mathematical society matsumura commutative ring theory volume cambridge studies advanced mathematics cambridge university press cambridge mermin peeva lexifying ideals mathematical research letters richert study lex plus powers conjecture journal pure applied algebra shakin piecewise lexsegment ideals sbornik mathematics department mathematics university haifa mount carmel haifa israel address abed | 0 |
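The arguments above lean on Macaulay's theorem and on the Eisenbud-Green-Harris (EGH) conjecture; since the formulas are lost in extraction, the standard statements are recorded below in assumed notation as a hedged paraphrase, not the paper's exact wording.

\textbf{Macaulay expansion.} For integers $a, d \ge 1$ there is a unique expansion $a = \binom{k_d}{d} + \binom{k_{d-1}}{d-1} + \cdots + \binom{k_j}{j}$ with $k_d > k_{d-1} > \cdots > k_j \ge j \ge 1$, and one sets $a^{\langle d \rangle} = \binom{k_d + 1}{d+1} + \binom{k_{d-1}+1}{d} + \cdots + \binom{k_j + 1}{j+1}$.

\textbf{Macaulay's theorem.} A sequence $(h_i)_{i \ge 0}$ with $h_0 = 1$ is the Hilbert function of a standard graded algebra $S/I$ (equivalently, it is attained by a lex ideal) if and only if $h_{d+1} \le h_d^{\langle d \rangle}$ for all $d \ge 1$.

\textbf{EGH, in the form used here.} If $I \subseteq S = K[x_1,\dots,x_n]$ is a homogeneous ideal containing a regular sequence $f_1,\dots,f_n$ with $\deg f_i = a_i$, then some homogeneous ideal $J \supseteq (x_1^{a_1},\dots,x_n^{a_n})$ has the same Hilbert function as $I$; the main theorem above establishes this when every $f_i$ splits as a product of linear forms.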
jul classifying virtually special tubular groups daniel woodhouse abstract group tubular acts tree vertex stabilizers edge stabilizers prove tubular group virtually special acts freely locally finite cat cube complex furthermore prove tubular group acts freely finite dimensional cat cube complex virtually acts freely three dimensional cat cube complex introduction tubular group splits graph groups vertex groups edge groups equivalently fundamental group graph spaces denoted vertex space homeomorphic torus edge space homeomorphic graph spaces tubular space paper tubular groups finitely generated therefore compact tubular spaces tubular groups studied various persectives brady bridson provided tubular groups isoperimetric function dense subset cashen determined two tubular groups wise determined whether tubular group acts freely cat cube complex classified tubular groups cocompactly cubulated author determined criterion finite dimensional cubulations button proven groups also tubular groups act freely finite dimensional cube complexes main theorem paper theorem tubular group acts freely locally finite cat cube complex virtually special haglund wise introduced special cube complexes main consequence group special embeds right angled artin group see full outline wise program structure paper wise obtained free actions tubular groups cat cube complexes first finding equitable sets allow construction immersed walls set immersed walls determines wallspace acts freely wallspaces yields dual cube complex first introduced haglund paulin dual cube complex construction classifying virtually special tubular groups first developed sageev author defined criterion called dilation determines immersed wall produces infinite finite dimensional cubulations precisely immersed walls finite dimensional recall relevant definitions background section section establishes technical result using techniques shown immersed walls replaced primitive immersed walls without losing finite dimensionality local finiteness associated dual cube complex reader encouraged either read section alongside skip first reading finite dimensional case establish set section analyse virtually special decompose conditions imply assumpas tree spaces underlying tree maps tion walls primitive show standard cubulation underlying graph criterion notion fortified immersed wall determines locally finite combining results allow give criterion virtually special section consider tubular group acting freely cat cube complex show obtain action immersed walls preserve important properties precisely prove following proposition let tubular group acting freely cat cube complex tubular space finite set immersed walls associated wallspace moreover acts freely finite dimensional finite dimensional finite dimensional locally finite locally finite proposition sufficient allow prove section exploit results obtained section obtain following demonstrating cubical dimension tubular groups finite dimensional cubulations virtually within cohomological dimension theorem tubular group acting freely finite dimensional cat cube complex finite index subgroup acts freely cat cube complex acknowledgements would like thank dani wise mark hagen classifying virtually special tubular groups background tubular groups cubulations let tubular group associated tubular space underlying graph given edge graph let respectively denote initial terminal vertices let denote vertex edge spaces graph spaces let boundary circles denote attaching maps note denote respectively represent 
generators let eve eee denote vertex edge spaces universal universal cover let let denote tree assume vertex cover space structure nonpositively curved geodesic metric space attaching maps define locally geodesic curves equitable sets intersection numbers given pair closed curves torus intersection points elements pair homotopy classes closed curves torus geometric intersection number minimal number intersection points realised pair representatives respective classes number realised pair geodesic representatives classes finite set homotopy classes curves viewing elements compute det given identification elements identified homotopy classes curves makes sense consider geometric intersection number equitable set tubular group collection sets finite set distinct geodesic curves disjoint attaching maps adjacent edge spaces generate finite index subgroup note equitable sets also given finite subset generates finite index subgroup satisfies corresponding equality intersection numbers wise formulates equitable sets equivalence follows exchanging elements geodesic closed curves represent corresponding elements equitable set fortified edge exists equitable set primitive every element represents primitive element immersed walls equitable sets immersed walls constructed circles arcs let domain disjoint union circles since classifying virtually special tubular groups exists bijection intersection points curves intersection points curves let corresponding intersection points arc endpoints attached endpoints mapped interior embedded attaching arc pair corresponding intersection points obtain set connected graphs map called immersed walls graph graph groups structure infinite cyclic vertex groups trivial edge groups immersed walls paper immersed walls constructed equitable sets means free use results obtained two sided embedding separating two halfspaces lift horizontal walls vertical walls images lifts obtained lifts curves given inclusion set horizonal vertical walls also gives action gives wallspace main theorem tubular group acts freely cat cube complex exists equitable set set immersed walls fortified obtained fortified equitable set set immersed walls primitive obtained primitive equitable set horizontal walls point regular intersection let eve lines eve eve point lies vertex space intersection point either eve otherwise point eve eve eee infinite cube cat cube complex sequence face dilation function constructed immersed wall immersed wall said dilated infinite image following thm paper wallspace obtained theorem let tubular space finite set immersed walls following equivalent infinite dimensional dual cube complex contains infinite cube dual cube complex one immersed walls dilated following result also obtained combining thm prop prop last part follows last paragraph proof prop classifying virtually special tubular groups wallspace obtained proposition let tubular space infinite dimensional finite set immersed walls contains set pairwise regularly intersecting walls infinite cardinality moreover infinite correspond hyperplanes infinite cube cube contains canonical primitive immersed walls following result uses techniques section compute dilation function let immersed wall let dilation function finite image let quotient map obtained crushing circle vertex note arcs correspond arcs dilation function factors exists function therefore determine dilated computing function orient arc arcs embedded edge space oriented direction orient arcs accordingly define weighting let edge space let arc mapped 
connecting circles let corresponding elements equitable set edge path oriented arc lemma let tubular space let let set immersed walls obtained equitable set exists set primitive immersed walls obtained equitable set moreover fortified proof decomposes union disjoint circles domain locally geodesic closed paths equitable set arcs suppose primitive let immersed wall containing circle corresponding new equitable set obtained replacing locally geodesic curves disjoint images isotopic remains equitable set since classifying virtually special tubular groups locally geodesic curve new immersed walls obtained replacing reattaching arcs attached intersection points corresponding intersection points let new set immersed walls obtained way note arc corresponds unique arc assume claim new immersed walls also let qij quotient maps obtained crushing circles vertices let vertex corresponding let rij dilation functions let unique maps rij qij let respective weightings arcs assumption finite image arcs correspond arcs map show showing let oriented arc edge qij embeds edge space vertices disjoint endpoints contained correspond circles suppose exactly one endpoint contained terminates vertex corresponding initial vertex corresponds circle domain locally geodesic curve starts vertex corresponding terminal vertex correspond circle domain locally geodesic curve therefore given edge path since number edges exiting vertices number vertices entering procedure produces immersed walls one fewer element equitable set repeating procedure element equitable set produces primitive set immersed walls also clear fortified new immersed walls classifying virtually special tubular groups finite dimensional dual cube complexes wallspace obtained let tubular space let let set immersed walls constructed equitable set vertical immersed wall edge space emphasize section immersed walls assumed even let theorem explicitly stated let finite dimensional immersed walls equivalent let denote vertical wall eee edge refer full background dual cube complex construction choice halfspace finitely many precisely one hyperplane two adjacent joining dual hyperplane corresponding present wherever appears face contained say two disjoint walls vice versa therefore decomproposition map eee poses tree spaces zeve carrier hyperplane corresponding mean union cubes proof vertical wall edge space since vertical walls tree define disjoint identify let define letting map precisely one wall adjacent horizontal wall joining also mapped adjacent vertex joining maps edge joining defined map extends uniquely entire cube complex eve zeee carrier hyperplane corresponding proposition implies decomposes graph spaces vertex spaces edge spaces underlying graph following proposition collects principal consequences finite dimensionality prop classifying virtually special tubular groups proposition let tubular space geodesic attaching maps let wallspace obtained finite set immersed walls finite dimensional horizontal walls dual cube complex partitioned collection subsets partition preserved walls pairwise wall intersecting eve exists gve stabilizing let eve perpendicular eve axis partition horizontal walls satisfying conditions proposition called stable partition wallspace obtained lemma let tubular space finite set immersed walls let stable partition horizontal walls finitely many contain walls intersecting eve wall intersecting eve condition stable proof suppose eve partition exists gve perpendicular deduce also finitely many translates therefore eve contained 
finitely many elements claim wall follows fact finitely many walls intersecting eve immersed walls therefore finite dimensional proposition exists stable partition horizontal walls eve let pee let pve subpartition containing walls intersecting eee lemma pve pee finite subsubpartition walls intersecting partitions incident vertex pee pve let pve adve gve stablizes criterion stable partition hri eve perpendicular eve action gve preserves partition axis pve ordering walls let denote cubulation vertex integer edge joining consecutive integers therefore element construct free action gve rdve let gve let rdve define map permutes walls pve map bijection necessarily adjacent mapped adjacent map extends isomorphism rdve classifying virtually special tubular groups would stabilize walls pve eve since gve acts freely eve would would imply fixed every imply hence gve acts freely rdve eve every wall also define embedding zeve rdve either vertical contained subpartition pve eve pve therefore entirely determined set eve infinite collection disjoint parallel lines eve zeve exists unique walls disjoint face let note map injective sends adjacent adjacent map extends embedding entire cube complex eve rdve lemma embedding proof let gve implies let edge adjacent either define free action gee reindexing let pee ade pve dee dve let let gee case vertex spaces map extends isomorphism rde vertex spaces embedding zeee let zeee dee exe faces define ists unique let free action gve rdve restricts free action gee claim embed rde rdve way let hee zeee carrier hee identify hyperplane corresponding eee hee note hee embeds subspace zeve restricts embedding hee construct embedding recall pee ade eee pve adve dee dve hrj faces zeee therefore unique zee dee dve thus define require assumption every classifying virtually special tubular groups lemma following commutative square provided immersed walls primitive hee rde zeve rdve moreover inclusion equivalent extending geed action trivial action rdve proof let hee construction verify let gee dee exists dee dve intersection dee eve geodesic line parallel eee eve thus gee stabilizes eve immersed walls primitive deduce gee stabilizes dee dve conclude deduce observe gee acts trivially last dve dee coordinates finite since finitely many let max vertex orbits proposition immersed walls primitive acts freely action factor action tree moreover embedding proof gve rdve rde equivariantly extended actions gve gee act trivially additional factors therefore square lemma extended hee rde eve rdve classifying virtually special tubular groups embedding therefore obtain tree spaces proposition locally finite fortified eve proof fortified exists vertex space eee every horizontal wall pve intersects eve adjacent edge space eve intersects eve eee therefore every horizontal wall intersecting line eve intersects eee pee pve let eei enumeration intersects horizontal walls pve peei pve let zeve verify note every wall pve intersects eee therefore every wall walls intersection finally finitely many true differs precisely one wall adjacent since differ precisely one wall infinite locally finite collection distinct adjacent show converse first observe embedding zeve rdve proves zeve always locally finite irrespective whether immersed walls eve let via edge fortified let adjacent eue zee adjacent one zee except zee may always define however let pve edge adjacent immersed walls fortified exists eve infinite set lines parallel eee eve facing set disjoint walls exists finitely many edges gveee contained either 
edge zgee zgee zgee zgee zgee finitely many edges incident conclude zeee finitely many edges incident proposition primitive fortified immersed walls virtually special proof proposition free action aut therefore subgroup isom aut aut projection aut vertex group gve embeds mapping invariant conjugation finitely orbits vertices exists finite index subgroup incident classifying virtually special tubular groups generated primitive element gve let finite index subgroup embeds aut edge group generated element primitive adjacent vertex groups proposition embedding permute factors deduce hyperplanes neither hyperplanes indeed also let finite index subgroup underlying graph girth least let edge fortified conclude dve dee zee proper subcomplex zeve primitive direction ghrdve dve dve acts translation eve thus stabilizes deduce therefore embeds let let vertical hyperplane let contained dual edge incident attaching maps embeddings girth least deduce incident one end single intersected therefore projects let set project factor since permute factors invert hyperplanes subdividing assume disjoint set therefore corresponding subdivision conclude horizontal hyperplanes note requirement proposition immersed walls fortified necessary following example demonstrates example let decompose cyclic hnn extension vertex group stable letter thus tubular group let corresponding tubular space single vertex space edge space equitable set geodesic curve representing geodesic curve representing note attaching map intersects curve equitable set precisely therefore obtain pair embedded immersed horizontal walls connecting respective intersection points arc vertical wall also embedded decompose three sets disjoint walls wallspace walls cover walls cover walls cover walls disjoint since immersed walls embedded furthermore classifying virtually special tubular groups walls different sets pairwise intersect therefore conclude locally finite virtually special revisiting equitable sets although wise proved acting freely cat cube complex implied existence equitable set thus system immersed walls section relationship established resulting dual proposition gives relationship required reduce theorem considering cubulations obtained equitable sets section apply following theorem cubical quasiline cat cube complex theorem let virtually suppose acts properly without inversions cat cube complex stabilizes finite dimensional isometrically embedded combinatorial metric subcomplex cubical quasiline moreover stabg subgroup hyperplane theorem allows prove following lemma let tubular group acting freely cat cube complex let vertex group exists subspace eve metric intersection homeomorphic moreover either empty geodesic line hyperplane proof theorem exists subcomplex yev isoqm metrically embeds combinatorial metric yev cubical quasiline flat torus theorem stabilizes flat yev convex subset cat metric yev stabilizers hyperplanes yev subgroups intersection hyperev either empty geodesic line cat metric inherited plane yev subset cat cube complex let hull denote combinatorial convex hull combinatorial convex hull minimal convex subcomplex containing equivalently hull intersection closed halfspaces containing definition let tubular space let nonpositively curved cube complex map amicable immersion isomorphism classifying virtually special tubular groups embeds vertex space eve map xve euclidean metric hyperplane eve either empty set single geodesic line intersection eve eee emdedded transverse hyperplanes edge space eve eee contained hull eve subspace 
metric induced note euclidean metric lemma let tubular space let nonpositively curved cube complex let isomorphism amicable immersion proof use identify claim proven constructing lemma map tree spaces embed euclidean flat eve eve either empty set single hyperplane intersection eve moreover ensure geodesic line eee inserted transverse hyperplanes edges spaces eee eve adjacent vertex spaces contained hyperplane intersections eve eee contained inside hull lemma let amicable immersion finite dimensional hull eve embeds subcomplex vertex eve hyperplane proof let let hull let denote halfspace containing determined halfspace containing hyperplane hyperplane eve halfspace containing eve therefore fixed intersect eve hull eve let hve intersection let hve denote hyperplanes intersecting xve geodesic line xve let gve isometry stabilizes eve parallel eve infinite family axis eve set disjoint parallel lines eve hyperplanes finite dimensional exists intersect otherwise would infinite set pairwise intersecting hyperplanes would imply cubes arbitrary dimension therefore finitely many hyperplanes intersecting eve exists finite set hyperplanes hve hve gdr gir disjoint set hyperplanes classifying virtually special tubular groups set disjoint geodesic lines eve thus therefore gir given exists unique giyi giyi eve letting properly intersect therefore construct hull map extends eve since adjacent lie opposite sides precisely one hyperplane hull eve therefore extends higher dimensional cubes thus hull lift universal cover amicable immersion let eee edge space adjacent vertex space eve hyperplane let eve parallel eee eve geodesic line parallel eve otherwise intersects eve geodesic line parallel eee eve say intersects eve eee suppose lemma let amicable immersion let edge eee intersects hyperplane intersecting eee moreover arc eee joining eee non parallel proof let geodesic lines therefore intersect single point eee two sided vertex edge spaces transverse intersection therefore contained inside curve also locally two sided xee xee finitely many hyperplanes separate two deduce endpoint compact curve eee points eee thus must also intersect endpoint contained eee lemma let amicable immersion finite dimensional locally finite nonpositively curved cube complex eve adjacent edge space eee hyperplane every vertex space eve parallel eee intersects proof let precisely two vertex orbits one edge orbit let denote set hyperplanes intersecting assume hull let hve denote set hyperplanes intersecting eve vertex precisely one adjacent edges therefore eve hve adjacent edge space adjacent edge spaces lemma must intersect adjacent vertex spaces adjacent edge spaces therefore deduce hyperplane either intersect vertex space intersect every vertex space classifying virtually special tubular groups parallel adjacent edge spaces intersection line contained edge space eve hyperplane hve suppose exists vertex space eve parallel adjacent edge spaces let hve every intersects hyperplane must intersect wall deduce eve see lem furthermore hgev hve hull eve embeds since eve contained inside subcomplex lemma hull eve determined orienting hyperplanes hull eve conclude finitely many hyperplanes intersect towards eve eue vertex space adjacent eve let eee edge space conlet eve another vertex space adjacent eue let eee necting let eve eve gue edge space connecting note eue eee eue eee parallel eue let eue subgeodesic lines eee eee space isometric bounded parallel lines let finitely many hyperplanes intersect eve let isometry stabilizes axis eve eee 
similarly let isometry gve stabilizes axis geodesic eve geodesic eve eee note free group two generators let contained eve finitely many hyperplanes intersecting eve must exist stabilizes walls similarly since eve deduce finitely many hyperplanes translate eve must exist stabilizes walls intersecting eve eve deduce let lies hyperplanes intersect precisely hyperplanes eve eve hull hull eve intersecting let compact cube complex acts freely hull contradiction since number intersecting eve grows polynomially therefore permit free hull lemma special case following general statement corollary let amicable immersion finite dimensional locally finite nonpositively curved cube complex every vertex space eve adjacent edge space eee hyperplane intersects eve eee parallel classifying virtually special tubular groups subgroup proof every edge let amicable immersion therefore lemma hyperplane parallel eee similarly intersecting following proposition strengthening one direction theorem let maps topological spaces fiber product note natural projections proposition let tubular group acting freely cat cube complex tubular space finite set immersed walls following properties associated wallspace acts freely finite dimensional finite dimensional finite dimensional locally finite locally finite proof let let amicable immersion assume every immersed hyperplane intersects therefore hull compact finitely many immersed hyperplanes let immersed hyperplane obtain horizontal immersed walls considering components fiber product component natural map components image contained edge space ignored let component whose image intersects vertex space show minor adjustment obtain horizontal immersed wall considering components obtain set horizontal walls obtained equitable set using map decompose components preimages vertex space edge spaces intersection hyperplane eve either empty geodesic line intersection vertex space set geodesic curves restricted preimage set geodesic curves lemma hyperplane intersects vertex eee intersect eee arc space xve adjacent edge space thus components intersection endpoints intersect arcs endpoints therefore decomposes circles map local geodesics vertex spaces arcs map edges spaces endpoint classifying virtually special tubular groups let set components intersect vertex spaces let svp set curves map circles vertex space elements svp attaching maps edge spaces locally geodesic curves since sides equal number arcs walls map acts freely eee geodesics must hyperplanes intersecting vertex space least two parallelism classes implies contains curves generating least two cyclic subgroups therefore svp generates finite index subgroup svp almost equitable set images curves svp may disjoint suppose maximal set curves identical image let denote subset either respect cat metric let neighbourhood contains images arcs connected homotopy identity outside homotoped disjoint set geodesic curves transverse disjoint curves svp choosing small enough perform homotopy sets overlapping curves svp become disjoint identity map outside overlapping curves restriction immersed wall denote thus immersed walls obtained equitable set refer note immersed lifts regular intersections way walls wallspace obtained immersed walls let covers adding single vertical wall edge space wall immersed wall exists homotopy corresponding immersed homotopy lifts homotopy unique note wall immersed wall contained corresponding corresponds intersection unique hyperplane image therefore wall corresponds unique hyperplane wall let corresponding note 
eve let eve either parallel geodesic lines empty intersections therefore pair regularly intersecting walls correspond pair regularly intersecting correspond pair intersecting hyperplanes classifying virtually special tubular groups disjoint corresponding walls pair also disjoint moreover since contained halfspace determines halfspace therefore halfspace hyperplane corresponding infinite dimensional proposito prove suppose tion would exists infinite set pairwise regularly intersecting walls implies infinite set pairwise regularly intersecting therefore infinite set pairwise intersecting hyperplanes would imply infinite dimensional cat cube complex therefore finite dimensional prove first prove following finite dimensional claim locally finite infinite dimensional proof suppose locally finite lemma contains infinite cube containing canonical let set infinite pairwise crossing walls corresponding corresponding set infinite pairwise crossing infinite cube let let corresponding infinite family pairwise crossing hyperplanes suppose subcomplex let denote cubical neighborhood union cubes intersect locally finite compact also compact lem convex let denote cubical neighborhood point determining canonical let let contained cube compact convex also compact convex therefore intersected finitely many exists intersects since intersect must exist hyperplane intersecting separates note dye corresponding dye let corresponding wall separates conclude separates since respectively contained contradicts fact incident dual hyperplane corresponding finite dimensional apply corollary edge group deduce fortified therefore proposition deduce locally finite classifying virtually special tubular groups prove main theorem paper theorem tubular group acts freely locally finite cat cube complex virtually special proof suppose virtually special embeds subgroup finitely generated right angled artin group therefore acts freely universal cover corresponding salvetti complex necessarily locally finite conversely suppose acts freely locally finite cat cube complex let tubular space proposition exists finite set immersed walls dual associated wallspace finite dimensional locally finite lemma assume immersed walls also primitive therefore proposition virtually special virtual cubical dimension lemma let tubular space suppose exists equitable set produces primitive immersed walls exists finite index subgroup vertex group induced splitting natural map injection summand proof note finite index subgroups summand two factors first factor generated image vertex groups second factor generated stable letters graph groups presentation tree proposition since immersed walls let gve fixes primitive acts freely therefore subgroup aut vertex aut finite quotient aut aut aut aut let finite index subgroup contained kernel note let projection onto first factor embeds aut vertex group survives image therefore embedding finite index subgroup vertex summand let vertex group factor free abelian map factor deduce vertex group survives retract therefore vertex group survives summand first homology classifying virtually special tubular groups theorem let tubular group acting freely finite dimensional cat cube complex finite index subgroup acts freely cat cube complex proof let tubular space proposition exists immersed walls lemma assume also primitive let finite index subgroup given lemma let corresponding covering space let summand first homology generated vertex groups lemma inclusion projection map suppose choose pair elements generate claim 
equitable set construction generates edge group hge adjacent respective inclusions isomorphism maps therefore apu congruent set equalities exist equalities also imply choice arcs equitable sets chosen join circles image elements therefore set embedded immersed walls obtained precisely two immersed walls intersecting vertex space set horizontal immersed walls along vertical wall edge give three dimensional dual cube complex exist vertex groups embed distinct summands let hgu hgv assume since distinct summands disjoint image disjoint image attaching edge space connecting attaching maps representing respectively obtain new graph spaces resulting tubular group induction obtain specified graph spaces given set immersed walls dual cube complex dimension obtain immersed walls deleting arcs map edge space added construct immersed walls obtained still give dual cube complex dimension references brady bridson one gap isoperimetric spectrum geom funct classifying virtually special tubular groups martin bridson haefliger metric spaces curvature volume grundlehren der mathematischen wissenschaften fundamental principles mathematical sciences berlin button tubular free cyclic groups strongest tits alternative caprace michah sageev rank rigidity cat cube complexes geom funct christopher cashen tubular groups groups geom haglund paulin groupes automorphismes espaces courbure epstein birthday schrift volume geom topol pages electronic geom topol coventry haglund daniel wise special cube complexes geom funct michah sageev ends group pairs curved cube complexes proc london math soc daniel wise research announcement structure groups quasiconvex hierarchy electron res announc math daniel wise riches raags artin groups cubical geometry volume cbms regional conference series mathematics published conference board mathematical sciences washington american mathematical society providence daniel wise cubular tubular groups trans amer math wise hruska finiteness properties cubulated groups submitted publication woodhouse classifying finite dimensional cubulations tubular groups submitted publication woodhouse generalized axis theorem cube complexes submitted publication address | 4 |
convolutional classification oct dingding cai chen yanlin qian image classification methods learn subtle details visually similar classes problem becomes significantly challenging details missing due low resolution encouraged recent success convolutional neural network cnn architectures image classification propose novel deep model combines convolutional image convolutional classification single model manner extensive experiments multiple benchmarks demonstrate proposed model consistently performs better conventional convolutional networks classifying object classes images index image classification super resolution convoluational neural networks deep learning problem image classification categorise images according semantic content person plane finegrained image classification divides classes models cars species birds categories flowers breeds dogs categorisation difficult task due small variance visually similar subclasses problem becomes even challenging available images images many details missing compared counterparts since rise convolutional neural network cnn architectures image classification accuracy finegrained image classification dramatically improved many extensions proposed however works assume sufficiently good image quality high resolution typically alexnet low resolution images cnn performance quickly collapses challenge raises problem recover necessary texture details images solution adopt image techniques enrich imagery details particular inspired recent work image deng propose unique deep learning framework combines cnn cnn classification convolutional neural network racnn object categorisation images best knowledge work first learning model object classification main principle simple higher image resolution easier classification research questions computational recover important details required image classification fig owing introduction convolutional superresolution layers proposed deep convolutional model bottom pipelines achieves superior performance low resolution images layers added deep classification architecture end racnn integrates deep residual learning image typical convolutional classification networks alexnet vggnet one hand proposed racnn deeper network architecture network parameters straightforward solution conventional cnn images bicubic interpolation racnn learns refine provide texture details images boost classification performance conduct experiments three benchmarks stanford cars oxford flower dataset results answer aforementioned questions improves classification srbased classification designed supervised learning framework depicted figure illustrating difference racnn conventional cnn elated ork image categorisation recent algorithms discriminating classes animal species plants objects divided two main groups first group methods utilises discriminative visual cues local parts obtained detection segmentation second group methods focuses discovering interclass label dependency via hierarchical structure labels visual attributes significant performance improvement achieved convolutional neural networks cnns requires massive amount high quality training images classification images yet challenging unexplored method proposed peng transforms detailed texture information images via boost accuracy recognizing finegrained objects images however strong assumption requiring images available training limits generalisation ability addtion assumption also occurs wang work chevalier design object classifier respect varying image resolutions adopts ordinary 
convolutional layers misses considering superresolution specific layers convolutional classification networks contrary owing introduction layers racnn method consistently gain notable performance improvement conventional cnn image classification classification datasets convolutional layers yang grouped existing algorithms four groups prediction models methods image statistical methods methods recently convolutional neural networks adopted image achieving performance first attempt using convolutional neural networks image proposed dong method learns deep mapping resolution patches inspired number additional deconvolution layer added based srcnn avoid general input patches accelerating cnn training testing kim adopt deep recursive layer avoid adding weighting layers need pay price increasing network parameters convolutional deep network proposed learn mapping image residue image speed cnn training deep network convolutional layers designed image superresolution namely convolutional layers verified effectiveness improve quality images work incorporate residual cnn layers image convolutional categorisation network classifying objects alexnet googlenet experiments convolutional layers verified improve classification performance contributions contributions work first attempt utilise specific convolutional layers improve convolutional image classification experimentally verify proposed racnn achieves superior performance images make ordinary cnn performance collapse resolution convolutional neural networks given set training images corresponding class goal conventional cnn labels typical model learn mapping function cross entropy loss lce softmax classifier adopted measure performance class estimates ground truth class labels lce log refers index element vectors denotes dimension softmax layer number classes sense cnn solves following minimisation problem gradient descent back propagation min lce categorisation images propose novel convolutional neural network illustrated fig general racnn consists two parts convolutional layers see sec convolutional categorisation layers see sec sec describe training scheme proposed racnn convolutional layers section present convolutional specific layers cnn goal recover texture details images feed following convolutional categorisation layers first investigate conventional cnn superresolution task given training pairs direct images input mapping function observation output target learned minimising mean square loss lms kxx inspired recent residual convolutional network achieve high efficacy design convolutional layers shown left hand side fig similar convolutional layers learn mapping function images residual images objective function proposed convolutional layers following min kxx fig pipeline proposed convolutional neural network racnn recognition images convolutional classification layers alexnet adopted illustrative purpose readily replaced cnns googlenet better performance residual learning yields fact since input output images largely similar meaningful learn residue similarities removed obvious detailed imagery information form residual images easier cnns learn direct cnn models utilise three typical stacked layers filters convolutional layers racnn following empirical basic setting layers also illustrated left hand side fig denote size number filters mth layer respectively output last convolutional layer summed input image construct full image fed remaining convolutional classification layers racnn categorisation layers second part racnn convolutional 
fullyconnected classification layers high quality images layers number cnn frameworks proposed image categorisation paper consider three popular convolutional neural networks alexnet googlenet cnns typically consist number stacks followed several fullyconnected layers fig typical alexnet visualised employed convolutional categorisation layers racnn alexnet baseline cnn image classification imagenet consists convolutional layers layers vggnet made deeper layers alexnet layers advanced alexnet using small convolution filters paper choose layers experiments denoted rest paper googlenet comprises layers much less number parameters alexnet owing smaller amount weights layers googlenet generally generates three outputs various depths input simplicity last output deepest output considered experiments experiments three networks imagenet data data baseline fair comparison identical pretrained cnn models convolutional categorisation layers replacing dimension final layer size object classes network training key difference proposed cnn conventional cnn lies introduction three convolutional layers evidently racnn deeper corresponding cnn due three convolutional layers store knowledge network parameters learning racnn fashion consider two weight initialization strategies convolutional layers racnn standard gaussian weights weights imagenet data fair comparison adopt identical network structure initialisation schemes racnn gaussian initial weights train whole network minimise loss directly training set learning rates weight decays first two layers learning rate weight decay set third convolutional layer learning rates weight decays categorisation layers except last layer uses learning rate weight decay consider alternative initialisation strategy better initial weights convolutional layers end fig image samples removing background stanford cars birds benchmarks three convolutional layers enforcing minimal mean square loss ilsvrc imagenet object detection testing dataset consists images given weights convolutional layers racnn trained minimising loss function categorisation goal direct utilisation output convolutional layers train layers rgb color space channels instead luminance channel ycbcr color space specifically generate images images pixels via firstly images pixels original image size bicubic interpolation sample image patches using sliding window thus obtain thousands pairs image patches consistent setting racnn using gaussian initial weights layers trained image patches setting learning rates weight decays first two layers learning rate weight decay third layer finally jointly learn convolutional classification layers learning manner learning rates weight decays classification layers except last layer learning rate weight decay set iii experiments datasets settings evaluate racnn three datasets stanford cars oxford category flower datasets first one released krause categorisation contains images classes cars class typically level brand model year following standard evaluation protocol split data images training testing another challenging finegrained image dataset aimed subordinate category classification providing comprehensive set benchmarks annotation types domain birds dataset contains images bird species among images training testing oxford category flower dataset consists images commonly appear united kingdom images belong categories category contains images standard evaluation protocol whole dataset divided images training validation testing experiments training validation data merged 
together train networks images datasets first cropped provided bounding boxes remove background cropped images images size pixels pixels bicubic interpolation fit conventional cnn follows settings sample images benchmarks illustrated fig verify motivation mitigate suffering low visual discrimination due compare racnn multiple methods corresponding cnn model classification alexnet googlenet stagedtraining cnn proposed proposed racnn implemented caffe adopt average accuracy datasets higher value denotes better performance experiments used lenovo desktop one intel cpu one nvidia gpu proposed racnn deeper structure competing networks alexnet vggnet googlenet requires longer training times indicated table table training times racnns competing cnns seconds epoch methods cars birds flowers alexnet racnnalexnet vggnet racnnvggnet googlenet racnngooglenet comparative evaluation fig compare results alexnet alexnet classification images evident racnnalexnet consistently achieves best performance benchmarks precisely alexnet achieves evaluation convolutional layers fig comparison two methods classification average accuracies table evaluation effect convolutional layers recover high resolution details fix convolutional layers except last layer extracted features correspond high resolution images denote proposed racnn weights initialized gaussian pretrained weights convolutional layers methods cars birds flowers alexnet googlenet experiment employ layers alexnet googlenet categorisation layers racnn note different previous experiments freeze categorisation layers setting learning rates weight decays besides last layers baseline cnns racnn data setting treats categorisation layers racnn identical classifier evaluating effect adding convolutional layers racnn initial gaussian weights called gracnn respectively comparative results shown table fig consistently outperform baseline cnns experiments experimental setting except different initial weights convolutional layers results reported test set accuracies table fig show superior gracnn share network structure differ network weights initialisation convolutional layers sense better performance credited knowledge refining lowresolution images weights verifies motivation boost image classification via image noteworthy since feature extraction layers frozen networks specific features performance boost owing recovered details important classification layers evaluation varying resolution table iii comparison varying resolution level res level birds dataset res level accuracies collected stanford cars birds datasets respectively knowledge transfer varying resolution images alexnet improve classification accuracy stanford cars birds however alexnet relies strong assumption images available training limits usage tasks note method generic transforms knowledge super resolution across datasets indicates method readily applied image classification tasks proposed racnnalexnet significantly beats direct competitor alexnet stanford cars dataset caltechucsd birds dataset settings training samples performance gap explained novel network structure racnn alexnet evaluate proposed racnn method respect varying resolutions birds dataset images first input image size training models better performance racnnalexnet conventional alexnet achieved image classification shown table iii observe method performs much better lower resolution images relatively high resolution images details increases accuracy pixel images less improvement resolution images reason layers racnn play significant 
role introducing texture details especially missing visual cues object classification lower quality images demonstrates observation motivation alexnet googlenet fig training process alexnet vggnet googlenet birds dataset weights convolutional layers imagenet images racnn applied varying resolution levels improvement classification performance shows generalisation weights varying resolution levels demonstrates generalisation ability racnn weights conclusion propose verify simple yet effective resolutionaware convolutional neural network racnn image classification images results extensive experiments indicate introduction convolutional layers conventional cnns indeed recover fine details images clearly boost performance classification result explained fact layers learn recover high resolution details important classification trained manner together classification layers concept paper generic existing convolutional superresolution classification networks readily combined cope image classification references krause stark deng object representations categorization international conference computer vision workshops wah branson welinder perona belongie caltechucsd dataset nilsback zisserman automated flower classification large number classes indian conference computer vision graphics image processing ieee khosla jayadevaprakash yao novel dataset finegrained image categorization stanford dogs krizhevsky sutskever hinton imagenet classification deep convolutional neural networks advances neural information processing systems zhang donahue girshick darrell category detection european conference computer vision lin roychowdhury maji bilinear cnn models finegrained visual recognition ieee international conference computer vision krause jin yang recognition without part annotations ieee conference computer vision pattern recognition chen zhang learning classify categories privileged misalignment ieee transactions big data akata reed walter lee schiele evaluation output embeddings image classification ieee conference computer vision pattern recognition branson horn belongie perona bird species categorization using pose normalized deep convolutional nets british machine vision conference chevalier thome cord fournier henaff dusch classification varying resolution ieee international conference image processing liu qian chen huttunen fan saarinen incremental convolutional neural network training international conference pattern recognition workshop deep learning pattern recognition zeyde elad protter single image using sparserepresentations international conference curves surfaces yang wright huang image via sparse representation ieee transactions image processing chang yeung xiong neighbor embedding ieee conference computer vision pattern recognition glasner bagon irani single image ieee international conference computer vision dong loy tang image using deep convolutional networks ieee transactions pattern analysis machine intelligence dai wang chen van gool image helpful vision tasks ieee winter conference applications computer vision kim kwon lee lee accurate image using deep convolutional networks ieee conference computer vision pattern recognition simonyan zisserman deep convolutional networks largescale image recognition keys cubic convolution interpolation digital image processing ieee transactions acoustics speech signal processing angelova zhu efficient object detection segmentation recognition ieee conference computer vision pattern recognition stark krause pepik meger little schiele koller 
categorization scene understanding international journal robotics research maji rahtu kannala blaschko vedaldi visual classification aircraft arxiv preprint zhang farrell iandola darrell deformable part descriptors recognition attribute prediction international conference computer vision chai lempitsky zisserman symbiotic segmentation part localization categorization international conference computer vision gavves fernando snoek smeulders tuytelaars local alignments categorization international journal computer vision shotton johnson cipolla semantic texton forests image categorization segmentation ieee conference computer vision pattern recognition hwang grauman sha semantic kernel forests multiple taxonomies advances neural information processing systems mittal blaschko zisserman torr taxonomic multiclass prediction person layout using efficient structured ranking european conference computer vision deng ding jia frome murphy bengio neven adam object classification using label relation graphs european conference computer vision zhang paluri ranzato darrell bourdev panda pose aligned networks deep attribute modeling ieee conference computer vision pattern recognition hospedales xiang gong transductive multiview embedding recognition annotation european conference computer vision peng hoffman stella saenko knowledge transfer image classification ieee international conference image processing wang chang yang liu huang studying low resolution recognition using deep networks proceedings ieee conference computer vision pattern recognition yang yang benchmark european conference computer vision irani peleg improving resolution image registration computer vision graphics image processing graphical models image processing fattal image upsampling via imposed edge statistics acm transactions graphics vol huang mumford statistics natural images models ieee conference computer vision pattern recognition huang singh ahuja single image transformed ieee conference computer vision pattern recognition yang lin cohen fast image based inplace example regression ieee conference computer vision pattern recognition freedman fattal image video upscaling local selfexamples acm transactions graphics dai timofte van gool jointly optimized regressors image computer graphics forum vol schulter leistner bischof fast accurate image upscaling forests ieee conference computer vision pattern recognition kim kwon lee lee convolutional network image ieee conference computer vision pattern recognition dong loy tang accelerating convolutional neural network european conference computer vision szegedy liu jia sermanet reed anguelov erhan vanhoucke rabinovich going deeper convolutions proceedings ieee conference computer vision pattern recognition zhang ren sun deep residual learning image recognition ieee conference computer vision pattern recognition deng dong socher imagenet hierarchical image database ieee conference computer vision pattern recognition russakovsky deng krause satheesh huang karpathy khosla bernstein berg feifei imagenet large scale visual recognition challenge international journal computer vision jia shelhamer donahue karayev long girshick guadarrama darrell caffe convolutional architecture fast feature embedding acm international conference multimedia | 1 |
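The RACNN design described in the row above (three super-resolution convolutional layers that learn a residual detail image, followed by an ordinary classification backbone, with a mean-square loss for the super-resolution part and a cross-entropy loss for categorisation) can be illustrated with a short, hedged sketch. The rendering below is PyTorch-style even though the paper reports a Caffe implementation; the filter sizes (9/5/5 kernels with 64/32 filters), the residual formulation and the two-stage loss usage are assumptions made for illustration, since the exact numerical settings did not survive in this copy of the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRHead(nn.Module):
    """Three convolutional layers that predict a residual detail image (sketch)."""
    def __init__(self, channels=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),
        )

    def forward(self, x_lr):
        # residual learning: the output of the last layer is summed with the input image
        return x_lr + self.body(x_lr)

class RACNN(nn.Module):
    """Super-resolution head followed by an ordinary classification backbone."""
    def __init__(self, classifier: nn.Module, channels=3):
        super().__init__()
        self.sr = SRHead(channels)
        self.classifier = classifier  # e.g. an AlexNet/VGG/GoogLeNet-style model

    def forward(self, x_lr):
        x_sr = self.sr(x_lr)           # refined, detail-enhanced image
        return self.classifier(x_sr), x_sr

# stage 1 (optional): pretrain the SR head alone with a mean-square loss (L_MS)
def sr_pretrain_loss(x_sr, x_hr):
    return F.mse_loss(x_sr, x_hr)

# stage 2: train the whole network end to end with the cross-entropy loss (L_CE)
def classification_loss(logits, labels):
    return F.cross_entropy(logits, labels)
```

A backbone such as a pretrained AlexNet with its final layer resized to the number of target classes could be passed in as the classifier, which would mirror the replacement of the final-layer dimension described in the text.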
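The text also describes how low-resolution inputs and their high-resolution targets are produced (bicubic downsampling followed by bicubic upsampling back to the network input size, plus sliding-window patch pairs for pre-training the super-resolution layers). Below is a minimal sketch assuming PIL and NumPy; the crop size, low-resolution size, patch size and stride are placeholders because the original numbers are missing from the row above.

```python
from PIL import Image
import numpy as np

def make_lr_hr_pair(path, hr_size=227, lr_size=50):
    """Return a blurred low-resolution input and its high-resolution target (sketch)."""
    hr = Image.open(path).convert("RGB").resize((hr_size, hr_size), Image.BICUBIC)
    # degrade: downsample to a small size, then bicubically upsample back, so the
    # LR input has the same spatial size as the HR target but lacks fine detail
    lr = hr.resize((lr_size, lr_size), Image.BICUBIC).resize((hr_size, hr_size), Image.BICUBIC)
    return (np.asarray(lr, dtype=np.float32) / 255.0,
            np.asarray(hr, dtype=np.float32) / 255.0)

def sample_patches(lr, hr, patch=33, stride=14):
    """Sliding-window patch pairs used to pre-train the super-resolution layers."""
    for i in range(0, lr.shape[0] - patch + 1, stride):
        for j in range(0, lr.shape[1] - patch + 1, stride):
            yield lr[i:i + patch, j:j + patch], hr[i:i + patch, j:j + patch]
```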
preprint institute statistics rwth aachen university asymptotics covariance matrices quadratic forms applications trace functional shrinkage nov ansgar rainer von sachs institute statistics rwth aachen university aachen germany steland institut statistique biostatistique sciences actuarielles isba catholique louvain voie roman pays belgium july establish large sample approximations arbitray number bilinear forms sample matrix vector time series using weighting vectors estimation asymptotic covariance structure also discussed results hold true without constraint dimension number forms sample size ratios concrete potential applications widespread cover highdimensional data science problems projections onto sparse principal components general spanning sets frequently considered classification dictionary learning two specific applications results study greater detail asymptotics trace functional shrinkage estimation covariance matrices shrinkage estimation turns asymptotics differs weighting vectors bounded away orthogonaliy nearly orthogonal ones sense inner product converges ams subject classifications primary secondary keywords brownian motion linear process long memory strong approximation quadratic form trace introduction large number procedures studied analyze vector time series dimension depending sample size relies projections projecting observed random vector onto spanning set lower dimensional subspace dimension examples include sparse principal component analysis see order reduce dimensionality data sparse portfolio replication index tracking studied dictionary learning see one aims representing input data sparse linear combination elements dictionary frequently obtained union several bases historical data studying projections natural study associated bilinear form representing dependence structure terms projections covariances uncentered sample matrix throughout paper order conduct inference large sample distributional approximations needed vector time series model given correlated linear processes established strong steland von sachs approximation brownian motion single quadratic form provided weighting vectors uniformly bounded turned result require condition ratio dimension sample size contrary many asymptotic results highdimensional statistics probability present article study general case increasing number quadratic forms arising projecting onto sequence subspaces whose dimension converges noting analysis autocovariances stationary linear time series appears special case approach recent results related work established central limit theorem finite number autocovariances whereas case long memory series studied studied asymptotic theory detecting change mean vector time series growing dimension treat case increasing number bilinear forms consider two related different frameworks first framework uses sequence euclidean spaces rdn equipped usual euclidean norm second framework embeds spaces sequence space equipped shown frameworks increasing number say quadratic forms approximated brownian motions without constraints apart one main results asserts assumed time series models one define new probability space equivalent versions gaussian process taking values rln sup nln almost surely without constraints believe results many applications diverse areas indicated paper study detail two direct applications first application considers trace operator equals trace matrix norm ktr applied covariance matrices show trace sample covariance matrix appropriately centered approximated brownian 
motion new probability space also establishes convergence rate ktr ktr second application elaborated paper shrinkage estimation covariance matrix studied depth sequences random vectors well dependent vector time series see amongst others order regularize sample matrix shrinkage estimator considers convex combination target usually corresponds simple regular model consider identity target multiple identity matrix dimension best knowledge large sample approximations estimators yet studied show uniformly shrinkage weight convex combination bilinear form given shrinkage estimator approximated gaussian process centered shrunken true covariance matrix using shrinkage weight uniformity result also holds widely used estimator optimal shrinkage weight estimated optimal weight convergence rate quite general conditions known turns comparing matrices terms natural pseudodistance induced bilinear forms convergence rate carries optimal weight inference trace estimator also compare shrinkage estimator using estimated optimal weight oracle estimator using unknown optimal weight last study case nearly asymptotically orthogonal vectors consequence bound see property allows place much unit vectors unit sphere turns nearly orthogonal vectors nonparametric part dominates large samples contrary situation vectors bounded away orthogonality time series model paper follows time observe dimensional mean zero vector time series yni yni yni defined common probability space whose coordinates causal linear processes yni cnj independent mean zero error terms possibly identically distributed converges coefficients cnj may depend therefore also allowed depend dimension impose following growth condition assumption coefficients cnj linear processes satisfy sup max well known assumption covers common classes weakly dependent time series arma well wide range long memory processes refer discussion define centered bilinear form rdn yni yni class proper sequences weighting vectors wndn studied throughout paper set sequences rdn uniformly bounded sense sup kwn sup steland von sachs vectors naturally arise various applications sparse principal component analysis see sparse financial portfolio selection studied detailed discussion refer worth mentioning results easily carry weighting vectors uniformly bounded provided one relies standardized versions bilinear form first notice conditions allow control linear process coefficients projected time series yni therefore decay rate original time series assumption sup leads estimate cnj bounded dimension yields estimate latter expression growing dimension hold general assuming however reasonable setting since cnj example cnj latter assumption would rule case observing autoregressive time series order autoregressive parameters bounded away zero hand wmin cnj min cnj wmin min lag yni instead yni fixed considering cnj next observe jensen inequality yni linear time hence imply series coefficients decaying rate original time series clearly sequences weighting vectors uniformly bounded scaling property uniformly bounded one standardizes factor cancels hence sense several theoretical results also applied study projection onto vectors uniformly bounded rest paper organized follows section introduce partial sums partial sum processes associated increasing number bilinear forms establish strong weak approximation theorems bilinear forms application trace functional discussed section large sample approximations shrinkage estimators covariance matrices studied depth section inference trace large 
sample approximations bilinear forms definitions review let define partial sums eyi put dnk two sequences weighting vectors associated processes denoted especially yni yni sequence standard brownian motions constant introduce rescaled version called following result asymptotics single bilinear form uniformly bounded shown theorem suppose yni vector time series according model satisfies assumption let weighting vectors uniformly bounded sense exists equivalent versions dnk denoted dnk standard brownian motion depends defined probability space constant defined implies strong approximation sup well clt asymptotically steland von sachs multivariate version bilinear forms approximates brownian motion shown result allows consider dependence structure arises mapping ynn onto subspace span spanned weighting vectors canonical mapping called projection onto sequel represents orthogonal projection onto orthonormal associated matrix cov cov eigenvectors cov diagonal matrix property lost general spanning vectors given sample ynn random vectors canonical nonparametric statistical estimators cov defined cov consist bilinear forms studied theorem fixed entries cov multivariate extension suffices study dependence structure projection onto longer holds allowed grow sample size increases studying case indeed treatment situation much involved shall see requires different scaling involved mathematical framework strong approximations establish paper take place euclidean space rln growing dimension hilbert space respectively thus beyond case finite number bilinear forms consider dnj pairs uniformly sequences weighting vectors may tend infinity interested joint asymptotics centered scaled versions corresponding statistics given associated sequential processes dnj inference trace additional factor anticipates right scaling obtain large sample approximation interested studying weighted averages averaging takes place forms sample sizes let weight sample size weight quadratic form associated pair sequences weighting vectors define nmk nmk yni ymi nmk notice relations nnk depends weights measurable yni sample size may consider associated process associated preliminaries proceeding recall following facts hilbert space strong approximations hilbert spaces shall denote inner product arbitray hilbert space induced norm operator operator kop results take place hilbert space sequences separable hilbert space equipped inner product induced norm associated operator norm operator simply denoted two random variables defined denote inner product sufficient conditions strong approximation partial sums dependent random elements taking values separable hilbert space require control associated conditional covariance operator denote underlying probability space let random element defined taking values covariance operator associated defined steland von sachs may associate conditional covariance operator given covariance operators symmetric positive linear operators operator norm kcx sup properties discussion see strong invariance principle deals approximation partial sums random elements brownian motion recall random element values called brownian motion increments independent iii increment gaussian mean covariance operator min nonnegative linear operator kei orthonormal system random element brownian motion generated definition general separable hilbert space analogous strong invariance principle strong approximation sequence random elements taking values arbitrary separable hilbert space inner product induced norm asserts redefined 
rich enough probability space exists brownian motion values covariance operator constants dimension finite log log infinite dimensional throughout paper write two arrays real numbers exists constant large sample approximations aim showing strong approximation processes dnj inference trace coordinate processes dnj given dnj nln dnk processes expressed partial sums lemma representation dnk leading nln random elements defined introduce conditional covariance operators associated denote filtration define let also introduce unconditional covariance operator znj random variables znj satisfying znj znj znk quantities introduced asymptotic covariance parameters bilinear forms corresponding pairs following technical crucial result establishes convergence operator expectation provides convergence rate theorem suppose uniformly bounded sup max max kvn kwn constant let steland von sachs defined define denotes operator norm defined position formulate first main result large sample approximations bilinear forms converges infinity terms well results holds true weak assumption weighting vectors uniformly bounded norm theorem let yni vector time series following model sat isfying assumption suppose uniformly bounded sup max max kvn kwn constant processes redefined rich enough probability space exists brownian motion dimension coordinates covariance function given min following assertions hold true euclidean space rln strong approximation dnt krln rln constants depends provided following assertions hold respect sup dnj inference trace iii respect dnj sup respect maximum norm sup max dnj let exist constants equivalent versions standard brownian motion defined new probability space sample size sup remark brownian motions constructed sup holds theorem due assertion theorem may conjecture holds discussion neither proof counterexample following result studies relevant processes space yields approximation probability taking account additional factor log log theorem suppose assumptions theorem hold hilbert space strong approximation dnt log log exists sequence log log max steland von sachs words max equivalently sup result eliminates condition detailed information sequence question arises whether results limited linear processes main arguments deal approximating martingales following result suggests class vector time series main results paper apply larger theorem let projection vectors uniformly bounded sup max max kvn kwn constant let yni vector time series dnk approximated martingales defined rate certain sequences coefficients cnj satisfying assumption sequence independent mean zero random variables sup sup max max ynk ynk results section still hold true proofs proof lemma argue proof theorem given shown partial sum associated single bilinear form attains representation dnk gaussian random variables yni yni yni yni linear processes yni cnj yni cnj inference trace coefficients cnj cnj cnj cnj pairs weighting vectors consider corresponding partial sum process summands vectors however also interpret random elements taking values completes proof asserts respectively hold following conditions scaled partial sums satisfied iii exists covariance operator conditional covariance operators converge operator expectation rate discussion result extensions see shown strong invariance principle also holds true strictly stationary sequences taking values separable hilbert space possess finite moment order strong mixing mixing coefficients satisfying conditions however convenient studying linear processes studied strong 
invariance principles univariate nonlinear time series using physical dependence measure easy verify linear processes extensions time series fixed dimension provided rely conditions since allow study time series growing dimension taking values space relatively straightforward way preparation proof theorem need following lemma dealing uniform convergence unconditional conditional covariances approximating martingales defined brevity fel fel steland von sachs lemma assumption sup sup implies sup sup sup sup sup implies sup sup sup proof direct calculation leads let first estimate sup sup see next show sup sup inference trace recall assume follows schwarz inequality yields using jensen inequality obtain sup sup upper bound depend hence sup sup sup using kvn kvn uniformly follows lastly consider mentary fact since indices satisfy whereas independent hence clearly summands vanish steland von sachs otherwise put estimate hence follows arguments also imply sup sup since first term finite since second one uniformly sup sup turn implies verify one first conditions argues simi larly order estimate ecm observe max sup sup using jensen inequality verifies turn introduce coordinate partial sums denote appropriately scaled versions corresponding martingale approximations given inference trace need study approximation error next result improves upon lemma showing firstly error order terms conditioning past secondly result uniform weighting vectors lemma sup proof consider decomposition fel projection onto subspace spanned therefore hence fel fel fel fel fel fel sup sup due sup fel projection onto subspace spanned thus independent steland von sachs last fatou lim sup lim sup lim uniformly sup lim estimated hence sup fel sup sup virtue completes proof proof theorem sequence conditional covariance operators xnj say convergence operator defined supf operator acting unconditional covariance operator expectation sup sup xnj xnk converges define random elements recall let inference trace conditional covariance operator associated martingale approximations obviously sup sup sup ekc shall estimate terms separately simplify notation let estimate shall show uniformly application inequality sup sup sup sup sup sup sup since independent decomposition leads virtue lemma steland von sachs lemma uniformly consequently sup sup hence using inequality obtain sup sup sup sup sup sup lemma see scaling martingale approximations factor sup max therefore sup ekc sup sup sup sup sup sup sup proof theorem virtue lemma equation representations dnt nln therefore check conditions iii discussed summands seen attaining values euclidean space rln finite inference trace increasing dimension random elements taking values infinite dimensional hilbert space show observe ynk ynk ynk repeating arguments obtain sup sup sup uniformly kvn due assumption imply turn noting bounds hold uniformly obtain sup sup max virtue jensen inequality may conclude establishes introduce partial sums condition shown follows denote coordinates denote corresponding notice given martingale approximations respectively let remainder coordinates preparations clearly martingale property implies lemma asserts sup sup steland von sachs two applications jensen inequality lead sup sup sup sup sup shows condition iii follows theorem consequently may conclude may redefine processes rich enough probability space brownian motion covariance operator covariances exists constants krln therefore sup kdn krln implies assertions show iii recall vector rln bounded krln kdn using sup 
sup kdn remains prove may argue obtain nmk yni ymi yni ymi yni ymi yni ymi yni cnj ymi cmj inference trace therefore obtain representation xxx yni ymi yni ymi linear processes cnj coefficients cnj hence result follows proof remark ability space theorem may assume standard brownian motions virtue orem since dnj sup sup dnj sup verifies remark proof theorem observe conditions iii theorem hold hilbert space well since rdn euclidean vector norm coincides therefore obtain strong approximation log log sequences put log log let given may find hence may conclude steland von sachs max verifies asymptotics trace norm trace plays important role multivariate analysis also arises studying shrinkage estimation providing large sample approximation brownian motion shall briefly review relation several matrix norms trace related matrix norms various matrix norms used measure size covariance matrices shall use trace norm defined eigenvalues matrix kaktr also notice trace norm linear mapping subspace definite matrices satisfies kaktr covariance matrix induces frobenius norm via worth mentioning trace norm also related frobenius norm via fact kaktr way results formulated terms scaled trace norms interpreted terms scaled squared frobenius norms square roots third interesting direct link another family norms namely norms kaks matrix rank defined singular values eigenvalues kakps norm also called nuclear norm since identity ktr kyn trace norm sample covariance matrix norm scaled data matrix sequence matrices growing dimension makes sense attach scalar weight depending dimension given norm simple matrices inference trace identity matrix receive bounded norms mind squared frobenius norm trace natural attach scalar weight trace operator leading scalar weight frobenius norm proposed one may select simple benchmark matrix identity matrix since idn choose therefore define scaled trace operator square matrix aij dimension scaled trace operator induces scaled trace norm kaktr square matrix given covariance matrix averages modulus eigenvalues scaled frobenius matrix norm given kakf trace asymptotics let turn trace asymptotics dimension fixed well known eigenvalues sample covariance matrix thus sum well convergence rate asymptotically normal see case situation involved sample covariance matrix consistent frobenius norm even presence dimension reducing factor model see remark following result provides large sample normal approximation scaled trace arbitrarily growing dimension properly normalized result also norm shows trace norm convergence rate ktr ktr introduce yni yni notice interested studying scaled trace norm process steland von sachs theorem let yni vector time series following model satisfying assumption holds construction theorem sup denotes brownian motion arising theorem choosing pairs denotes jth unit vector satisfies properties iii theorem suppose addition assumptions theorem ynn strictly stationary since weighting vectors used theorem first unit vectors covariance associated asymptotic covariance dni dnj given asymptotic representations cov cov ynk ynk ynk therefore negligible terms may express variance parameter cov lag series ynk ynk estimated ynk associated estimator given sequence lag truncation constants wmh sequence window weights typically defined kernel function bartlett kernel via bandwidth parameter inference trace theorem var cov using canonical estimator btr asymptotic confidence interval nominal coverage probability given lemma assume constant suppose cnj satisfy decay condition sup lim remark 
worth comparing result following result obtained factor model suppose generic random vector ydn satisfies factor model observable factors errors factor loading matrix sample covariance matrix sample convergence rate ekyn maxi maxi bounded see theorem means compared rate fixed dimension frobenius norm inflated factor steland von sachs proofs proof theorem clearly thus well fact aei leads ktr ktr let dnj dnj shall apply theorem therefore redefining processes new probability space together brownian motion covariances described theorem may argue follows since dnj conclude process satisfies dnj dnj theorem proof lemma proof follows easily theorem noting covariances coordinates brownian motion given shrinkage estimation shrinkage approach regularize sample matrix shall review section results obtained settings shrinking towards identity matrix terms convex combination sample matrix optimal weight depends trace true covariance matrix estimated canonically trace sample covariance matrix consequence apply results obtained previous section obtain large sample approximations shrinkage matrix estimators recall approximations deal norm difference partial sums inference trace brownian motion attaining values vector space order compare covariance matrices shall work following pseudometric define sequences matrices dimension indeed fixed mapping symmetric semidefinite implies satisfies triangle inequality hence defines pseudometric space matrices establish three main results regular weighting vectors bounded away orthogonality establish large sample approximation holds uniformly shrinkage weight therefore also using common estimator optimal weight compare shrinkage estimator using estimated optimal weight oracle estimator using unknown optimal weight cases turns convergence rate estimated optimal shrinkage weight carries shrinkage covariance estimator lastly study case orthogonal nearly orthogonal vectors latter case particular interest since one may place unit vectors unit sphere corresponding overcomplete bases studied areas dictionary learning shrinkage covariance matrix estimators results previous chapters show general conditions inference relying inner products series based sample covariance singular however statistical point view matrix even use classical estimator recommended situations high dimensionality important criteria error condition number defined ratio largest smallest eigenvalue deteriorate advisable regularise order improve performance asymptotically finite sample sizes respect criteria obviously particular interest lies based approaches using invertible estimator approach shrinkage estimation multivariate hidden markov models one without needing impose structural assumptions possibility regularise particular avoiding sparsity following approach shrinkage consider shrinkage estimator defined linear convex combination target matrix shrinkage weights convex combination chosen optimal way minimise error see role target similar ridge regression reduce potentially large condition number highdimensional matrix adding highly regular well conditioned matrix popular choice target take multiple identity matrix order respect scale matrices convex combination choice target reduces dispersion eigenvalues around steland von sachs grand mean large eigenvalues pulled towards small eigenvalues lifted particular lifted away zero although bias gain variance reduction parintroduced estimating compared ticular helps considerably reduce error estimating order develop correct asymptotic framework behaviour 
large covariance matrices authors propose use scaled frobenius norm given measure distance two matrices asymptotically growing dimension used also particular define error become expected normalised frobenius loss furthermore scaling appropriate choice factor front identity matrix definition target equation practice needs estimated trace similarly theoretical shrinkage weight need replaced sample thus fully expression shrinkage estimator writes analog follows shrinks sample covariance matrix towards estimated shrinkage target remains optimally choose shrinkage weights analogue purpose balancing good fit good regularisation prominent possibility indeed choose shrinkage weights error mse minimised argminwn leads shrunken matrix closed form solution proposition derived choice leads interesting property showing actual relative gain shrunken estimator compared classical unshrunken sample covariance terms error moreover shown property continues hold even one replaces practice yet unknown optimal constructed replacing population quantities weights estimator inference trace numerator denominator sample analogs whereas denominator slightly less straightforward estimate nuessentially estimated one possibility suggested developed merator based estimation variance note var ndn stationarity ynk ynk ynk ynk var cov ynk ynk optimal weights obtained follows let consistent estimator ynt ynt ynt ynt similar previous section variances estimated consistency general version shown equation similar assumptions stated lemma led estimator ndn also studied depth rate consistency asymptotic framework growing dimensionality achieved following specific shrinkage target also considered let larger faster allowed grow recalling observe measures closeness target true covariance matrix theorem theorem show order apply results previous sections onto fully shrinkage one needs study convergence estimated shrinkage weight estimator normalised become clear proof theorem stated already observe implies thus close dimension may even grow faster steland von sachs asymptotics regular projections interest deriving asymptotics bilinear forms based shrinkage estimator covariance matrix assume uniformly weighting vectors turns due shrinkage target inner product angle vectors appears approximating brownian functional inner product bounded may converge tends latter case requires special treatment studied separately shall call pair projections regular uniformly bounded satisfies constant addition bounded away orthogonality let arbitrary shrinkage weight consider associated shrinkage estimator estimates unobservable shrunken variance matrix notice define nvn shall apply trace asymptotics obtained theorem variance approximating linear functional brownian motion given cov parameters see since typically parameters positive limits natural assume inf theorem let regular pair projections assumptions theorem condition exists new probability space carries equivalent version vector time series brownian motion sup inference trace covariance structure given var cov cov especially deterministic random sequence shrinkage weights large sample approximation corresponding shrinkage estimator notice var hence assumption variance approximating wiener process adressing nonparametric part shrinkage estimator order whereas variance term approximating target order due fact need brownian motion coordinates used approximate estimated target requires scale coordinates theorem following theorem resolves issue approximating shrinkage estimator two brownian motions one 
dimension nonparametric part one dimension target brownian motions constructed separately priori nothing said exact covariance structure turns however covariances converge properly shall see alternative construction terms resulting decomposition order theorem let regular pair projections suppose underlying probability space rich enough carry addition vector time series yni uniform random variable exist univariate brownian motion mean zero cov min steland von sachs mean zero brownian motion dimension covariance function cov min pdn max observe var three terms result shows nonparametric part namely sample covariance matrix well shrinkage target contribute asymptotics sense shrinking respect chosen scaled norms provides large sample approximation mimics finite sample situation comparisons oracle estimators recall oracle estimator estimator depends quantities unknown optimal shrinkage weight course interest study distance estimated optimal weight associated shrinkage estimator oracle using particular question arises rate convergence affects difference fully data adaptive estimator oracle uses estimated next theorem compares shrinkage estimator oracle estimator optimal shrinkage weight shrinks sample covariance matrix towards target using optimal shrinkage weight terms pseudometric thus considers quantity following result shows even rate convergence equal rate convergence estimator inference trace theorem assumptions theorem construction described new probability space next result investigates difference shrinkage estimator oracle type estimator using oracle shrinkage weight assuming knowledge terms pseudodistance theorem assumptions theorem construction described new probability space result remarkable shows optimal sense inherits rate convergence estimator optimal shrinkage weight nearly orthogonal projections let unit vectors rdn may project order determine best approximating direction recall true covariance two projections cov corresponding shrinkage estimator cov clearly covariances vanish chosen eigenvectors classical principal component analysis pca applied shrinkage covariance matrix estimator analyzing data common rely procedures sparse pca see yield sparse principal components analyzing covariances projections interest steland von sachs oln orthogonal system oln spans subspace rdn course orthogonal vectors larger however one relaxes orthogonality condition one place much unit vectors euclidean space rdn way pairwise angles small indeed provides elegant proof following kabatjanskiilevenstein bound theorem cheap version bound tao let unit vectors rdn adn universal constant theorem motivates study case nearly orthogonal weighting vectors defined pair satisfying asymptotics shrinkage estimator follows theorem let unit vectors satisfying nearly orthogonal condition suppose conditions theorem hold pdn observe asymptotically orthogonal weighting vectors term corresponding parametric shrinkage target thus vanishes asymptotically situation nonparametric part dominates large samples proofs proof theorem first notice ensures second term converge probability since regular projection condition pdn ensures gaussian random variable since inf inf hence iff excluded argue similarly proof theorem put dnj dnj since weighting vectors uniformly theorem yields new probability space process equivalent defined denoted inference trace existence brownian motion characterized theorem ktr fact kvn kwn using results ktr ktr shows proof theorem theorem exist new probability space equivalent process yni brownian 
motion sup billingsley lemma section defined lemma exist brownian motions original probability space sup indeed recall infinite product complete separabe metric space complete separable case equipped usual metric see induced skorohod metric making separable complete steland von sachs apply sec lemma nvn conclude existence function holds convergence supnorm follows continuity theorem exist new probability space equivalent vector time series yni brownian motions dimension characterized theorem application billingsley lemma shows existence brownian motions original probability space priori information exact second order structure two brownian motions close associated process dnj corresponding martingale approximation defined allows study convergence covariances cov first observe max see lemma lemma since satisfies pdn dnj see theorem also notice uniformly use decomposition conclude cov last two terms uniformly max sup combining estimates lemma yields max cov inference trace establishes since covariances approximating martingales equal factor due additional scaling dnj approximate bilinear forms theorem proof theorem recall since elements uniformly bounded kvn kwn turn implies put notice using obtain bound observe equal difference replacing using therefore obtain associated bilinear form steland von sachs proof theorem recall using nvn arrive completes proof proof theorem theorem obtain let defined arguing proof first summand theorem assumption second term bounded completes proof appendix notation formulas denote approximating martingales used obtain strong approximations require control following quantities reader convenience reproduce well related formulas results let inference trace fel fel lemma definition suppose uniformly bounded sense equation assumption implies sup fel fel sup sup fel exist uniformly bounded exist acknowledgments part work supported grant first author deutsche forschungsgemeinschaft dfg grant ste gratefully acknowledges rainer von sachs gratefully acknowledges funding contract projet actions recherche belgique iap research network grant belgian government belgian science policy steland von sachs references andrew barron albert cohen wolfgang dahmen ronald devore approximation learning greedy algorithms ann patrick billingsley convergence probability measures wiley series probability statistics probability statistics john wiley sons new york second edition publication bosq linear processes function spaces volume lecture notes statistics new york theory applications joshua brodie ingrid daubechies christine mol domenico giannone ignace loris sparse stable markowitz portfolios proceedings national academy sciences united states america herold dehling walter philipp almost sure invariance principles weakly dependent random variables ann jianqing fan yingying fan jinchi high dimensional covariance matrix estimation using factor model econometrics mark fiecas franke rainer von sachs joseph tadjuidje shrinkage estimation multivariate hidden markov models amer statist appear moritz jirak analysis increasing dimension multivariate kollo heinz neudecker asymptotics eigenvalues eigenvectors sample variance correlation matrices multivariate kollo heinz neudecker corrigendum asymptotics eigenvalues unitlength eigenvectors sample variance correlation matrices multivariate anal multivariate michael kouritzin strong approximation linear variables dependence stochastic process oliver ledoit michael wolf improved estimation covariance matrix stock returns application portfolio selection 
journal empirical finance olivier ledoit michael wolf estimator covariance matrices multivariate weidong liu zhengyan lin strong approximation class stationary processes stochastic process walter philipp note almost sure approximation weakly dependent random variables monatsh alessio sancetta sample covariance shrinkage high dimensional dependent data multivariate steland von sachs approximations matrices time series bernoulli press terence tao cheap version bound almost orthogonal vectors daniela witten robert tibshirani testing significance features lassoed principal components ann appl inference trace daniela witten robert tibshirani trevor hastie penalized decomposition applications sparse principal components canonical correlation analysis biostatistics wei biao strong invariance principles dependent random variables ann wei biao yinxiao huang wei zheng covariances estimation processes adv appl wei biao wanli min linear processes dependent innovations stochastic process zhang strong approximations martingale vectors applications adaptive designs acta math appl sin engl | 10 |
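The shrinkage estimator analysed in the row above is a convex combination of the (uncentered) sample covariance matrix and the scaled-identity target, with the weight chosen to minimise a normalised Frobenius risk. The NumPy sketch below illustrates the construction; the plug-in weight shown is a simple Ledoit–Wolf-style recipe for independent rows and is only an assumption here — the time-series estimator discussed in the text replaces the numerator by a long-run-variance estimate, which is not reproduced.

```python
import numpy as np

def shrinkage_covariance(Y):
    """Y: (n, d) array of mean-zero observations (rows). Returns (estimate, weight)."""
    n, d = Y.shape
    S = Y.T @ Y / n                      # uncentered sample covariance matrix
    mu = np.trace(S) / d                 # scaled trace = average eigenvalue
    target = mu * np.eye(d)
    # squared distance between S and the target in the scaled Frobenius norm
    delta2 = np.sum((S - target) ** 2) / d
    # crude estimate of the estimation error of S (i.i.d.-style plug-in, assumed)
    beta2 = np.mean([np.sum((np.outer(y, y) - S) ** 2) / d for y in Y]) / n
    w = min(1.0, beta2 / delta2) if delta2 > 0 else 1.0
    return w * target + (1.0 - w) * S, w
```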
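The trace result above also suggests a normal-approximation confidence interval for the scaled trace, with the asymptotic variance estimated by a kernel (e.g. Bartlett) long-run-variance estimator applied to the per-observation contributions. The following is a minimal sketch under that reading; the bandwidth rule and the fixed 95% level are assumptions.

```python
import numpy as np

def scaled_trace_ci(Y, z=1.96):
    """95% normal-approximation interval for tr(Sigma)/d from mean-zero rows Y of shape (n, d)."""
    n, d = Y.shape
    t = np.sum(Y ** 2, axis=1) / d            # per-observation contributions to tr(S)/d
    tr_hat = t.mean()
    tc = t - tr_hat
    m = max(1, int(round(n ** (1.0 / 3.0))))  # Bartlett bandwidth (assumed rule of thumb)
    lrv = np.dot(tc, tc) / n                  # long-run variance estimate
    for h in range(1, m + 1):
        w = 1.0 - h / (m + 1.0)               # Bartlett window weight
        lrv += 2.0 * w * np.dot(tc[h:], tc[:-h]) / n
    half = z * np.sqrt(max(lrv, 0.0) / n)
    return tr_hat - half, tr_hat + half
```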
genetic algorithm solving simple mathematical equality problem denny hermawanto indonesian institute sciences lipi indonesia mail abstract paper explains genetic algorithm novice field basic philosophy genetic algorithm flowchart described step step numerical computation genetic algorithm solving simple mathematical equality problem briefly explained basic philosophy genetic algorithm developed goldberg inspired darwin theory evolution states survival organism affected rule strongest species survives darwin also stated survival organism maintained process reproduction crossover mutation darwin concept evolution adapted computational algorithm find solution problem called objective function natural fashion solution generated genetic algorithm called chromosome collection chromosome referred population chromosome composed genes value either numerical binary symbols characters depending problem want solved chromosomes undergo process called fitness function measure suitability solution generated problem chromosomes population mate process called crossover thus producing new chromosomes named offspring genes composition combination parent generation chromosomes also mutation gene number chromosomes undergo crossover mutation controlled crossover rate mutation rate value chromosome population maintain next generation selected based darwinian evolution rule chromosome higher fitness value greater probability selected next generation several generations chromosome value converges certain value best solution problem algorithm genetic algorithm process follows step determine number chromosomes generation mutation rate crossover rate value step generate number population initialization value genes random value step process steps number generations met step evaluation fitness value chromosomes calculating objective function step chromosomes selection step crossover step mutation step solution best chromosomes flowchart algorithm seen figure ith population chromosome chromosome solutions encoding chromosome chromosome evaluation selection next generation roulette wheel crossover mutation end best chromosome decoding best solution figure genetic algorithm flowchart numerical example examples applications use genetic algorithms solve problem combination suppose equality genetic algorithm used find value satisfy equation first formulate objective function problem objective minimizing value function since four variables equation namely compose chromosome follow speed computation restrict values variables integers step initialization example define number chromosomes population generate random value gene chromosomes chromosome chromosome chromosome chromosome chromosome chromosome step evaluation compute objective function value chromosome produced initialization step abs abs abs abs abs abs abs abs abs abs abs abs abs abs abs abs abs abs step selection fittest chromosomes higher probability selected next generation compute fitness probability must compute fitness chromosome avoid divide zero problem value added fitness fitness fitness fitness fitness fitness total probability chromosomes formulated fitness total probabilities see chromosome highest fitness chromosome highest probability selected next generation chromosomes selection process use roulette wheel compute cumulative probability values calculated cumulative probability selection process using done process generate random number range follows random number greater smaller select chromosome chromosome new population next generation newchromosome 
chromosome newchromosome chromosome newchromosome chromosome newchromosome chromosome newchromosome chromosome newchromosome chromosome chromosomes population thus became chromosome chromosome chromosome chromosome chromosome chromosome example use point randomly select position parent chromosome exchanging parent chromosome mate randomly selected number mate chromosomes controlled using parameters crossover process follows begin population random select chromosome parent end end end chromosome selected parent suppose set crossover rate chromosome number selected crossover random generated value chromosome process follows first generate random number number population random number parents chromosome chromosome chromosome selected crossover chromosome chromosome chromosome chromosome chromosome chromosome chromosome selection next process determining position crossover point done generating random numbers length chromosome case generated random numbers get crossover point parents chromosome cut crossover point gens interchanged example generated random number get first crossover second crossover third crossover parent gens cut gen number gen number gen number respectively chromosome chromosome chromosome chromosome chromosome chromosome chromosome chromosome chromosome thus chromosome population experiencing crossover process chromosome chromosome chromosome chromosome chromosome chromosome step mutation number chromosomes mutations population determined parameter mutation process done replacing gen random position new value process follows first must calculate total length gen population case total length gen number population mutation process done generating random integer generated random number smaller variable marked position gen chromosomes suppose define expected population mutated number mutations suppose generation random number yield chromosome mutation chromosome number gen number chromosome gen number value mutated gens mutation point replaced random number suppose generated random number chromosome composition mutation chromosome chromosome chromosome chromosome chromosome chromosome finishing mutation process one iteration one generation genetic algorithm evaluate objective function one generation chromosome abs abs abs chromosome abs abs abs chromosome abs abs abs chromosome abs abs abs chromosome abs abs abs chromosome abs abs abs evaluation new chromosome see objective function decreasing means better chromosome solution compared previous chromosome generation new chromosomes next iteration chromosome chromosome chromosome chromosome chromosome chromosome new chromosomes undergo process previous generation chromosomes evaluation selection crossover mutation end produce new generation chromosome next iteration process repeated predetermined number generations example running generations best chromosome obtained chromosome means use number problem equation see value variable generated genetic algorithm satisfy equality reference mitsuo gen runwei cheng genetic algorithms engineering design john wiley sons | 9 |
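The worked example above translates into a short program almost line by line. The sketch below (Python) follows the steps described in the text: random integer initialization, fitness 1/(1 + f) to avoid division by zero, roulette-wheel selection on cumulative fitness probabilities, one-point crossover between the selected parents, and replacement of randomly chosen genes during mutation. Because the numeric values did not survive in the text, the equation a + 2b + 3c + 4d = 30, the gene range 0-30, the population size 6, the 25% crossover rate and the 10% mutation rate used here are illustrative assumptions rather than quotations from the paper.

```python
import random

# Assumed instance of the equality problem: minimize f = |a + 2b + 3c + 4d - 30|
# with integer genes in [0, 30]; coefficients, target and rates are illustrative.
COEFFS, TARGET, GENE_MAX = (1, 2, 3, 4), 30, 30
POP_SIZE, CROSSOVER_RATE, MUTATION_RATE, GENERATIONS = 6, 0.25, 0.10, 100

def objective(ch):
    return abs(sum(c * g for c, g in zip(COEFFS, ch)) - TARGET)

def fitness(ch):
    return 1.0 / (1.0 + objective(ch))            # the +1 avoids division by zero

def roulette_select(pop):
    fits = [fitness(ch) for ch in pop]
    total = sum(fits)
    cumulative, acc = [], 0.0
    for f in fits:                                # cumulative selection probabilities
        acc += f / total
        cumulative.append(acc)
    new_pop = []
    for _ in pop:                                 # one spin of the wheel per slot
        r = random.random()
        pick = next((i for i, c in enumerate(cumulative) if r <= c), len(pop) - 1)
        new_pop.append(list(pop[pick]))           # copy to avoid aliasing later
    return new_pop

def crossover(pop):
    parents = [i for i in range(len(pop)) if random.random() < CROSSOVER_RATE]
    for a, b in zip(parents, parents[1:] + parents[:1]):
        if a == b:
            continue
        cut = random.randint(1, len(pop[a]) - 1)  # one-point crossover position
        pop[a] = pop[a][:cut] + pop[b][cut:]
    return pop

def mutate(pop):
    genes_per_ch = len(pop[0])
    n_mutations = int(MUTATION_RATE * len(pop) * genes_per_ch)
    for _ in range(n_mutations):                  # replace randomly chosen genes
        pos = random.randrange(len(pop) * genes_per_ch)
        pop[pos // genes_per_ch][pos % genes_per_ch] = random.randint(0, GENE_MAX)
    return pop

population = [[random.randint(0, GENE_MAX) for _ in range(4)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = mutate(crossover(roulette_select(population)))
best = min(population, key=objective)
print("best chromosome:", best, "objective:", objective(best))
```

Running this for a few dozen generations usually drives the objective to zero, which is consistent with the convergence to a satisfying chromosome reported at the end of the example above.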
sep family two generator groups donghi lee makoto sakuma abstract construct groups specific presentation satisfies small cancellation conditions urm single relator upper presentation link group slope hmi continued fraction expansion every integer introduction recall group called hopfian every epimorphism automorphism property finitely generated groups close connection finiteness fact classical work due mal cev shows every finitely generated group finite one hardest open problems hyperbolic groups whether every hyperbolic group residually finite important progress problem given sela asserting every hyperbolic group hopfian osin proved problem equivalent question whether group residually finite hyperbolic relative finite collection residually finite subgroups notion relatively hyperbolic groups important generalization hyperbolic groups geometric group theory originally introduced gromov motivating examples generalization include fundamental groups hyperbolic manifolds finite volume particular every link complement except torus link hyperbolic manifold cusps fundamental group link group hyperbolic relative peripheral subgroups although hyperbolic group known groves finitely generated group hopfian hyperbolic relative free abelian subgroups also proved reinfeldt weidmann every hyperbolic group possibly torsion hopfian addition based result coulon guirardel mathematics subject classification primary first author supported basic science research program national research foundation korea nrf funded ministry education science technology second author supported jsps proved every lacunary hyperbolic group characterized direct limit hyperbolic groups certain radii condition also hopfian small cancellation groups known group finite presentation satisfies small cancellation conditions either hyperbolic see wise also proved every finite cancellation presentation defines residually finite group historically many known examples finitely generated nonhopfian groups specific presentations earliest example found neumann follows every integer soon first group finite presentation discovered higman follows also group simplest presentation produced baumslag solitar follows many groups specific finite presentations obtained generalizing higman group group see instance another notable group obtained ivanov storozhev constructed family finitely generated finitely presented relatively free groups direct limits hyperbolic groups although defining relations group presentations explicitly described terms generators motivated background construct groups using hyperbolic link groups detail construct family groups form satisfying small cancellation conditions uri single relator upper presentation link group link slope every rational numbers may parametrized explicit formula express uri terms parametrize rational numbers express continued fraction expansion note every rational number unique continued fraction expansion unless main result present paper following whose proof contained section theorem let let every integer group presentation satisfies small cancellation conditions symbol represents successive whereas means occur place remark allow components continued fraction expansion meaning two integers immediately added form one component theorem parametrized including every express rational number theorem relatively prime positive integers see section simple computation shows inequality holds every length word uri satisfies inequality every integer looking proof theorem section hard see similar result holds also 
integer greater thus state general form without detailed proof theorem suppose integer let let hmi every integer group presentation satisfies small cancellation conditions present paper organized follows section recall upper presentation link group basic facts established concerning upper presentations also recall key facts obtained applying small cancellation theory upper presentations section devoted proof main result theorem preliminaries upper presentations link groups recall notation conway sphere punctured sphere obtained quotient group generated around points let simple loop obtained projection line slope call slope simple loop link slope sum rational tangle slope rational tangle slope recall identified bound disks respectively theorem link group obtained follows let standard meridian generator pair described section identified free group basis rational number relatively prime positive integers let word obtained follows set greatest integer exceeding odd even represented simple loop obtain following presentation link groups presentation called upper presentation link group basic facts concerning upper presentations throughout paper cyclic word defined set cyclic permutations cyclically reduced word denote cyclic word associated cyclically reduced word also symbol denotes equality two words two cyclic words recall definitions basic facts needed proof theorem section definition let reduced word decompose positive negative subword letters positive negative exponents negative positive subword sequence positive integers called let cyclically reduced word decompose cyclic word positive negative subword negative positive subword taking subindices modulo cyclic sequence positive integers called double parentheses denote sequence considered modulo cyclic permutations definition rational number let word defined beginning section symbol denotes called slope reduced word said alternating appear alternately precise neither appears also cyclically reduced word said cyclically alternating cyclic permutations alternating particular cyclically alternating word note every alternating word determined sequence initial letter exponent note also cyclically alternating word either cyclic words remainder section suppose rational number write continued fraction expansion unless note properties differ according brevity write lemma proposition rational number satisfying following hold suppose suppose term either moreover two consecutive terms cyclic sequence positive integers hmi hmi hmi symbol hmi represents successive definition symbol denotes cyclic sequence lemma called slope lemma proposition corollary rational number let rational number defined lemma proposition rational number cyclic sequence decomposition satisfies following symmetric sequence obtained reversing order equal empty occurs twice cyclic sequence subsequence begins ends subsequence begins ends lemma proof proposition rational number let rational number defined lemma also let decompositions described lemma following hold hmi hmi hmi hmi hmi hmi hmi following lemma useful proof lemma lemma two distinct rational numbers assume positive integer integers greater every iii let rational numbers defined lemma also let decompositions described lemma suppose contains subsequence contains subsequence lemma throughout paper mean subsequence subsequence without leap namely sequence called subsequence cyclic sequence sequence representing cyclic sequence proof first suppose contains subsequence lemma contains hmi hmi subsequence clearly contains 
subsequence done next suppose contains subsequence lemma contains hmi hmi hmi hmi subsequence contains subsequence reminder proof show contains subsequence end note since lemma also since consists lemma hence either suppose first assumption thus possibility thus suppose next assumption note assumption iii thus see using lemma assumption every contains hmi subsequence implies contains term since possibility thus completing proof lemma small cancellation theory applied upper presentations subset free group called symmetrized elements cyclically reduced cyclic permutations also belong definition suppose symmetrized subset nonempty word called piece respect exist distinct small cancellation conditions integers defined follows see condition product pieces condition successive elements inverse pair mod least one products freely reduced without cancellation following proposition enables apply small cancellation theory upper presentation proposition theorem let rational number let symmetrized subset generated single relator group presentation satisfies proposition follows following characterization pieces turn proved using lemma lemma corollary let proposition subword cyclic word piece respect contains neither subsequence proof theorem section brevity notation sometimes write letter word quotient group free group two elements symbol means equality group using lemma let also let alternating word let homomorphism defined lemma foregoing notation let composition canonical surjection onto proof since suffices show contained image let alternating word since alternating words see also alternating word since letting cyclically alternating words see moreover since cyclic words lemma implies hence thus contained image required point set following notation used end proofs lemmas notation suppose alternating word sequence positive integers satisfying symbol denotes sequence suppose cyclically alternating word cyclic sequence positive integers satisfying symbol denotes cyclic sequence particular lemma suppose alternating word sequence positive integers satisfying defined symbol denotes sequence suppose cyclically alternating word cyclic sequence positive integers satisfying defined symbol denotes cyclic sequence particular lemma lemma foregoing notation proof recall clearly cyclic word six positive negative subwords length cutting middle subwords may write cyclic word product put every namely follows claim alternating word proof claim recall alternating words hard see letting alternating words clearly since finally required claim alternating word proof claim proof claim letting alternating words clearly since finally required claims follows moreover see alternating words implies following notation also furthermore since corresponding rational number see rational number rational number since consists furthermore since consists finally equals statement theorem completes proof lemma lemma foregoing notation uri every proof fix lemma consists without moreover since lemmas consists implies number occurrences two one two claim cutting cyclic word uri middle positive negative subwords length may write uri product one following proof claim note every alternating word consider graph figure vertex set equal edge endowed one two orientations observe initial terminal vertices respectively oriented edge graph word alternating word namely terminal subword corresponding last component initial subword corresponding first component amalgamated maximal positive negative alternating subword length moreover weight resp according 
whether vertex resp valence thus vnp vnj closed edge path graph compatible specified edge orientations compatible closed edge path brief namely vnj initial terminal vertices oriented edge graph indices considered modulo cyclically reduced word vnp cyclically alternating word cssequence tnp since weight tnj according whether vertex vnj valence see compatible closed edge path tnp corresponding cyclically alternating word consists isolated moreover cyclic sequence construct compatible closed edge path corresponding cyclically alternating word equal given cyclic sequence particular find compatible closed edge path corresponding cyclically alternating word equal uri implies uri figure proof claim proof lemma hence cyclic words lemma completes proof claim putting obviously every uri recall claims proof lemma alternating word alternating word follows uri moreover alternating words observe graph figure initial terminal alternating word vertices respectively oriented edge consists moreover components isolated observation yields also yields otherwise define number positive negative proper subwords length proper subword mean subword lies interior see since product cut middle positive negative subwords length also see since corresponding rational number hence uri rational number rational number consists since consists furthermore since finally equals statement theorem completes proof lemma since lemmas imply descends epimorphism show isomorphism let proof lemma letting alternating word lemma proof clearly since proof lemma cyclically alternating word babab equals implies namely required lemma foregoing notation let symmetrized subset generated set relators uri upper presentation satisfies proof since every element cyclically alternating clearly satisfies show satisfies begin setting notation recall lemma every rational number decomposition depending clarity write decomposition hand rational number symbol denotes rational number continued fraction expansion claim two integers cyclic word urj contain subword corresponding proof claim suppose contrary cyclic word urj contains subword corresponding first show assumption implies contains subsequence urj contains subword corresponding clearly contains subsequence assume urj contains subword corresponding contains subsequence since continued fraction expansions begin see begins ends lemma also consists lemma hence must therefore contains subsequence thus proved contains subsequence note lengths continued fraction expansions respectively hence apply lemma successively see contains subsequence every min since two cases case recall equal according whether otherwise since observe continued fraction expansion form consists cyclic sequence since contain must contain subsequence hence subsequence since occur lemma implies contradiction lemma since occur case case observe otherwise continued fraction expansion form contain term lemma since consists impossible claim see assertion lemma holds even symmetrized subset lemma replaced enlarged set current setting namely symmetrized subset generated set relators uri group presentation precise following hold claim subword cyclic word piece respect symmetrized subset lemma contains neither subsequence using claim see proof corollary cyclic word product less pieces respect hence satisfies lemma foregoing notation proof suppose contrary reduced van kampen diagram see since lemma contains subword product pieces respect symmetrized subset lemma see section implies must contain term contradiction fact consists lemma together 
lemma shows epimorphism isomorphism consequently proof theorem completed references baumslag solitar groups bull amer math soc coulon guirardel automorphisms endomorphisms lacunary hyperbolic groups bowditch relatively hyperbolic groups int algebra comput farb relatively hyperbolic groups geom funct anal gromov hyperbolic groups essays group theory gersten msri publ springer groves limit groups relatively hyperbolic groups diagrams geom topol higman finitely related group isomorphic proper factor group london math soc ivanov storozhev relatively free groups geom dedicata lee sakuma epimorphisms link groups homotopically trivial simple loops spheres proc london math soc lee sakuma homotopically equivalent simple loops spheres link complements geom dedicata lyndon schupp combinatorial group theory berlin mal cev faithful representation infinite groups matrices mat neumann group isomorphic proper factor group london math soc osin peripheral fillings relatively hyperbolic groups invent math osin relatively hyperbolic groups intrinsic geometry algebraic properties algorithmic problems memoirs amer math soc reinfeldt limit groups diagrams hyperbolic groups phd thesis university reinfeldt weidmann diagrams hyperbolic groups preprint updated sapir wise ascending hnn extensions residually finite groups nonhopfian finite quotients pure appl algebra sela endomorphisms hyperbolic groups hopf property topology strebel appendix small cancellation groups sur les groupes hyperboliques mikhael gromov papers swiss seminar hyperbolic groups held bern ghys harpe editors progr vol boston boston wise automatic group algebra wise research announcement structure groups quasiconvex hierarchy electron res announc math sci department mathematics pusan national university pusan korea address donghi department mathematics graduate school science hiroshima university japan address sakuma | 4 |
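For orientation, the bracket notation appearing in the theorem statements above is the continued fraction expansion of a rational slope. The display below records the standard convention together with the uniqueness statement alluded to in the introduction; the particular normalization shown (positive entries, last entry at least 2) is an assumption about the convention, not a quotation from the paper.

```latex
% Continued fraction notation for a slope r = q/p with 0 < q < p and gcd(q, p) = 1:
\[
  r \;=\; [a_1, a_2, \dots, a_k]
  \;:=\;
  \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{\ddots + \cfrac{1}{a_k}}}},
  \qquad a_i \in \mathbb{Z}_{>0}.
\]
% Example: [1,2,3] = 1/(1 + 1/(2 + 1/3)) = 7/10.
% Each such rational admits exactly two expansions of this form,
% [a_1,\dots,a_k] with a_k \ge 2 and [a_1,\dots,a_{k-1},\,a_k - 1,\,1];
% requiring a_k \ge 2 therefore makes the expansion unique, which is the
% uniqueness used when the slopes in the theorems are specified by their
% continued fraction expansions.
```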
transponder configuration elastic optical networks may mohammad hadi member ieee mohammad reza pakravan member ieee propose procedure transponder configuration elastic optical networks quality service physical constraints guaranteed joint optimization transmit optical power temporal spatial spectral variables addressed use geometric convexification techniques provide convex representations quality service transponder power consumption transponder configuration problem simulation results show convex formulation considerably faster nonlinear counterpart ability optimize transmit optical power reduces total transponder power consumption also analyze effect mode coupling number available modes power consumption different network elements optimization green communication elastic optical networks fibers mode coupling ntroduction temporally spectrally spatially elastic optical network eon widely acknowledged next generation high capacity transport system optical society focused architecture network resource allocation techniques eons provide network configuration adaptive resource allocation according communication demands physical conditions higher energy efficiency orthogonal frequency division multiplex ofdm signaling reported nominates ofdm main technology resource provisioning resources time spectrum hand enabling technologies fibers fmfs fibers mcfs used increase network capacity efficiency resource allocation spatial dimension although many variants algorithms proposed resource allocation eons joint assignment temporal spectral spatial resources eons needs much research study among available works eons focused fundamental requirement future optical networks moreover available approaches consider transmit optical power optimization variable results inefficient network provisioning flexible resource allocation problem usually decomposed several lower complexity following approach decompose resource allocation problem routing ordering ros transponder configuration subproblem tcs mainly focus tcs complex consider fmf simple amplifier structure easier fusion process lower nonlinear effects lower manufacturing cost compared multiplexed sdm optical fibers tcs optimally configure transponder parameters modulation level number coding rate transmit optical power number active modes central frequency total transponders power consumption minimized quality service qos physical constraints met unlike conventional approach provide convex expressions transponder power consumption optical signal noise ratio osnr indicator qos use results formulate tcs convex optimization problem efficiently solved using fast convex optimization algorithms consider transmit optical power optimization variable show important impact total transponder power consumption simulation results show convex formulation solved almost times faster nonlinear program minlp counterpart optimizing transmit optical power also improves total transponder power consumption factor european optical network aggregate traffic tbps analyze effect mode coupling power consumption different network elements simulation results show total network power consumption reduced using stronglycoupled fmfs rather ones numerical outcomes also demonstrate increasing number available modes fmfs provides fft dsp power consumption overall transponder power consumption descending function number available modes ystem odel consider coherent optical communication network characterized topology graph sets optical nodes directional optical fmf links respectively optical 
fmfs modes gridless bandwidth set connection requests shows set requests sharing fmf routes request assigned contiguous bandwidth around carrier frequency modulates modes available modes assigned contiguous bandwidth includes ofdm space feasible mimo processing remaining unused modes request shared among others assume assigned bandwidths continuous routes remove high cost conversion request passes fiber spans along path shared spans request fmf span fixed length lspn mode demux mimo adc fft dec adc fft dec adc fft dec mode mux optical mixer optical mixer dac ifft enc dac ifft enc dac ifft enc fig block diagram pair transmit receive transponders available modes optical amplifier compensate attenuation modulation levels coding rates pair requires minimum osnr get ber value transponder given modulation level coding rate injects optical power active mode polarization chromatic dispersion mode coupling signal broadenings respectively proportional coefficients flspn lspn chromatic dispersion factor lsec product rms uncoupled group delay spread section length transponders add sufficient cyclic prefix ofmd symbol resolve signal broadening induced mode coupling chromatic dispersion transponders maximum information bit rate also guard band two adjacent requests link considering architecture fig power consumption pair transmit receive transponders calculated follows ptrb pdsp ptrb transmit receive transponder bias term pedc scaling coefficient encoder decoder power consumptions denotes power consumption two point fft operation pdsp power consumption scaling coefficient receiver dsp mimo operations green eon need resource allocation algorithm determine values system model variables transponders consume minimum power physical constrains satisfied desired levels osnr guaranteed general problem modeled nphard minlp optimization problem simplify problem provide solution resource allocation problem usually decomposed two ros routing ordering requests link defined tcs transponders configured usually search near optimal solution involves iterations two save iteration time great interest hold running time minimum value work mainly focus tcs formulate convex problem benefit fast convex optimization algorithms complete study ros one refer iii ransponder onfiguartion roblem minlp formulation tcs follows min variable vectors transponder configuration parameters modulation level number subcarriers coding rate transmit optical power number active modes central frequency mba shows set integer numbers goal minimize total transponder power consumption obtained using constraint qos constraint forces osnr greater required minimum threshold nonlinear function value related constraint nonoverlappingguard constraint prevents two requests sharing frequency spectrum function shows request occupies assigned spectrum bandwidth link values determined solving ros constraint holds assigned central frequencies within acceptable range fiber spectrum last constraint guarantees transponder convey input traffic rate wasted cyclic prefix times considered generally problem complex minlp easily solved reasonable time therefore use geometric convexification techniques convert minlp convex optimization problem use relaxation method solve convex problem first provide generalized posynomial expression optimization define variable change convexify problem posynomial expression osnr request eons proposed simply consider active mode independent source nonlinearity incoherently add interferences therefore extended version posynomial osnr 
expression mqq nsp spontaneous emission factor light frequency planck constant attenuation coefficient dispersion factor nonlinear constant furthermore distance carrier frequencies equals use posynomial curve fitting osnr threshold values following approach arrive new representation optimization problem min ignoring constraints penalty term goal function formulation equivalent geometric program previous minlp expressions mentioned posynomial curve fitting used qos constraint constraints penalty term added guarantee implicit equality constraint also needed convert generalized posynomial qos constraint valid geometric expression explained consider following variable change applying variable change goal function difficult part variable change ptrb emq pdsp clearly emq convex variable domain use expression provide convex approximation remaining term approximation relative error less practical values consequently function nonnegative weighted sum convex functions also convex statement without approximation applied show convexity constraints variable change constraints need apply extra log sides inequality solve problem relaxed continuous version proposed convex formulation iteratively optimized loop epoch continuous convex optimization solved obtained values relaxed integer variables rounded given precision fix acceptable rounded variables solve relaxed continuous convex problem loop continues untill integer variables valid values number iterations equal practice usually less number integer variables furthermore simpler problem solved number iteration increases integer variables fixed loop umerical esults section use simulation results demonstrate performance convex formulation tcs european optical network considered topology traffic matrix given simulation constant parameters lspn thz nsp mhz ghz thz ptrb pedc pdsp use matlab yalmip cvx software packages programming modeling optimization total power consumption different network elements terms aggregate traffic without adaptive transmit optical power assignment reported fig used proposed approach fixed assignment transmit optical power clearly elements total power consumption approximately linear function aggregate traffic slope lines lower transmit optical powers adaptively assigned example adaptive transmit optical power assignment improves total transponder power consumption factor aggregate traffic tbps fig shows total power consumption different network elements versus number available modes fmfs power consumption values normalized corresponding values scenario single mode fibers increases amount transponder power consumption decreases considerable gain moreover tradeoff dsp fft power consumption overall transponder power consumption decreasing function number available modes fig shows power consumption different network elements terms aggregate traffic fmfs obviously total transponder power consumption considerably reduced fmfs group delay spread proportional square root path lengths comparison fmfs group delay spread proportional path lengths results published example improvement aggregate traffic tbps numerical outcomes also show convex formulation times faster nonlinear counterpart compatible results reported onclusion resource allocation quality service provisioning fundamental problem green fmfbased elastic optical networks paper decompose resource allocation problem two routing traffic ordering transponder configuration mainly focus transponder configuration provide convex formulation joint optimization total total total total 
total total total total temporal spectral spatial resources along optical transmit power considered simulation results show formulation considerably faster nonlinear counterpart ability optimize transmit optical power improve total transponder power consumption demonstrate tradeoff dsp fft power consumptions number modes fmfs increases overall transponder power consumption descending function number available modes also calculate power consumption different network elements show fmfs reduce power consumption elements power consumption fixed transmit power power consumption fixed transmit power dsp power consumption fixed transmit power transponder power consumption fixed transmit power power consumption adaptive transmit power power consumption adaptive transmit power dsp power consumption adaptive transmit power transponder power consumption adaptive transmit power eferences aggregate tbps fig total power consumption different network elements terms aggregate traffic without adaptive transmit optical power assignment normalized normalized normalized normalized total total total total transponder power consumption power consumption power consumption dsp power consumption number available modes fmfs fig normalized total power consumption different network elements terms number available modes fmfs total total total total total total total total power consumption fmf power consumption fmf dsp power consumption fmf transponder power consumption fmf power consumption fmf power consumption fmf dsp power consumption fmf transponder power consumption fmf aggregate tbps fig total power consumption different network elements terms aggregate traffic fmfs proietti elastic optical networking temporal spectral spatial domains ieee communications magazine vol khodakarami flexible optical networks energy efficiency perspective journal lightwave technology vol saridis survey evaluation space division multiplexing technologies optical networks ieee communications surveys tutorials vol chatterjee sarma oki routing spectrum allocation elastic optical networks tutorial ieee communications surveys tutorials vol muhammad resource allocation multiplexing optical white box versus optical black box networking journal lightwave technology vol winzer optical transport capacity scaling spatial multiplexing ieee photonics technology letters vol yan joint assignment power routing spectrum static networks journal lightwave technology hadi pakravan resource allocation elastic optical networks using convex optimization arxiv preprint khodakarami pillai shieh quality service provisioning energy minimized scheduling software defined flexible optical networks journal optical communications networking vol yan resource allocation optical networks nonlinear channel model journal optical communications networking vol hadi pakravan resource allocation elastic optical networks using geometric optimization arxiv preprint kahn mode coupling impact spatially multiplexed systems optical fiber telecommunications vol hadi pakravan bvwxc placement elastic optical networks ieee photonics journal askarov kahn adaptive equalization multiplexing systems journal lightwave technology vol boyd tutorial geometric programming optimization engineering vol gao analytical expressions nonlinear transmission performance coherent optical ofdm systems frequency guard band journal lightwave technology vol | 7 |
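A generic sketch may make the solution procedure of Section III concrete: the relaxed continuous convex problem (obtained after the logarithmic change of variables) is solved, relaxed integer variables whose values are already close to integral are rounded and fixed, and the reduced problem is solved again until every integer variable holds a valid integer value. In the sketch below the `solve_relaxation` callback, the tolerance and the fallback rule that forces progress are placeholders and assumptions, not details taken from the paper; any convex or geometric-programming solver can play the role of the callback.

```python
def relax_and_round(solve_relaxation, int_var_names, precision=1e-2, max_epochs=100):
    """Iterative relax-and-round heuristic: repeatedly solve the continuous convex
    relaxation (with some integer variables already fixed), then round and fix the
    remaining integer variables within `precision` of an integer, until all integer
    variables are integral.

    `solve_relaxation(fixed)` is a hypothetical callback that solves the relaxed
    convex problem with the variables in `fixed` pinned to the given integers and
    returns a dict mapping every variable name to its optimal value.
    """
    fixed = {}
    for _ in range(max_epochs):
        values = solve_relaxation(fixed)
        free = [v for v in int_var_names if v not in fixed]
        if not free:
            return values                        # all integer variables are fixed
        near = {v: round(values[v]) for v in free
                if abs(values[v] - round(values[v])) <= precision}
        if not near:                             # force progress: fix the closest one
            v = min(free, key=lambda v: abs(values[v] - round(values[v])))
            near = {v: round(values[v])}
        fixed.update(near)
    return solve_relaxation(fixed)
```

Because at least one integer variable is fixed per epoch, the number of epochs is bounded by the number of integer variables, and each successive relaxation is smaller and cheaper to solve, in line with the remark in the text that in practice far fewer epochs are needed.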
sep annals statistics vol doi institute mathematical statistics moderate deviations studentized applications jinyuan southwestern university finance economics university melbourne chinese university hong princeton widely used broad range applications including fields biostatistics econometrics paper establish sharp moderate deviation theorems studentized general framework including studentized test statistic prototypical examples particular refined moderate deviation theorem accuracy established results extend applicability existing statistical methodologies onesample general nonlinear statistics applications tribute peter brilliant prolific researcher made enormously influential contributions mathematical statistics probability theory peter extraordinary knowledge analytic techniques often applied ingenious simplicity tackle complex statistical problems work service profound impact statistics statistical community peter generous mentor friend warm heart keen help young generation jinyuan chang zhou extremely grateful opportunity learn work peter last two years university melbourne even final year afforded time guide always treasure time spent shao grateful helps supports peter provided various stages career peter dearly missed forever remembered mentor friend received june supported part fundamental research funds central universities grant nsfc grant center statistical research swufe australian research council supported hong kong research grants council grf supported nih grant grant australian research council ams subject classifications primary secondary key words phrases bootstrap false discovery rate test multiple hypothesis testing moderate deviation studentized statistics twosample electronic reprint original article published institute mathematical statistics annals statistics vol reprint differs original pagination typographic detail chang shao zhou multiple testing problems false discovery rate control regularized bootstrap method also discussed introduction one commonly used nonlinear nonparametric statistics asymptotic theory well studied since seminal paper hoeffding extend scope parametric estimation complex nonparametric problems provide general theoretical framework statistical inference refer koroljuk borovskich systematic presentation theory kowalski recently discovered methods contemporary applications applications also found high dimensional statistical inference estimation including simultaneous testing many different hypotheses feature selection ranking estimation high dimensional graphical models sparse high dimensional signal detection context high dimensional hypothesis testing example several new methods based proposed studied chen qin chen zhang zhong zhong chen moreover zhong zhu employed construct independence feature screening procedures analyzing ultrahigh dimensional data due heteroscedasticity measurements across disparate subjects may differ significantly scale feature standardize scale unknown nuisance parameters always involved natural approach use studentized statistics noteworthy advantage studentization compared standardized statistics studentized ratios take heteroscedasticity account robust data theoretical numerical studies delaigle hall jin chang tang evidence importance using studentized statistics high dimensional data analysis noted delaigle hall jin careful study moderate deviations studentized ratios indispensable understanding common statistical procedures used analyzing high dimensional data known theory moderate deviations studentized 
statistics quantifies accuracy estimated crucial study multiple tests controlling false discovery rate fan hall yao liu shao particular moderate deviation results used investigate robustness accuracy properties critical values multiple testing procedures however thus far applications confined fan hall yao wang hall delaigle hall jin cao kosorok conjectured fan hall yao analogues theoretical properties studentized statistical methodologies remain valid resampling methods based studentized statistics motivated applications attempting develop unified theory moderate deviations general studentized nonlinear statistics particular asymptotic properties standardized extensively studied literature whereas significant developments achieved past decade studentized refer wang jing zhao references therein bounds edgeworth expansions results moderate deviations found vandemaele veraverbeke lai shao wang shao zhou results shao zhou paved way applications statistical methodologies using studentized high dimensional data analysis also commonly used compare different treatment effects two groups experimental group control group scientifically controlled experiments however due structural complexities theoretical properties statistics well studied paper establish moderate deviation theorem general framework studentized especially studentized test particular refined moderate deviation theorem accuracy established tstatistic paper organized follows section present main results moderate deviations studentized statistics well refined result section investigate statistical applications theoretical results problem simultaneously testing many different hypotheses based particularly studentized tests section shows numerical studies discussion given section proofs relegated supplementary material chang shao zhou moderate deviations studentized use following notation throughout paper two sequences real numbers write exist two positive constants write constant holds sufficiently large write respectively moreover two real numbers write ease presentation max min chang shao zhou review studentized start brief review moderate deviation studentized integer let independent identically distributed random variables taking values metric space let symmetric borel measurable function hoeffding kernel degree defined xis unbiased estimate particular focus case euclidean space integer write xir let var var assume standardized nondegenerate given usually unknown interested following studentized denotes jackknife estimator given shao zhou established general moderate deviation theorem studentized nonlinear statistics particular studentized studentized theorem assume suppose constants exist constants depending holds uniformly min max particular holds uniformly condition satisfied large class examples statistic sample variance gini mean difference wilcoxon statistic kendall kernel function studentized let two independent random samples drawn probability distribution drawn another probability distribution two positive integers let kernel function order real symmetric first variates last variates known nonsymmetric kernel always replaced symmetrized version averaging across possible rearrangements indices set let chang shao zhou lighten notation write yjk yjk define also let var var var standardized form uniform bound order obtained helmers janssen borovskich using concentration inequality approach chen shao proved refined uniform bound also established optimal nonuniform bound large deviation asymptotics refer nikitin ponikarov 
references therein interested following studentized note jackknife estimators respectively studentized let moderate deviations moreover put var given following result gives moderate deviation mild assumptions proof found supplementary material chang shao zhou assume constants theorem given assume finite exist constants independent holds uniformly min max particular holds uniformly theorem exhibits dependence range uniform convergence relative error central limit theorem optimal moment conditions particular region becomes chang shao zhou see theorem jing shao wang similar results sums higher order moment conditions clear technique adapted provide better approximation lying order tail probability also worth noticing many commonly used kernels nonparametric statistics turn linear combinations indicator functions therefore satisfy condition immediately prototypical example significant interest due wide applicability advantage using either twosample high degree robustness data sampling distribution finite third fourth moment robustness useful high dimensional data analysis sparsity assumption signal interest dealing two experimental groups typically independent scientifically controlled experiments one commonly used statistics hypothesis testing constructing confidence intervals difference means two groups let random sample population mean variance let random sample another population mean variance independent defined following result direct consequence theorem theorem assume exist absolute constants holds uniformly studentized motivated series recent studies effectiveness accuracy testing using investigate whether higher order expansion relative error theorem wang sums holds one use bootstrap calibration correct skewness fan hall yao delaigle hall jin study power properties sparse alternatives wang hall following theorem gives refined moderate deviation result whose proof placed supplementary material chang shao zhou theorem assume let third central moment respectively moreover assume exp holds uniformly min min every refined moderate deviation theorem tstatistic established wang knowledge best result known date equivalently sums examples beyond enumerate three refer nikitin ponikarov examples let two independent random samples population distributions respectively example test statistic order defined kernel chang shao zhou view particular example lehmann statistic defined kernel order particular example kochar statistic kochar statistic constructed kochar test two hazard failure rates different denote class absolutely continuous cumulative distribution functions cdf satisfying two arbitrary cdf let densities thus hazard failure rates defined long positive kochar considered problem testing null hypothesis alternative strict inequality set nonzero measures observe holds strict inequality set nonzero measures recall two independent samples drawn respectively following nikitin ponikarov see kernel order given yyxx xyyx xxyy yxxy studentized term yyxx refers similar treatments apply xyyx xxyy yxxy particular multiple testing via studentized tests testing occurs wide range applications including dna microarray experiments functional magnetic resonance imaging analysis fmri astronomical surveys refer dudoit van der laan systematic study existing multiple testing procedures section consider testing based studentized tests show theoretical results previous section applied problems typical application testing high dimensions analysis gene expression microarray data see whether gene isolation behaves 
differently control group versus experimental group apply assume statistical model given index denotes kth gene indicate ith jth array constants respectively represent mean effects kth gene first second groups independent random variables kth marginal test mean zero variance unequal twhen population variances statistic commonly used carry hypothesis testing null alternative since seminal work benjamini hochberg benjamini hochberg procedure become popular technique microarray data analysis gene selection along many procedures depend often need estimated control certain simultaneous errors shown using approximated asymptotically equivalent using true controlling kfamilywise error rate false discovery rate fdr see example kosorok fan hall yao liu shao tests cao kosorok proposed alternative method control fdr chang shao zhou common thread among aforementioned literature theoretically methods work controlling fdr given level number features sample size satisfy log recently liu shao proposed regularized bootstrap correction method multiple constraint may relaxed log less stringent moment conditions assumed fan hall yao delaigle hall jin using theorem show constraint large scale relaxed log well provides theoretical justification effectiveness bootstrap method frequently used skewness correction illustrate main idea restrict attention special case observations independent indeed test statistics correlated false discovery control becomes challenging arbitrary dependence various dependence structures considered literature see example benjamini yekutieli storey taylor siegmund ferreira zwinderman leek storey friguet kloareg causeur fan han among others completeness generalize results dependent case section normal calibration phase transition consider significance testing problem versus let denote respectively number false rejections number total rejections false discovery proportion fdp defined ratio fdp max fdr expected fdp max benjamini hochberg proposed method choosing threshold controls fdr prespecified level let marginal kth test let order statistics predetermined control level procedure rejects hypotheses max microarray analysis often used identify differentially expressed genes two groups let studentized independent random samples respectively generated according model usually practice moreover assume sample sizes two samples order stating main results first introduce number notation set let denote number true null hypotheses allowed grow increases assume lim line notation used section set var var throughout subsection focus normal calibration let pbk standard normal distribution function indeed exact null distribution thus true unknown without normality assumption theorem assume independent nondegenerate random variables log independent random samples suppose max max min min constants moreover assume log let lim inf chang shao zhou suppose log suppose log log exists constant lim lim inf log iii suppose log denote respectively fdr fdp procedure replaced pbk together conclusions theorem indicate number simultaneous tests large exp normal calibration becomes inaccurate particular skewness parameter given reduces lim inf noted liu shao limiting behavior varies different regimes exhibits interesting phase transition phenomena dimension grows function average skewness plays crucial role also worth noting conclusions iii hold scenario corresponds sparse settings applications gene detections finite moments robustness accuracy normal calibration control investigated cao kosorok corresponds relatively 
dense setting sparse case considered covered bootstrap calibration regularized bootstrap correction subsection first use conventional bootstrap calibration improve accuracy fdr control based fact bootstrap approximation removes skewness term determines inaccuracies standard normal approximation however validity bootstrap approximation requires underlying distribution light tailed seem realistic real data applications pointed literature gene study many gene data commonly recognized heavy tails violates assumption underlying distribution used make conventional bootstrap approximation work recently liu shao proposed regularized bootstrap method shown robust heavy tailedness underlying distribution dimension allowed large exp studentized let denote bootstrap samples drawn independently uniformly replacement respectively let constructed following liu shao use following empirical distribution approximate null distribution thus estimated given pbk respectively fdpb fdrb denote fdp fdr procedure replaced pbk following result shows bootstrap calibration accurate provided log increases strictly slower rate underlying distribution tails theorem assume conditions theorem hold max max constants suppose log fdpb fdrb suppose log fdpb fdrb condition theorem quite stringent practice whereas hardly weakened general bootstrap method applied context error rate control fan hall yao proved bootstrap calibration accurate observed data bounded log regularized bootstrap method however adopts similar idea trimmed estimators twostep procedure combines truncation technique bootstrap method first define trimmed samples ybj regularized parameters specified let xbk corresponding bootstrap samples drawn sampling randomly replacement ybk ybn xbk chang shao zhou respectively next let tbk statistic constructed previous procedure define estimated pbk fbm fbm let fdprb fdrrb denote fdp fdr respectively procedure replaced pbk theorem assume conditions theorem hold max max regularized parameters log log suppose log fdprb fdrrb suppose log fdprb fdrrb view theorem regularized bootstrap approximation valid mild moment conditions significantly weaker required bootstrap method work theoretically numerical performance investigated section highlight main idea proof theorem given supplementary material chang shao zhou proofs theorems based straightforward extensions theorems liu shao thus omitted fdr control dependence section generalize results previous sections dependence case write define every let cov cov characterizes dependence see corr ularly corr subsection impose following conditions dependence structure studentized exist constants max max corr log corr log exist constants number variables dependent less assumption imposes constraint magnitudes correlations natural sense correlation matrix singular condition allowed moderately correlated many vectors condition enforces local dependence structure data saying vector dependent many random vectors independent remaining ones following theorem extends results previous sections dependence case proof placed supplementary material chang shao zhou theorem assume either condition holds log condition holds log suppose satisfied suppose satisfied fdprb fdrrb particular assume condition holds log fdprb fdrrb studentized test let two independent random samples distributions respectively let consider null hypothesis alternative problem arises many applications including testing whether physiological performance active drug better control treatment testing effects policy unemployment 
insurance vocational training program level unemployment chang shao zhou test mann whitney also known wilcoxon test wilcoxon prevalently used testing equality means medians serves nonparametric alternative corresponding test statistic given test widely used wide range fields including statistics economics biomedicine due good efficiency robustness parametric assumptions articles published experimental economics use test okeh reported thirty percent articles five biomedical journals published used test example using test charness gneezy developed experiment test conjecture financial incentives help foster good habits recorded seven biometric measures weight body fat percentage waist size etc participant experiment assess improvements across treatments although test originally introduced rank statistic test distributions two related samples identical prevalently used testing equality medians means sometimes alternative argued formally examined recently chung romano test generally misused across disciplines fact test valid underlying distributions two groups identical nevertheless purpose test equality distributions recommended use statistic smirnov mises statistic captures discrepancies entire distributions rather individual parameter specifically test recognizes deviation much power detecting overall distributional discrepancies alternatively test frequently used test equality medians however chung romano presented evidence another improper application test suggested use studentized median test even test appropriately applied testing asymptotic variance depends underlying distributions unless two population distributions identical hall wilson pointed application resampling pivotal statistics better asymptotic properties sense rate convergence actual significance level nominal significance level rapid studentized pivotal statistics resampled therefore natural use studentized test asymptotic pivotal let denote studentized test statistic dealing samples large number geographical regions suburbs states health service areas etc one may need make many statistical inferences simultaneously suppose observe family paired groups index denotes kth site assume drawn independently drawn test null hypothesis alternative rejected conclude treatment effect drug policy acting within kth area define test statistic constructed kth paired samples according let standard normal random variable true pbk denote estimated based normal calibration identify areas treatment effect acting use method control fdr level rejecting null hypotheses indexed pbk max denote ordered values let fdr method based normal calibration alternative normal calibration also consider bootstrap tion recall two bootstrap samples drawn independently uniformly replacement spectively let bootstrapped test statistic chang shao zhou constructed analogues given specified via replacing respectively using empirical distribution function predewe estimate unknown pbk termined null hypotheses indexed pbk rejected max pbk denote fdrb fdr method based bootstrap calibration applying general moderate deviation result studentized leads following result proof based whitney statistics straightforward adaptation arguments used proof theorem hence omitted theorem assume independent random variables continuous distribution functions triplet log independent samples suppose min constant log var var fdpb fdrb attractive properties bootstrap testing first noted hall case mean rather studentized counterpart rigorously proved bootstrap methods particularly effective 
relieving skewness extreme tails leads accuracy fan hall yao delaigle hall jin interesting challenging investigate whether advantages bootstrap inherited multiple either standardized studentized case studentized numerical study section present numerical investigations various calibration methods described section applied multiple testing problems refer simulation studentized test respectively assume observe two groups dimensional gene expression data independent random samples drawn distributions respectively let two sets random variables components noise vectors follow two types distributions exponential distribution exp density function student degrees freedom exponential distribution nonzero skewness symmetric type error distribution cases homogeneity heteroscedasticity considered detailed settings error distributions specified table assume satisfy two sets random variables consider several distributions error terms standard normal distribution uniform distribution beta distribution beta table reports four settings used simulation either setting know holds hence power null hypothesis generate magnitude difference kth components set assume first components equal log rest zero denote variance table distribution settings exponential distributions student homogeneous case heteroscedastic case exp exp exp exp chang shao zhou table distribution settings identical distributions nonidentical distributions case case beta parameter employed characterize location discrepancy distributions sample size set discrepancy parameter took values significance level procedure specified dimension set compared three different methods calculate procedure normal calibration given section bootstrap calibration regularized bootstrap calibration proposed section regularized bootstrap calibration used approach section liu shao choose regularized parameters compared performance normal calibration bootstrap calibration proposed section compared method evaluated performance via two indices empirical fdr proportion among true alternative hypotheses rejected call latter correct rejection proportion empirical fdr low proposed procedure good fdr control correct rejection proportion high proposed procedure fairly good performance identifying true signals ease exposition report simulation results figures results similar found supplementary material chang shao zhou curve corresponds performance certain method line types specified caption horizontal ordinates four points curve depict empirical fdr specified method level procedure taken respectively vertical ordinates indicate corresponding empirical correct rejection proportion say method good fdr control horizontal ordinates four points performance curve less prescribed levels general shown figures procedure based regularized bootstrap calibration better fdr control based normal calibration errors symmetric follow student panels first row figure show procedures using three calibration methods studentized fig performance comparison procedures based three calibration methods first second rows show results components noise vectors follow exponential distributions respectively left right panels show results homogeneous heteroscedastic cases respectively horizontal vertical axes depict empirical false discovery rate empirical correct rejection proportion respectively prescribed levels indicated unbroken horizontal black lines panel dashed lines unbroken lines represent results discrepancy parameter respectively different colors express different methods employed calculate 
procedure blue line green line red line correspond procedures based normal conventional regularized bootstrap calibrations respectively able control approximately control fdr given levels procedures based bootstrap regularized bootstrap calibrations outperform based normal calibration controlling fdr errors asymmetric performances three procedures different symmetric cases second row figure see procedure based normal calibration distorted controlling fdr procedure based regularized bootstrap calibration still able control fdr given levels chang shao zhou fig performance comparison procedures based two different calibration methods first second rows show results components noise vectors follow distributions specified cases table respectively left right panels show results cases identical distributions nonidentical distributions respectively horizontal vertical axes depict empirical false discovery rate empirical correct rejection proportion respectively prescribed levels indicated unbroken horizontal black lines panel dashed lines unbroken lines represent results discrepancy parameter respectively different colors express different methods employed calculate procedure blue line red line correspond procedures based normal bootstrap calibrations respectively phenomenon evidenced figure comparing procedures based conventional regularized bootstrap calibrations find former approach uniformly conservative latter controlling fdr words procedure based regularized bootstrap identify true alternative hypotheses using conventional bootstrap calibration phenomenon also revealed heteroscedastic case discrepancy parameter gets larger signal stronger correct rejection proportion studentized cedures based three calibrations increase empirical fdr closer prescribed level discussion paper established moderate deviations studentized arbitrary order general framework kernel necessarily bounded typified test statistic widely used broad range scientific research many applications rely misunderstanding tested implicit underlying assumptions explicitly considered relatively recently chung romano importantly provided evidence advantage using studentized statistics theoretically empirically unlike conventional asymptotic behavior studentized counterparts barely studied literature particularly case recently shao zhou proved moderate deviation theorem general studentized nonlinear statistics leads sharp moderate deviation result studentized however extension onesample studentized case totally nonstraightforward requires delicate analysis studentizing quantities proved moderate deviation secondorder accuracy finite moment condition see theorem independent interest contrast case reduced sum independent random variables thus existing results ratios jing shao wang wang directly applied instead modify theorem shao zhou obtain precise expansion used derive refined result finally show obtained moderate deviation theorems provide theoretical guarantees validity including robustness accuracy normal conventional bootstrap regularized bootstrap calibration methods multiple testing control dependence case also covered results represent useful complement obtained fan hall yao delaigle hall jin liu shao case acknowledgements authors would like thank peter hall aurore delaigle helpful discussions encouragement authors sincerely thank editor associate editor three referees constructive suggestions comments led substantial improvement paper chang shao zhou supplementary material supplement moderate deviations studentized twosample 
applications doi supplemental material contains proofs theoretical results main text including theorems additional numerical results references benjamini hochberg controlling false discovery rate practical powerful approach multiple testing stat soc ser stat methodol benjamini yekutieli control false discovery rate multiple testing dependency ann statist borovskich asymptotics von mises functionals soviet math dokl cao kosorok simultaneous critical values high dimensions bernoulli chang shao zhou supplement moderate deviations studentized chang tang marginal empirical likelihood sure independence feature screening ann statist chang tang local independence feature screening nonparametric semiparametric models marginal empirical likelihood ann statist charness gneezy incentives exercise econometrica chen qin test data applications testing ann statist chen shao normal approximation nonlinear statistics using concentration inequality approach bernoulli chen zhang zhong tests covariance matrices amer statist assoc chung romano exact asymptotically robust permutation tests ann statist chung romano asymptotically valid exact permutation tests based statist plann inference delaigle hall jin robustness accuracy methods high dimensional data analysis based student stat soc ser stat methodol dudoit van der laan multiple testing procedures applications genomics springer new york fan hall yao many simultaneous hypothesis tests normal student bootstrap calibration applied amer statist assoc fan han estimating false discovery proportion arbitrary covariance dependence amer statist assoc ferreira zwinderman method ann statist studentized friguet kloareg causeur factor model approach multiple testing dependence amer statist assoc hall relative performance bootstrap edgeworth approximations distribution function multivariate anal hall wilson two guidelines bootstrap hypothesis testing biometrics helmers janssen theorem multivariate math cent mathematisch centrum amsterdam hoeffding class statistics asymptotically normal distribution ann math statistics jing shao wang large deviations independent random variables ann probab kochar comparison two probability distributions reference hazard rates biometrika koroljuk borovskich theory mathematics applications kluwer academic dordrecht kosorok marginal asymptotics large small paradigm applications microarray data ann statist kowalski modern applied wiley hoboken lai shao wang type moderate deviations studentized esaim probab stat leek storey general framework multiple testing dependence proc natl acad sci usa zhong zhu feature screening via distance correlation learning amer statist assoc peng zhang zhu robust rank correlation based screening ann statist liu shao moderate deviation maximum periodogram application simultaneous tests gene expression time series ann statist liu shao phase transition regularized bootstrap largescale false discovery rate control ann statist mann whitney test whether one two random variables stochastically larger ann math statistics nikitin ponikarov large deviations nondegenerate applications bahadur efficiency math methods statist okeh statistical analysis application wilcoxon whitney test medical research studies biotechnol molec biol rev shao zhou type moderate deviation theorems processes bernoulli storey taylor siegmund strong control conservative point estimation simultaneous conservative consistency false discovery rates unified approach stat soc ser stat methodol vandemaele veraverbeke type large deviations studentized metrika 
wang limit theorems large deviation electron probab electronic chang shao zhou wang refined large deviations independent random variables theoret probab wang hall relative errors central limit theorems student statistic applications statist sinica wang jing zhao bound studentized statistics ann probab wilcoxon individual comparisons ranking methods biometrics zhong chen tests regression coefficients factorial designs amer statist assoc chang school statistics southwestern university finance economics chengdu sichuan china school mathematics statistics university melbourne parkville victoria australia shao department statistics chinese university hong kong shatin hong kong qmshao zhou department operations research financial engineering princeton university princeton new jersey usa school mathematics statistics university melbourne parkville victoria australia wenxinz | 10 |
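The row above describes a multiple-testing pipeline built on studentized two-sample Mann–Whitney statistics, normal calibration, and false discovery rate (FDR) control across K sites. The sketch below illustrates one concrete reading of that pipeline: a placement-based (Sen/DeLong-type) variance estimate for the studentization, two-sided normal p-values, and the Benjamini–Hochberg step-up rule. The variance estimator, sample sizes, signal strength, and level alpha are illustrative assumptions, not the paper's exact choices.

```python
# Minimal sketch: studentized two-sample Mann-Whitney statistics with
# normal calibration and Benjamini-Hochberg FDR control across K sites.
# The placement-based (Sen/DeLong-type) variance estimate is one standard
# studentization; the paper's exact construction may differ.
import numpy as np
from scipy.stats import norm

def studentized_mw(x, y):
    """Studentized Mann-Whitney statistic for continuous samples x, y."""
    m, n = len(x), len(y)
    ind = (x[:, None] < y[None, :]).astype(float)   # I(x_i < y_j)
    u = ind.mean()                                   # U / (m n)
    p_x = ind.mean(axis=1)                           # placements of x_i among y
    p_y = ind.mean(axis=0)                           # placements of y_j among x
    var_u = p_x.var(ddof=1) / m + p_y.var(ddof=1) / n
    return (u - 0.5) / np.sqrt(max(var_u, 1e-12))

def bh_fdr(pvals, alpha):
    """Benjamini-Hochberg step-up rule; returns a boolean rejection mask."""
    K = len(pvals)
    order = np.argsort(pvals)
    passed = pvals[order] <= alpha * np.arange(1, K + 1) / K
    k_max = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(K, dtype=bool)
    reject[order[:k_max]] = True
    return reject

rng = np.random.default_rng(0)
K, m, n, alpha = 200, 50, 50, 0.1
# First 20 sites carry a location shift (true alternatives), the rest do not.
stats = np.array([
    studentized_mw(rng.exponential(1.0, m),
                   rng.exponential(1.0, n) + (0.5 if k < 20 else 0.0))
    for k in range(K)
])
p_normal = 2.0 * norm.sf(np.abs(stats))   # two-sided normal calibration
print("rejections:", bh_fdr(p_normal, alpha).sum())
```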
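A complementary sketch of the bootstrap calibration described in the same row: within each group the bootstrap resamples are drawn independently and uniformly with replacement, the studentized statistic is recomputed and re-centred at the observed estimate, and the resulting two-sided bootstrap p-values can be fed into the same Benjamini–Hochberg routine as above. The number of resamples B, the re-centring convention, and the small-sample correction are assumptions made for illustration.

```python
# Minimal sketch, assuming a bootstrap-t style calibration of the studentized
# Mann-Whitney statistic: resample within each group, re-centre at the
# observed estimate, and report a two-sided bootstrap p-value.
import numpy as np

def mw_estimate_and_se(x, y):
    ind = (x[:, None] < y[None, :]).astype(float)
    u = ind.mean()
    var_u = ind.mean(axis=1).var(ddof=1) / len(x) \
          + ind.mean(axis=0).var(ddof=1) / len(y)
    return u, np.sqrt(max(var_u, 1e-12))

def bootstrap_pvalue(x, y, B=2000, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    u_hat, se_hat = mw_estimate_and_se(x, y)
    t_obs = (u_hat - 0.5) / se_hat
    t_star = np.empty(B)
    for b in range(B):
        xb = rng.choice(x, size=len(x), replace=True)
        yb = rng.choice(y, size=len(y), replace=True)
        ub, seb = mw_estimate_and_se(xb, yb)
        # Centre at the observed estimate: the bootstrap approximates the
        # null distribution of the studentized statistic.
        t_star[b] = (ub - u_hat) / seb
    return (1.0 + np.sum(np.abs(t_star) >= abs(t_obs))) / (B + 1.0)

rng = np.random.default_rng(1)
x = rng.exponential(1.0, 50)
y = rng.exponential(1.0, 50) + 0.4
print("bootstrap p-value:", bootstrap_pvalue(x, y, rng=rng))
```

Replacing the normal p-values in the previous sketch by these bootstrap p-values gives the bootstrap-calibrated FDR procedure compared in the simulations.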
unified method first third person action recognition ali javidani ahmad department computer science engineering shahid beheshti university tehran iran cyberspace research center shahid beheshti university tehran iran classification human action recognition deep learning convolutional neural network cnn optical flow motion recognizing activities highly challenging due fact circumstances camera recording actions completely different first third person videos exist two main approaches classifying group best knowledge exist unified method works perfectly main motivation work provide unified framework classify first videos toward goal two complementary streams designed capture motion appearance features video data motion stream based calculating optical flow images estimate motion video following time using pot representation method different pooling operators motion dynamics extracted efficiently appearance stream obtained via describing middle frame input video utilizing networks method evaluated two different datasets dogcentric demonstrated proposed method achieves high accuracy datasets introduction related works video recognition one popular fields artificial intelligence aims detect recognize ongoing events videos help humans inject vision robots order assist different situations instance one prominent applications video classification cars going become available market totally two main categories videos researchers conduct experiments videos videos times camera located specific place without movement scarcely slight movement records actions humans videos person wears camera involves directly events reason videos full general two major approaches classifying videos traditional modern approaches traditional ones based descriptors try detect different aspects action first step features video segments extracted features interest points dense points obtained raw input frames one ways obtain corner points video feature points described handcrafted descriptors hog hof mbh describe features effectively descriptors extended dimensions incorporate temporal information calculations two popular ones paper new video classification methodology proposed applied first third person videos main idea behind proposed strategy capture complementary information appearance motion efficiently performing two independent streams videos first stream aimed capture motions shorter ones keeping track elements optical flow images changed time optical flow images described networks trained large scale image datasets set time series obtained aligning descriptions beside extracting motion features time series pot representation method plus novel pooling operator followed due several advantages second stream accomplished extract appearance features vital case video classification proposed method evaluated first datasets results present proposed methodology reaches state art successfully raw frames optical flow motion feature vector description description final feature vector pot time series gradientvariance svm description middle frame description appearance feature vector figure general pipeline proposed methodology framework two streams obtaining motion appearance features top stream extracts motion bottom extracts appearance features following feature extraction description phases order obtain feature vector becoming independent variables number frames video number interest points encoding step required encoding methods like bag visual words bovw fisher kernel used till experiments illustrate fisher kernel accurate 
former one however recently pot encoding method proposed ryoo could reach results case videos modern approaches mostly based deep learning convolutional neural networks cnns could succeed giving best results image recognition image segmentation forth although one problem case video domain networks designed input images address problem researches conducted case point karpathy introduced four models cnns models time dimension incorporated different channels network zisserman proposed two stream cnn model classify videos however method suffers problem number stacked optical flow frames given cnn limited due problem overfitting network furthermore better estimation motions video convolutional neural network devised convolution pooling layers operate depth time dimension convolution small number due vast amount convolution calculations hence capture motion dynamics longer ones would lost using network recent work used stacked obtain tractable dimensionality case videos also another work designed deep fusion framework aid lstms representative temporal information extracted could reach results three widely used datasets iii proposed method section introduce proposed method pipeline generally videos either consist two different aspects appearance motion appearance related detecting recognizing existing objects frame motion following time hence motion information highly correlated temporal dimension depicted fig order capture two mentioned aspects video proposed framework two streams independent upper stream extracting motion bottom appearance features following motion feature extraction explained detail firstly images optical flow consecutive frames calculated helps estimate motion nearby frames video however estimating motion strictly challenging open area research idea keeping track motion elements vary time estimate longer changes therefore optical flow images described specific set features pursued time dimension found best way utilizing networks already trained large scale image datasets training network highly time consuming process needed also strong representation would obtained since networks could reach results image recognition aligning representation sequential frames beside leads obtaining set time series various ways extract features time series pot representation plus novel pooling operator chosen due several prominent reasons firstly thanks temporal filters time series break assists represent activity lower levels furthermore pot benefited extracting different features time series represent different aspects data resulted time series especially coming first person videos full sophisticated specific feature max represent result pot framework beneficial extracting motion features time series pot representation method extracts different features max sum histogram time series gradient time series final feature vector representation designed motion features framework concatenation features together time series add variance another pooling operator pooling set demonstrate feature also extracts useful information definition pooling operator time domain value ith time series time max sum pooling operators defined histograms time series gradient pooling operators defined proposed variance new pooling operator follows figure sample frames different classes two different datasets left dogcentric right vectors expected represent complementary information last step svm trained final feature vector experimental results conducted experiments two public datasets dogcentric fig represents sample 
frames type contains different human activities playing basketball volleyball consists human activity videos dataset camera usually located specific place large amounts movement evaluate method performed leave one cross validation loocv dataset original work dogcentric activity dataset camera located back dogs thus large amounts makes highly challenging class activities consisting activities dogs walking drinking well interacting humans total number videos dataset like previous methods half total number videos per class used training half used testing classes odd number clips number table comparison encoding methods dogcentric dataset per class final classification accuracy activity class besides order concentrate better resulted time series temporal pyramid filters applied series hence resultant time series whole time domain whole time domain parts forth motion features extracted explained applying pooling operators level resulted time series exploiting temporal pyramid filters hand appearance features substantial role classifying videos pipeline used middle frame video feed network obtain appearance feature vector final representation video acquired concatenating motion appearance feature ball play car drink feed turn head left turn head right pet body shake sniff walk final accuracy bovw method accuracy ifv pot proposed test instances one number training performed algorithm times different permutation train test sets mean classification accuracy reported table clear proposed method achieved significant improvement terms classification accuracy compare two traditional representation methods bag visual words bovw improved fisher vector ifv addition proposed method could outperform baseline pot method classes dogcentric dataset also final accuracy obtaining optical flow images consecutive frames used popular method horn schunck convert colorful images order fed networks flow visualization code baker followed implementation method applied frames video sample existing frames googlenet network utilized describe either optical flow images motion stream middle frame appearance stream feasible omitting softmax layers table comparison classification accuracy dogcentric dataset according temporal pyramid levels number temporal pyramids classification accuracy table comparison results approaches dataset method accuracy hasan liu dense trajectories soft attention cho snippets two stream lstm two stream lstm proposed method linear svm two stream lstm two stream lstm proposed method svm network googlenet neurons layer furthermore case number temporal pyramids different experiments conducted results illustrated table seen increasing number temporal pyramids levels classification accuracy improved increasing five levels decreased compare four levels believe phenomenon due fact increasing temporal pyramid levels number dimensionality would increase dramatically hand exist enough training data learning classifier reason increasing number temporal pyramids always improve performance system proposed method also evaluated video dataset number temporal pyramids used dataset sampling frames performed comparison method results dataset reported table seen proposed method svm could reach best results dataset experiments svm classifier linear kernel used later one showed better performances conclusion paper new approach video classification proposed capability employing two different categories first videos motion changes calculated extracting discriminant features motion time series following pot representation 
method novel pooling operator final feature vector resulted concatenating two complementary feature vectors appearance motion perform classification evaluating proposed method two different types datasets comparing obtained results state art concluded proposed method works perfectly groups also increases accuracy references liu luo shah recognizing realistic actions videos wild computer vision pattern recognition cvpr ieee conference iwashita takamine kurazume ryoo animal activity recognition egocentric videos pattern recognition icpr international conference ryoo rothrock matthies pooled motion features videos proceedings ieee conference computer vision pattern recognition liu shao zheng realistic action recognition via gaussian processes pattern recognition vol uijlings duta rostamzadeh sebe realtime video classification using dense proceedings international conference multimedia retrieval laptev interest points international journal computer vision vol dalal triggs histograms oriented gradients human detection ieee computer society conference computer vision pattern recognition dalal triggs schmid human detection using oriented histograms flow appearance european conference computer vision wang schmid action recognition improved trajectories proceedings ieee international conference computer vision klaser schmid descriptor based bmvc british machine vision conference scovanner ali shah sift descriptor application action recognition proceedings acm international conference multimedia csurka dance fan willamowski bray visual categorization bags keypoints workshop statistical learning computer vision eccv csurka perronnin fisher vectors beyond image representations international conference computer vision imaging computer graphics karpathy toderici shetty leung sukthankar video classification convolutional neural networks proceedings ieee conference computer vision pattern recognition simonyan zisserman convolutional networks action recognition videos advances neural information processing systems tran bourdev fergus torresani paluri generic features video analysis corr vol wang gao song zhen sebe shen deep appearance motion learning egocentric activity recognition neurocomputing vol gammulle denman sridharan fookes two stream lstm deep fusion framework human action recognition applications computer vision wacv ieee winter conference horn schunck determining optical flow artificial intelligence vol baker scharstein lewis roth black szeliski database evaluation methodology optical flow international journal computer vision vol hasan incremental activity modeling recognition streaming videos proceedings ieee conference computer vision pattern recognition sclaroff object scene actions combining multiple features human action recognition computer wang schmid liu action recognition dense trajectories computer vision pattern recognition cvpr ieee conference sharma kiros salakhutdinov action recognition using visual attention arxiv preprint cho lee chang robust action recognition using local motion group sparsity pattern recognition vol hausknecht vijayanarasimhan vinyals monga toderici beyond short snippets deep networks video classification proceedings ieee conference computer vision pattern recognition | 1 |
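The action-recognition row above builds its motion feature by pooling a time series of per-frame CNN descriptions of optical-flow images with several operators (max, sum, histograms of positive and negative gradients) plus a newly proposed variance operator, applied over a temporal pyramid, and then concatenating the result with an appearance descriptor of the middle frame. The sketch below shows that pooling step on synthetic data; the plain per-dimension variance, the power-of-two pyramid splits, and the descriptor dimension are assumptions about details that are garbled in the text, and random vectors stand in for the actual CNN outputs.

```python
# Minimal sketch of PoT-style temporal pooling of a per-frame descriptor
# series, including a variance pooling operator, over a temporal pyramid.
import numpy as np

def pool_segment(seg):
    """Pool one temporal segment (T x D) into a single feature vector."""
    diff = np.diff(seg, axis=0) if len(seg) > 1 else np.zeros((1, seg.shape[1]))
    return np.concatenate([
        seg.max(axis=0),                      # max pooling
        seg.sum(axis=0),                      # sum pooling
        np.maximum(diff, 0).sum(axis=0),      # histogram of positive gradients
        np.maximum(-diff, 0).sum(axis=0),     # histogram of negative gradients
        seg.var(axis=0),                      # variance pooling operator
    ])

def temporal_pyramid_pool(series, levels=4):
    """Pool over a temporal pyramid: 1, 2, 4, ... segments per level."""
    feats = []
    for lvl in range(levels):
        for seg in np.array_split(series, 2 ** lvl, axis=0):
            feats.append(pool_segment(seg))
    return np.concatenate(feats)

# Example: 60 frames, 1024-dimensional per-frame descriptors (stand-ins
# for CNN descriptions of optical-flow images).
rng = np.random.default_rng(0)
motion_series = rng.normal(size=(60, 1024))
motion_feat = temporal_pyramid_pool(motion_series, levels=4)
appearance_feat = rng.normal(size=1024)   # stand-in for middle-frame CNN output
video_feat = np.concatenate([motion_feat, appearance_feat])
print(video_feat.shape)
```

The concatenated vector would then be passed to a linear SVM, as in the row above.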
attacks uas networkschallenges open research problems vahid behzadan feb dept computer science engineering university nevada reno usa vbehzadan critical missions unmanned aerial vehicles uav bound widen grounds adversarial intentions cyber domain potentially ranging disruption command control links capture use airborne nodes kinetic attacks ensuring security electronic communications systems paramount importance safe reliable integration military civilian airspaces past decade active field research produced many notable studies novel proposals attacks mitigation techniques uav networks yet generic modeling networks typical manets isolated systems left various vulnerabilities investigative focus research community paper aims emphasize critical challenges securing uav networks attacks targeting vulnerabilities specific systems aspects index security vulnerabilities ntroduction century scene rapid revolution civilization approach interactions advancement communication technologies combined unprecedentedly increasing trust interest autonomy pushing mankind evolutionary jump towards delegation challenging tasks agents mars rovers search rescue robots witnessed trend overcoming limitations inherent replacement personnel systems capable performing tasks risky repetitive physically difficult simply economically infeasible human actors unmanned aerial vehicles uavs notable examples revolution since early military intelligence theaters seen explosive growth deployment tactical uavs surveillance transport combat operations meantime civilian use uavs gained traction manufacturing operations costs small uavs undergoing steady decline cheaper cost uavs also led growing interest collaborative deployment multiple uavs perform specific tasks monitoring conditions farms patrolling national borders yet multitude challenges associated vision solving crucial safe reliable employment systems civilian military scenarios one challenge ensuring security systems comprise uavs remote operational conditions leave burden command control reliant onboard gnss telemetry satellite relay mobile ground unit link satellite link atg link ground control station fig communication links uas network components body literature issue seen accelerated growth recent years partly due major cyber attacks uavs overwhelming number potential vulnerabilities uavs indicates need vigorous standards frameworks assurance reliability resilience malicious manipulations aspects uavs mechanical components information processing units communications systems operations links necessary exchange situational operational commands basis essential functions formation control task optimization architecture uav networks current consensus research community biased towards decentralized hoc solutions allow dynamic deployment unmanned aerial systems uas minimal time financial expenditure preparations structure typical uas network shown figure considering various types links interfaces depicted figure deduced networks inherently complex nature integration multiple subsystems aggregates individual vulnerabilities may result new ones rooted interactions subsystems hence uas present research community novel interdisciplinary challenge aim paper emphasize critical vulnerabilities specific network communications aspects uavs provide research community list open problems ensuring safety security growing technology niqueness uas etworks accurate analysis vulnerabilities uas networks necessitates understanding airborne network differs traditional computer networks much 
recent studies area compare uas networks mobile hoc networks manets wireless sensor networks wsn uas communications protocols may initially seem similar generic distributed mobile networks yet differences mobility mechanical degrees freedom well operational conditions build grounds separate classification uas networks one distinguishing factor velocity airborne vehicles may range several hundreds miles per hour high mobility airborne platforms increases complexity requirements communications subsystem many aspects uas network link layer management links adaptation access control fast enough accommodate tasks neighbor discovery resource allocation extremely dynamic environment likewise network layer must able provide fast route discovery path calculation preserving reliability information flow physical layer communications kinetic aspects uas give rise unique requirements span uas network may vary clusters far sparse distributions transmission power uav radios must adjustable efficient power consumption sustained communications also since geography environment mission may vary rapidly channel availability uas links subject change potential solution uas equipped dynamic spectrum access dsa adaptive radios provide required agility furthermore conventional antenna arrangement airborne platforms changes orientation attitude aircraft affect gain onboard radios problem intensified unmanned aircraft elimination risk human pilot allows longer unconventional maneuvers considerations clarify demand fresh vantage point analyzing problem security uas networks reliability today uavs need studied models adopt inclusive view systems impact seemingly benign deficiencies overall vulnerability uavs iii natomy uav uavs systems meaning operations reliant interaction physical computational elements system consequently security uav dependent computation communications elements protocols also physical components system heavy entanglement traditionally independent components requires thorough framework analysis security issues uavs inclusive entire airframe one obstacle developing framework variety uav architectures capabilities makes design generic model iff antenna satcom antenna nav optical sensors multiband data link radar antenna antenna uhf antenna fig sensing communication components uav difficult yet similarity fundamental requirements systems allows generation high level system model conventional types uavs figure depicts breakdown components conventional uav uavs contain multiple communication antennas including air ground atg air air ata satellite data link navigation antennas along set sensors positioning navigation uav typically consisted global navigation satellite system gnss receiver accurate positioning inertial measurement unit imu relative positioning based readings kinetic sensors subsystem extended include air traffic monitors collision avoidance systems inside fuselage one processors supervise operation navigation uav using output various radios sensors adjustment electronic mechanical parameters process performed adaptive control mechanisms many dependent feedback loops elements mentioned section may become subject malicious exploitation leading uav undesirable states critical malfunctions overview otential attacks table lists uninvestigated attacks uas networks categorized according network functionalities factors table emphasizes criticality security problem potential vulnerability exists every major component ranging outer fuselage antennas network layers application stack section 
provides overview attacks listed table presents preliminary ideas potential mitigating approaches areas research sensors navigation absence human pilot airframe uavs puts burden observing environment set sensors onboard aircraft whether autonomous remotely piloted sensors eyes ears flight controller provide environmental measurements necessary safe successful completion mission however malicious exploitation sensors critical systems widely neglected vulnerability assessment table attacks uas networks component attacks sensors visual navigation spoofing physical layer adaptive radios deceptive attacks spectrum sensing jamming antennas disruption deception direction arrival estimator beamnullinduced jamming orientation induction defensive maneuvers link layer topology inference topological vulnerability formation adaptive jamming routing attacks network layer traffic analysis disruption convergence air traffic control spoofing induced collisions fault handling manipulation fault detection systems attacker may manipulate misuse sensory input functions trigger transfer malware misguide processes dependent sensors simply disable cause denial service attacks trigger undesired failsafe mechanisms navigational measurements gnss imu units traditionally used tandem provide accurate positioning aircraft gnss signals gps highly susceptible spoofing attacks report demonstrates uavs rely commercial gps receivers positioning vulnerable relatively simple jamming spoofing attacks may lead crash capture uav adversaries since establishment gps various countermeasures gnss spoofing proposed ranging exploitation direction polarization received gps signal attack detection beamforming statistical signal processing methods elimination spoofing signals however speed spatial freedom uavs render many basic assumptions criteria techniques inapplicable authors propose variations imu gps readings detection spoofing attacks anomalies fused measurements theoretically attractive practical deployment technique requires highly reliable imus adaptive threshold control efficient performance economically undesirable small uavs industry practical limitations accuracy implementation leave detection technique ineffective advanced spoofing attacks demonstrating insufficiency current civilian gnss technology applications fusion imu gnss systems sensors video camera may lessen possibility spoofing yet navigation also subject attacks simplest blinding camera saturating receptive sensors high intensity laser beams sophisticated attack may aim deception visual navigation system smaller areas homogenizing periodically modifying texture terrain beneath uav may cause miscalculations movement orientation investigating effect attacks control loop fused positioning system may determine feasibility attacks potential mitigation techniques detection attacks navigation subsystem basis reactive countermeasures triggering hovering mechanisms however following section demonstrates mechanisms also potential subjects malicious manipulation robustness sensory navigational subsystem spoofing attacks may improved implementation proactive mechanisms elimination spoofing signals applicability uavs yet investigated fault handling mechanisms even stringent reliability requirements uavs mechanical electronic subsystems uavs remain prone faults due physical damage unpredicted state transitions therefore critical uav systems must consider possibility faults implement fault handling mechanisms reduce impact events system typical examples fault handling 
mechanisms entering hovering pattern temporary faults occur persistent faults event fatal faults capture crash remotely operated systems fault handling mechanisms may triggered automatically certain fault detected process adds yet another attack surface uas networks fault detection mechanisms may subject manipulation instance temporary disruption communications triggers hovering pattern uav adversary jam link bind motion aircraft thus simplify kinetic destruction physical capture severe case sensory manipulation allows induction capture conditions tactical uav thereby triggering autodestruction mechanism air traffic control atc collision avoidance integration unmanned vehicles national international airspaces requires guarantees safety reliability uav operations one major consideration safety physical layer typical uavs require multiple radio interfaces retain continuous connectivity essential links satellite relays ground control stations uavs degree complexity along physical mechanical characteristics uavs widen scope potential vulnerabilities enable multiple attacks specific uas networks section presents discussion attacks physical layer uav nodes adaptive radios operational environment uas detect tcas advisory descend tcas advisory climb detect altitude airborne operations situation awareness collision avoidance modern manned aircraft major civilian airspaces equipped secondary surveillance technologies automatic dependent surveillance broadcast allow aircraft monitor air traffic vicinity information along available means traffic monitoring provide situation awareness traffic advisory collision avoidance system tcas monitors risk collision aircraft generates advisories prevent collisions growing interest deployment uavs implementation similar technologies uas crucial recent literature contain several proposals tcas atc solutions uavs many based adaptation commercial tcas protocols security point view approach suffers several critical vulnerabilities rendering unfeasible missioncritical uas applications firstly insecure protocol design lack authentication unencrypted broadcast nature protocol make room relatively simple attacks ranging eavesdropping manipulation air traffic data jamming injection false data consequently tcas system relying produce erroneous results advisories leading unwanted changes flight path worst scenario collisions also tcas shown susceptible flaw known collisions common implementations tcas equipped prediction capabilities foresee effect advisory produce dense traffic conditions certain scenarios may cause tcas generate advisories lead state avoidance collision possible hence adversary capable manipulating traffic data intentionally orchestrate conditions leading collisions authors provide example flaw airplane scenario illustrated figure scenario initially collision path hence tcas generates collision avoidance advisory descend climb respectively lower altitude situation holds causing climb puts collision path even though tcas fail generate new correction advisories uavs advisory longer practical enough time collision implement new path tcas advisory descend detect detect tcas advisory climb fig example collision airplane scenario networks highly dynamic sustained reliable communications necessitates employment radios capable adjusting changes propagation links conditions depending operational requirements adaptability may apply physical layer parameters transmit power frequency modulation configuration antennas procedure responsible controlling parameters must 
essentially rely environmental inputs manipulated adversaries result undesirable configurations issue analogous deceptive attacks spectrum sensing process cognitive radio networks various mitigation techniques proposed based anomaly detection fusion distributed measurements however rapid variation conditions uas network may lead situations determination baseline anomaly detection practical consideration also develops necessity rapid adjustments limits acceptable amounts redundancy overhead similarly deployment airborne nodes hostile environments reduces feasibility relying collaboration distributed sensors therefore countermeasures sufficient agile uas radios novel solutions must tailored according unique requirements airborne networks antennas current trend antenna selection uav radios favored towards omnidirectional antennas defined relatively homogeneous reception transmission directions horizontal vertical planes feature simplifies communications mobile nodes homogeneity gain eliminates need considering direction transmissions hand indiscriminate nature omnidirectional antennas extends attack surface eavesdroppers jammers since also need tune towards exact direction radios implement attacks countermeasure class attack utilization directional antennas communicate certain directions blind others besides higher security advantages directional antennas include longer transmission ranges spatial reuse thus providing higher network capacity one downside associated approach inevitable escalation overhead maintaining directional communications highly mobile networks complex costly task requires knowledge nodes positions well employment antennas capable reconfiguring beam patterns overcome disadvantages two approaches midway solution combining simplicity omnidirectional radios spatial selectivity directional antennas actualized form beamforming antenna arrays antennas capable detecting direction arrival doa individual signals measurement along system parameters used electronically reconfigure radiation pattern directionality antenna array beamforming studied mitigation technique jamming attacks allows spatial filtering jammer signals adjusting antenna pattern null placed towards direction jammer accuracy efficiency technique depends correct detection jamming signal well resolution beamformer doa estimations adversary may attack doa estimator shaping jamming signals mimic waveforms nearby legitimate node thus avoiding detection causing false detections another attack scenario exploits process beamnulling hoc uas network beamnulling must implemented distributed fashion allow targeted nodes retain regain connectivity network independently due lack coordination nulls created one node towards jammer may also null direction legitimate signals depending mobility model formation network adversary may deploy multiple mobile jammers strategically controlled trajectories manipulate doa measurements eventually cause network null legitimate links necessary certain conditions adversary maximize efficiency jamming attacks persistently manipulating distributed beamnulling mechanism way solution converges towards maximally disconnected state analytical studies feasibility criteria attack may produce insights possible countermeasures mitigation techniques orientation depicted figure conventional uav employs multiple fixed antennas different sides dedicated certain application consider atg antenna placed lower side uav discussed previously uav performs maneuver ascends steep climb angle atg antenna longer capable 
communicating ground antenna therefore atg link lost issue exploited jamming uas networks employ spatial retreat mitigation technique observing reaction nodes jamming attacks adversary may infer reformation strategy adapt attack defensive reformation certain nodes leads loss links due new orientation antennas link layer formation similar generic multihop wireless networks topology uas network determined based location uavs relative uavs closer threshold directly communicate farther must utilize relay nodes reach destination knowledge topology network allows adversaries optimize attacks analyzing structure target determine vulnerable regions identifying nodes whose disconnection incur maximum loss connectivity network even though effect topology resilience network widely studied proposed mitigation techniques fail provide practical solutions uas networks class solutions based security obscurity approach suggesting employment covert communications nodes hide topology network adversaries besides undesirable overhead approach terms decreased network throughput increased processing costs shown topology networks estimated high degree accuracy via timing analysis attacks therefore hiding topology may serve reliable solution mission critical scenarios alternative mitigation technique adaptive control topology approach detection jamming attack triggers reformation process nodes uas network change positions retain connectivity fundamental assumption approach ability nodes detect localize attacks may always practical promising area investigation problem minimizing topological vulnerability targeted jamming attacks development distributed formation control techniques consider optimization problem may lead highly efficient techniques ensuring dynamic resilience uas networks mitigation technique topology inference attacks randomization transmission delays expected introducing randomness forwarding delays weakens observed correlation connected hops therefore reduces accuracy timing analysis attacks however high mobility uas networks consequent requirement minimal latency limit maximum amount delay permissible networks constraint limits randomness forwarding delays may neutralize effect mitigation technique potential alternative delay randomization transmission decoy signals perturb adversary correlation analysis proposal may extended incorporating topology control resultant formation optimized decoy transmissions way spatial distribution traffic network appears homogeneous outside observer thereby inducing artificial correlation nodes network extent authors knowledge feasibility overhead optimal implementation approach yet analytically experimentally studied network layer impact high mobility uas networks greatly accentuated network layer speed frequency changes topology uas network give rise many challenges still active subjects research yet studies security routing mechanisms tend follow tradition equating uas networks manets indeed unique features unmanned airborne networks generate set challenges network layer match criteria conventional manets highly dynamic nature uas networks well stringent requirements latency necessitate novel routing mechanisms capable calculating paths rapidly changing topologies survey state art area presented proposed methods may prone potential vulnerabilities demand detailed technical analysis comparison proposals terms security yet fulfilled similar link layer routing layer uas networks also vulnerable traffic analysis attacks aiming infer individual flows well pairs 
connections various mitigation techniques attacks proposed many rely traditional approaches mixing decoy transmissions techniques require addition redundancies overhead uas networks comprehensive feasibility analysis optimal design corresponding defense strategies vital yet available research community mobile routing uas networks surface attacks convergence network discussed topology unmanned airborne networks subject manipulation adversarial actions exploitation adaptive formation control jamming attacks also many recently proposed routing mechanisms airborne networks rely global knowledge geographical positions every node network may also prone manipulation sophisticated adversary may able design strategic combination topological perturbation sensor manipulations prevent slow convergence routing network investigation attack terms feasibility well potential countermeasures may prove valuable efficient protection uas networks operating hostile environments onclusions nature uavs demand extension scope ordinary vulnerability analysis systems addition threats electronic computational components largely overlooked class vulnerabilities fostered interactions mechanical elements computational subsystems pondering list critical attacks presented paper alarming conclusion drawn serious threats still remain unmitigated every networking component uas communications also interdependency network components including sensors physical elements uavs considering seriousness open issues aspects uavs successful move towards age mainstream unmanned aviation envisioned without remedying void effective solutions critical challenges eferences kim wampler goppert hwang aldridge cyber attack vulnerabilities analysis unmanned aerial vehicles infotech aerospace javaid sun devabhaktuni alam cyber security threat analysis modeling unmanned aerial vehicle system homeland security hst ieee conference technologies ieee banerjee venkatasubramanian mukherjee gupta ensuring safety security sustainability systems proceedings ieee vol subramanian beyah sensory channel threats cyber physical systems call communications network security cns ieee conference ieee wesson humphreys hacking drones scientific american vol broumandan nielsen lachapelle gps vulnerability spoofing threats review antispoofing techniques international journal navigation observation vol humphreys ledvina psiaki ohanlon kintner assessing spoofing threat development portable gps civilian spoofer proceedings ion gnss international technical meeting satellite division vol hartmann steup vulnerability uavs cyber attacksan approach risk assessment cyber conflict cycon international conference ieee tang causal models analysis collisions phd thesis universitat barcelona bhattacharjee sengupta chatterjee vulnerabilities cognitive radio networks survey computer communications vol bhunia behzadan regis sengupta performance adaptive beam nulling multihop hoc networks jamming ieee international symposium cyberspace safety security css new york behzadan sengupta inference topological structure vulnerabilities adaptive jamming tactical hoc networks review elsevier journal computer system sciences zhu distributed formation control via online adaptation decision control european control conference ieee conference ieee bekmezci sahingoz temel flying networks fanets survey hoc networks vol kong hong gerla routing scheme anonymity threats mobile hoc networks mobile computing ieee transactions vol | 3 |
construction linear codes two ziling henga qin yuea department mathematics nanjing university aeronautics astronautics nanjing china state key laboratory cryptology box beijing china state key laboratory information security institute information engineering chinese academy sciences beijing china abstract jul linear codes weights important coding theory attracted lot attention paper present construction linear codes trace norm functions finite fields weight distributions linear codes determined cases based gauss sums interesting construction produce optimal almost optimal codes furthermore show codes used construct secret sharing schemes interesting access structures strongly regular graphs new parameters keywords linear codes secret sharing schemes strongly regular graphs gauss sums msc introduction let denote finite field elements linear code subspace fnq minimum hamming distance code called optimal code exists let denote number codewords hamming weight code length weight enumerator defined sequence called weight distribution code said number nonzero sequence equals weight distribution interesting topic investigated many papers particular survey cyclic codes weight distributions provided weight distribution gives minimum distance error correcting capability code addition contains important information computation probability error detection correction respect error detection correction algorithms recently ding proposed effective construction linear codes follows let power linear code length defined xdn denotes trace function set called defining set set well chosen code may good parameters using paper supported foundation science technology information assurance laboratory email addresses zilingheng ziling heng yueqin qin yue preprint submitted journal latex templates july construction selecting proper defining sets many good codes found let function construction equivalently written let positive integers gcd let trqmi trace function fqmi let nqm norm function fqm fqmi fqm nqm qmi qmi paper present construction linear code xnqm xnqm defining set given nqm since norm function nqm surjective exists element nqm nqm nqm nqm implies need consider remark construction generalization authors determined lower bound minimum hamming distance gave weight distributions respectively purpose paper determine weight distribution defined equation cases main mathematical tools used paper gauss sums consequently obtain four classes linear codes flexible parameters examples given show codes optimal almost optimal applications codes used construct secret sharing schemes interesting access structures strongly regular graphs new parameters following notations used paper canonical additive characters respectively generators multiplicative character groups respectively gauss sums respectively primitive element gcd gcd gauss sums section recall basic results gauss sums important tools paper let finite field elements power prime canonical additive character defined follows denotes primitive root complex unity trace function orthogonal property additive characters see given otherwise let multiplicative character trivial multiplicative character defined known isomorphic orthogonal property multiplicative characters form multiplication group multiplicative character see given otherwise gauss sum defined easy see gauss sums viewed fourier coefficients fourier expansion restriction terms multiplicative characters paper gauss sum important tool compute exponential sums general explicit determination gauss sums difficult 
problem cases gauss sums explicitly determined following state gauss sums case lemma case gauss sums let multiplicative character order assume exists least positive integer mod let integer gauss sums order given furthermore gauss sums given even otherwise odd quadratic gauss sums following lemma theorem suppose quadratic multiplicative character odd prime mod mod exponential sums section investigate two exponential sums used calculate weight distribution let canonical additive character let canonical additive character fqmi respectively denote ybx ybx firstly begin compute exponential sum lemma let positive integers gcd gcd let qmi proof let implies using fourier expansion additive characters see equation since obtain ord therefore otherwise assume let equivalent mod implies mod therefore mod mod known gcd gcd mod mod denote substituting equation hence assume hence since gcd denote mod mod otherwise since let proof completed remark fourier expansion additive characters used lemma effective technique computing exponential sums also employed determine weight distribution cyclic codes yue lemma know value distribution determined gauss sums known following mainly consider special cases give value distribution lemma let notations hypothesises lemma value distribution following proof lemma times times following discuss value distribution exponential sum respectively assume clear assume hence note ord ord give value distribution several cases even lemma let mod otherwise hence value distribution times times odd mod mod due gcd since even lemma mod mod mod one see implies hence value distribution times times odd odd mod mod due gcd case value distribution obtained similar way omit details value distribution given times times note value distribution represented unified form proof completed lemma let notations hypothesises lemma value distribution given follows times times proof since lemma clear even odd hence lemma times times value distribution given gauss sums order unknown general however easily obtain value distributions cubic quartic gauss sums known omit details following begin investigate exponential sum lemma let positive integers denote gcd let qmi proof let implies using fourier expansion additive characters see equation let qmi proof lemma know otherwise hence assume hence implies otherwise since gcd system mod mod otherwise mod mod equivalent mod denote mod value distribution given follows lemma let notations lemma value distribution given follows times times proof proof similar lemma omit details weight distribution section give weight distribution defined equation special cases griesmer bound linear codes following lemma griesmer bound code denotes smallest integer larger equal case following determine weight distribution denote nqm since norm function nqm epimorphism two multiplicative groups trace function epimorphism two additive groups ker nqm ker note hence always assume section denote nqm bnqm basic facts additive characters bnqm nqm ybx ybx ybx ybx note norm function nqm epimorphism hence similarly ybx ybx ybx discussions obtain weight codeword bnqm bnqm equals equations hence lemma parameters optimal linear code respect griesmer bound however linear code new equivalent concatenated version simplex code weight distribution given following theorem let positive integers denote gcd let linear code defined equation linear code parameters weight enumerator given table table weight distribution code theorem weight frequency proof weight distributions obtained lemma equation easy 
verify dimension equals example let theorem optimal linear code according griesmer bound weight enumerator theorem almost optimal linear code according griesmer bound weight enumerator example let theorem linear code weight enumerator given case following determine weight distribution denote nqm clear ker nqm ker denote nqm bnqm basic facts additive characters bnqm nqm ybx ybx ybx ybx note section ybx discussions obtain weight codeword bnqm bnqm equals equations hence lemma parameters optimal linear code respect griesmer bound new mentioned weight distribution given following theorem let positive integers gcd gcd let linear code defined equation linear code parameters weight enumerator given table table weight distribution code theorem weight frequency proof weight distributions obtained lemma equation note dimension equals example let theorem almost optimal linear code according griesmer bound weight enumerator theorem nearly optimal linear code corresponding optimal linear codes parameters example let theorem linear code weight enumerator given theorem let positive integers gcd gcd let linear code defined equation linear code parameters weight distribution given table iii table iii weight distribution code theorem weight frequency proof proof completed lemma equation example let theorem linear code weight enumerator given dual code parameters shortened linear codes observed weights code theorems common divisor indicates code may punctured shorter one assume note implies hence defining set equation expressed every pair distinct elements obtain shortened linear code theorem directly obtain following result corollary let positive integers denote gcd let linear code defining set given equation weight linear code parameters enumerator given table table weight distribution code corollary weight frequency example let corollary optimal linear code according griesmer bound weight enumerator dual parameters optimal according applications section apply linear codes construct secret sharing schemes strongly regular graphs denote dual code code secret sharing schemes linear codes secret sharing schemes introduced shamir blakley first time secret sharing schemes used banking systems cryptographic protocols electronic voting systems control nuclear weapons shown linear code employed construct secret sharing schemes order describe secret sharing scheme linear code see need introduce covering problem linear codes support vector fnq defined codeword covers codeword support contains minimal codeword linear code nonzero codeword cover nonzero codeword covering problem linear code determine minimal codewords theorem know secret sharing scheme interesting access structure derived provided nonzero codeword linear code minimal weights linear code close enough nonzero codewords minimal described follows lemma let wmin wmax denote minimum maximum nonzero hamming weights linear code respectively wmin every nonzero codeword minimal codes theorem corollary wmin wmax mod wmin wmax mod code theorem wmin wmax mod wmin wmax mod code theorem wmin wmax discussions linear codes obtained paper used construct secret sharing schemes interesting access structures using framework strongly regular graphs linear codes connected graph vertices called strongly regular graph parameters regular valency number vertices joined two given vertices according two given vertices adjacent theory strongly regular graphs introduced bose first time code said projective minimum distance dual code least following lemma gives connection projective 
linear codes strongly regular graphs lemma projective linear code two nonzero weights equivalent strongly regular graph following parameters due lemma new projective linear codes yield new strongly regular graphs examples section show codes always projective particular find two classes projective codes following lemma let notations theorem linear code theorem projective linear code weight enumerator proof weight enumerator directly obtained theorem prove projective let denote numbers codewords hamming weight respectively denote first three pless power moments see note solving system hence minimum distance least proof completed lemma let notations corollary linear code corollary projective linear code weight enumerator proof weight enumerator directly obtained corollary prove projective let denote numbers codewords hamming weight respectively denote first three pless power moments see note solving system hence minimum distance least proof completed lemmas yield following theorem theorem let gcd exists strongly regular graph following parameters following theorem directly obtained lemmas theorem let exists strongly regular graph following parameters remark parameters strongly regular graphs theorems probably new comparing known ones literature concluding remarks paper presented construction linear codes determined weight distributions cases based gauss sums four classes linear codes obtained note linear codes flexible parameters probably new comparing known linear codes literature see known linear codes interesting construction produce optimal almost optimal codes codes used construct secret sharing schemes interesting access structures strongly regular graphs acknowledgments authors grateful reviewers editor valuable comments improved quality paper special thanks one reviewers pointing knowledge linear codes references references ashikhmin barg minimal vectors linear codes ieee trans inf theory anderson ding helleseth build robust shared control systems des codes cryptogr blakley safeguarding cryptographic keys proc nat comput conf bose strongly regular graphs partial geometries partially balanced designs pacific math baumert mceliece weights irreducible cyclic codes inf contr berndt evans williams gauss jacobi sums wiley sons company new york calderbank kantor geometry codes bull london math soc carlet ding yuan linear codes perfect nonlinear mappings secret sharing schemes ieee trans inf theory ding linear codes ieee trans inf theory ding wang coding theory construction new systematic authentication codes theoretical computer science ding zhou cyclic codes weight distributions discrete math ding ding class codes applications secret sharing ieee trans inf theory clerk delanote codes partial geometries steiner systems des codes cryptogr delsarte weights linear codes strongly regular normed spaces discrete math grassl bounds parameters various types codes avaliable http heng yue class binary linear codes three weights ieee commun letters heng yue two classes linear codes finite fields appli heng yue evaluation hamming weights class linear codes based gauss sums des codes cryptogr huffman pless fundamentals codes cambridge cambridge univ press codes error detection singapore world scientific yue class cyclic codes two distinct finite fields finite fields appli yue hamming weights duals cyclic codes two zeros ieee trans inform theory lidl niederreiter finite fields cambridge univ press cambridge macwilliams sloane theory error correcting codes amsterdam netherlands shamir share secret commun 
assoc comp mach; cao, two classes bent functions linear codes three four weights, cryptogr commun; yuan, ding, secret sharing schemes three classes linear codes, ieee trans inf theory; yang, yao, complete weight enumerators family linear codes, des codes cryptogr; zhou, fan, linear codes two three weights quadratic bent functions, des codes cryptogr; zhou, ding, class cyclic codes, finite fields appl | 7
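Connecting the secret-sharing application in the row above to a concrete test: a small sketch of the Ashikhmin-Barg sufficient condition w_min / w_max > (q - 1) / q, under which every nonzero codeword is minimal and the derived access structure behaves as described; the example weights below are hypothetical.

```python
from fractions import Fraction

def all_nonzero_codewords_minimal(w_min: int, w_max: int, q: int) -> bool:
    """Ashikhmin-Barg sufficient condition: if w_min / w_max > (q - 1) / q, every
    nonzero codeword of a linear code over GF(q) with these extreme weights is
    minimal, so Massey's secret sharing construction applies as described."""
    return Fraction(w_min, w_max) > Fraction(q - 1, q)

# Hypothetical two-weight example over GF(3): weights 12 and 15.
print(all_nonzero_codewords_minimal(12, 15, q=3))  # True, since 12/15 > 2/3
```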
efficient counting method colored triad census jeffrey lienerta laura koehlya felix christopher steven marcuma feb national human genome research institute national institutes health business school university oxford corresponding author abstract triad census important approach understand local structure network science providing comprehensive assessments observed relational configurations triples actors network however researchers often interested combinations relational categorical nodal attributes case desirable account label color nodes triad census paper describe efficient algorithm constructing colored triad census based part existing methods classic triad census evaluate performance algorithm using empirical simulated data undirected directed graphs results simulation demonstrate proposed algorithm reduces computational time approximately approach also apply colored triad census zachary karate club network dataset simultaneously show efficiency algorithm way conduct statistical test census forming null distribution realizations conditioned graph comparing observed colored triad counts expected demonstrate method utility discussion results homophily heterophily bridging simultaneously gained via colored triad census sum proposed algorithm colored triad census brings novel utility social network analysis efficient package keywords triad census labeled graphs simulation introduction triad census important approach towards understanding local network structure first presented isomorphism classes structurally unique triads preprint submitted xxx february possible directed network conduct triad census one simply counts occurrence structures without respect labeling nodes use node label color characteristic attribute interchangeably useful insofar specific triads combinations thereof may relate underlying social processes giving rise observed network example bridges triads one null dyad two dyads may important navigating social networks certain triads may less favorable based structural balance theory balanced see figure moreover variant triad census motif analysis investigates statistics various triad configurations motifs found wide application biology also important network structure nodal characteristics relate tie formation dissolution subject research homophily individuals similar attributes connected however homophily observed phenomenon process processes giving rise homophily varied often confound relationship networks outcomes difficult tease apart methodological advances stochastic models disentangle effects extent analyses attempted disentangle processes leading homophily structural processes triadic closure additionally coloring nodes network important question many graph theorists indeed represents major topic field although nodal characteristics triad census important rarely examined fully conjunction yet cases specific colored triads studied example study brokerage based triad structure group membership simultaneously approach used study brokerage dynamic networks well study examined specific colored triads based generational membership within families work authors showed ties observed different quantities expected based underlying null model none past research evaluated full census colored triads rather researchers focused instead specific colored triads priori expected relevant processes hand result foundational works exhaustive respect alternatives words previous research examining subset colored triads likely amount false negatives due examining every colored triad could 
addressed censusing colored triads examination node characteristics together local structure important provides opportunity simultaneously study occurrence triadic structure nodal attributes interactions instance certain colored triads may impermissible strict heterosexuals sexual contact networks impermissible triads would categorized observed due chance triad census potentially missing important social processes constraints play type network incorporating node coloring triad census pattern fully elucidated based methodological gap literature develop method census colored triads binary network arbitrary number colors due large numbers unique isomorphism classes number colors increases method requires computational efficiency addition mathematical accuracy well one often interested forming null distribution compare observed colored triad counts null distribution analytically solved one would likely census colored triads many simulated networks increasing need algorithm computationally efficient current efficient methods triad census exploit sparseness networks scale number edges increases time run algorithm faster number edges squared however methods exploit network sparseness inferring number null triads work colored case explicitly interrogate every triad variations within null triads due coloring therefore extend methodology based matrix algebra interrogates every triad method scales number nodes paper presents colored triad census computational complexity shows approach used large networks tested nodes colors relatively efficient time uses method many times create null distributions colored triad censuses form basis conditional uniform graph tests illustrate benefits analysis incorporating colored triad census using dataset zachary karate club algorithm since original appearance triad census number papers explored compute triad census network efficient manner although methods exist calculating triad census use quadratic algorithm presented efficient methods avoid interrogating null triads directly taking advantage sparseness graphs subsequent large number null triads known number total triads instead interrogate triads least one edge subtract count total number triads network arrive number null triads insufficient colored triad census null triads number algebraically determined moody algorithm employ limiting shortcut therefore use basis colored triad census algorithm additionally many networks sparse leverage computational techniques increasing efficiency sparse matrix operations reducing computational complexity method showed count triad isomorphism classes could derived using matrix algebra adjacency matrix graph derivatives review let adjacency matrix network aij tie exists node node let symmetrized matrix formed making edge reciprocal via eij max aij aji complement formed subtracting complete network adjacency matrix eij neither tie tie next mutual matrix made removing asymmetric edges mij mji aij aji finally matrix asymmetric edges calculated therefore cij aij aji based matrices moody demonstrates calculate number isomorphism classes case unlabeled graphs equivalently graph consisting nodes single color generally done multiplying either multiplication three matrices corresponding relevant edges triad interest two triads directly amenable process calculated via addition subtraction triad types respectively extend work case multiple colors introduce matrices respectively focal color matrix matrix transpose matrix matrix calculated evaluating color nodes rows indexing nodes focal 
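For concreteness, the edge-type matrices defined above (and the color matrices introduced just before this point) can be assembled as follows; the convention used here for the asymmetric matrix, c_ij = a_ij (1 - a_ji), and the diagonal-indicator form of the color matrices are our assumptions about the intended definitions.

```python
import numpy as np

def edge_type_matrices(A):
    """Edge-type matrices used by the matrix-algebra census.  A is a binary
    directed adjacency matrix with zero diagonal.
      E   symmetrized ties:  e_ij = max(a_ij, a_ji)
      Eb  null dyads:        1 off the diagonal where neither direction has a tie
      M   mutual ties:       m_ij = a_ij * a_ji
      C   asymmetric ties:   assumed convention c_ij = a_ij * (1 - a_ji),
                             a one-way tie from i to j
    """
    n = A.shape[0]
    off_diag = 1 - np.eye(n, dtype=int)
    E = np.maximum(A, A.T)
    Eb = (1 - E) * off_diag
    M = A * A.T
    C = A * (1 - A.T)
    return E, Eb, M, C

def color_matrices(colors, palette):
    """Diagonal indicator matrix per color (our assumed form of the paper's
    color matrices): (X_c)_ii = 1 iff node i has color c, zero elsewhere."""
    colors = np.asarray(colors)
    return {c: np.diag((colors == c).astype(int)) for c in palette}
```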
color composed following way function returning color node matrix transpose matrix algorithm works using matrices evaluate switch edges nodes focal colors ends tails edges adjacency matrix network adapt triad census nomenclature appending colors name triad colors ordered top node proceeding clockwise figure arbitrarily adapted orientation triads triad census figure computational reasons orientation important triads orientation may longer isomorphic color introduced figure makes possible count unambiguously name unique colored triads therefore triad consisting symmetric dyad null dyads top node color node color node color distinct triad coloring nodes identical previous triad following general formula arbitrary triad arbitrary coloring triplet refers multiplication trace function arbitrary triad color triplet function returning matrix specific type edge nodes triad example triad first edge top node going clockwise symmetric edge node one node two figure case would matrix symmetric matrix sandwiching color matrices would turn proper edges nodes one two specified colors edge asymmetric one direction edge figure used instead force edge proper direction point redundant triads due certain colored triads isomorphic instance isomorphic would removed checking isomorphisms based matrix row column permutations triad two colored matrices identical row column permutations isomorphic one removed arbitrarily decide discard triad whose coloring triplet name comes second alphanumerically noted removing way computationally expensive particularly number colors nodes grows large therefore shorten process performing colors storing unique isomorphism classes leaves unique isomorphism classes colored triads accessed linear time number unique isomophism classes given number colors shown ismorphism classes triad census classes separate four types colored triads depending many structurallydistinct positions triad two ends edge triad one another distinct node edges calculation number isomorphism class arbitrary number colors shown table combinatoric term row together respective leading permutation coefficients counts number colored triads three two one unique color respectively example network three colors classes one accessible permutation three colors present triad six ways two colors one way one color triad isomorphism classes colored table expression number isomorphism classes within triad class number colors numbers summed isomorphism classes total number colored isomorphism classes triads colors returned similarly done undirected triads solely summing triads observed undirected case table reports total number colored triads undirected directed networks range clearly number isomorphism classes grows quite quickly increases algorithm implemented package publicly available linked paper via github https algorithmic performance theoretically basic matrix multiplication used algorithm runs computational complexity scales number nodes squared matrix multiplication involved algorithm scaling number colors cubed comes number distinct colored triads number colors number directed colored triads number undirected colored triads table number colored triad isomorphism classes directed undirected networks ranging algorithm needs evaluate taking advantage methods matrix multiplication using sparse matrices appropriate due sparse nature social networks complexity reduced something closer log test efficiency algorithm apply networks ranging size number colors ranging holding average density constant creating graphs parameters 
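A minimal illustration of the trace-based counting formula just described, specialized to the fully mutual (complete) triad: for a color triple (a, b, c) the count is tr(X_a M X_b M X_c M), divided by a symmetry factor when colors repeat. The factors 1, 2, 6 below apply only to this fully symmetric triad; other triad types require the paper's isomorphism-class bookkeeping, and this helper is a sketch rather than the released package.

```python
from collections import Counter
import numpy as np

def colored_complete_triad_census(M, colors, palette):
    """Count fully mutual triads (all three dyads mutual) for every multiset of
    node colors via trace(X_a M X_b M X_c M).  M is the mutual-tie matrix with
    zero diagonal; X_c are diagonal color indicators."""
    colors = np.asarray(colors)
    X = {c: np.diag((colors == c).astype(int)) for c in palette}
    counts = {}
    for a in palette:
        for b in palette:
            for c in palette:
                key = tuple(sorted((a, b, c)))
                if key in counts:                 # color multiset already handled
                    continue
                t = np.trace(X[a] @ M @ X[b] @ M @ X[c] @ M)
                reps = max(Counter(key).values())
                sym = {1: 1, 2: 2, 3: 6}[reps]    # symmetry factor for repeats
                counts[key] = int(round(t / sym))
    return counts
```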
runtime algorithm parameters seen figure general increasing results constant increases log runtime expect based theoretical computational complexity expected also observe super linear increase log runtime increases although super linear still well linear curve would exist used matrix multiplication optimized sparse matrices finally observe changes decreases runtime going nodes also due computational time involved initializing sparse matrices storing operating sparse matrices unexpected perfectly optimized therefore algorithm would use standard matrix multiplication small networks switch sparse methods larger networks however gains would minimal generally seconds would require additional logical steps check network size minimizing gain therefore use sparse matrix methods network sizes empirical use example show empirical value algorithm use zachary karate club social network historical network describes social relationships members university karate club ties exist members overlapped least one eight contexts representing undirected relations relations varied terms likely strength association likely weak end spectrum enrolled class university likely strong end studio additionally three ties specific activities instructor member factions identified node attribute taking one five mutually exclusive values strongly associated president weakly associated president neutral weakly associated instructor strongly associated parttime instructor labeled respectively labels placed ordinal scale quantify members direction strength alignment undirected network five colors represents case rich number colored triads detailed conclusions drawn using proposed algorithm general undirected directed networks initially ran colored triad census social network using faction nodal attribute gave empirical observed colored triad census determine whether triads observed less often expected chance construct null model choice null model important ramifications null distribution triads chose model edge formation function probability ties nodes specific attributes null model conditioned uniform random graph distribution based probabilities edges nodes particular color combinations matrix comprises empirical probabilities ties groups diagonal representing tie probabilities means significantly colored triads observed due network effects beyond homophily heterophily networks generated matrix via bernoulli random graph process null model therefore conditions graph size distribution node factions probability ties within factions generating networks null model observe whether colored triad counts deviate expected based marginal distribution faction mixing condition parameters observe statistical deviations colored triad census indicates structure network dependent parameters conditioned moreover triad expected number variance calculated assuming tie follows binomial distribution reasonable assumption binary social network data observed number compared numerical results extracted exact binomial test equates following probability expectation variance example colored triad aij aij aij probability equation based three colors involved triad standard approach continues assume edges graph independent expected value specific triad multiply probability single one triads total number colored triplets exist graph equation expectation triad returns number unique colors number nodes color graph also take nodes one two three time depending many times color repeats represented expectation therefore follows binomial distribution variance 
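The binomial approximation described above for the expected count and variance of a colored triad under the block-model null can be sketched as follows, specialized to the undirected complete triad; the function name, the dictionary form of the tie-probability table, and the brute-force enumeration of eligible node triples are ours.

```python
from itertools import combinations

def triangle_null_moments(colors, P, triple):
    """Approximate mean and variance of the count of fully connected undirected
    triads whose node colors form the multiset `triple`, under a block model in
    which a tie between nodes of colors a and b appears independently with
    probability P[a][b] (P must be filled in symmetrically).  Follows the
    binomial approximation above, which treats distinct node triples as
    independent."""
    a, b, c = triple
    want = tuple(sorted(triple))
    n_triples = sum(1 for t in combinations(range(len(colors)), 3)
                    if tuple(sorted(colors[i] for i in t)) == want)
    p = P[a][b] * P[b][c] * P[a][c]
    return n_triples * p, n_triples * p * (1 - p)

# Hypothetical two-faction example: within-faction ties denser than between.
P = {"x": {"x": 0.4, "y": 0.1}, "y": {"x": 0.1, "y": 0.3}}
print(triangle_null_moments(["x", "x", "y", "y", "y"], P, ("x", "x", "y")))
```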
follows accordingly equation however show method also works null distributions analytically solvable construct null distribution based simulated draws null model number trials increases simulated null distribution colored triad census asymptotically approach analytical solution shown trials draw random networks null distribution run triad census networks comparing observed count null distribution allows get approximate conditional uniform graph test test colored triad turn results results figure heatmap approximate associated binomial exact test null triad clustered triad colored triplet returned proposed algorithm use clustering algorithm group color triplets similar profiles across types triads assists identifying trends across different colored triads leading conclusions would likely missed colored triads individually examined find particular importance three branch cutpoints clustering algorithm color triplets first branch clustering algorithm figure separates four color triplets comprising colored triads pattern triads triads results show color triplets less clustered expected chance color triplets contain nodes two factions first two nodes strongly aligned instructor indicates aligned likely form ties one another members factions exception group two nodes likely form tie one members member even case complete triad still observed less expected chance particular result perhaps unsurprising since members close alignment aligning president therefore given tendency towards homophily likely overlap though less strongly members faction hence figure isomorphism classes triads orientation used respect color numbering colors added triads labeled starting top node proceeding clockwise algorithmic runtime log running time colors colors colors colors colors colors colors colors log nodes figure runtime algorithm networks ranging size nodes orders magnitude one ten colors runtimes generated using running windows intel ghz chip ram count value color key histogram szszs zsn zszszs zsn zsh zszw zsn szw szw zszs zszw szw zszw zszs zsh zszw zsh zsh zszs szs szs zsh szw zsh szs szs szs szw szw szw zsh zwzwn zwzwzw zwzwhw zwnn zwnhw zwhwhw nnn nnhw nhwhw hwhwhw nzwzw nzwn nzwhw nzwzs hwzwzw hwzwn hwzwhw hwzwzs hwnn hwnhw hwnzs zwnzs zwzwzs zwhwzs nnzs nhwzs hwhwzs szw szw figure heatmap colored triads corresponding often observed empirical networks relative null distribution columns separate triads based man configuration rows separate triads based triplet colors standard clustering algorithms used create dendrograms white space indicates redundant isomorphism classes gray boxes either triads observed network networks null distribution therefore undefined pseudo pseudo three labels correspond three breakpoints clustering separate meaningful groups group four color triplets exhibiting homophily nodes group colored triplets exhibiting low clustering heterogeneous nodes group colored triplets show potential significant amounts bridging second branching point clustering figure separates group color triplets triad triads observed much expected triads triplets question nodes different factions first second position edge triad first second node triplet figure means triplets first edge less likely expected chance lack formation first edge subsequently hampers formation edge second third nodes triplet triad first two nodes triplets often two factions least distance two away indicating members faction likely overlap members disparate faction put another way pattern triads shows lack faction heterophily third 
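The grouping of color triplets by similar deviation profiles, as used in the heatmap discussion above, can be reproduced with standard hierarchical clustering; in the sketch below the per-cell statistic (for example a z-score or log p-value against the null), the linkage method, and the number of groups are all assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_triplet_profiles(deviation, n_groups=3, method="average"):
    """Group color triplets with similar deviation profiles across triad types.
    `deviation` is a (triplets x triad types) array of a per-cell statistic such
    as a z-score against the null; rows containing undefined cells (the gray
    boxes in the heatmap) should be masked or imputed before clustering."""
    Z = linkage(deviation, method=method)                  # cluster the rows
    labels = fcluster(Z, t=n_groups, criterion="maxclust")
    return labels, Z
```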
branch point unlabeled primarily singling group color triplets observed network draw conclusions prevalence fourth branch point figure however distinguishes group five triplets triad triad means edge first two nodes less likely expected chance edge occur second edge occurs often expected chance triplets begin member triad case effectively bridging tie another interestingly bridging node anything primarily consigned role branch discussed third node another member four five triplets indicates members karate club often overlap members factions provided second person also often overlapped another although examples show homophily bridging analyzing full colored triad census allows draw conclusions looking colored triads particular homophily mostly story nodes bridging primarily nodes triad factions comprising three nodes faction observed often expected chance cases different implications results nodes homophily strengthened nodes often overlap members factions also strongly overlap one another may partially artifact types overlap stated three overlap activities involve direct participation instructor studio corresponding groups president means may opportunity overlap one another due solely structure data hand triplet members also triad although triads seem indicate bridging members figure given members also densely connected practical effect potential bridging ties reduced observing joint effect homophily bridging ties possible complete colored triad census neither standard triad census brokerage analysis would revealed intricacies results sum clear results colored triad census allows one examine multiple trends simultaneously often done isolated analyses including homophily heterophily brokerage importantly also allows generalizations based clustering various triads color triplets well specific results based individual triads manner colored triad census yield results multiple structural levels simultaneously examining local structure nodal attributes net alternatives involving mixtures node coloring triadic configurations limitations limitations method first computationally efficient relative existing methods including brute force counting networks nodes take day run using proposed algorithm colored triad census however easily paralellizable process partitioning separate algebraic steps example real time necessary run analysis greatly reduced taking advantage feature time needed parallelized colored triad census approximately inversely proportional number computational cores used calculation plus overhead second complete explication colored triads benefits potential pitfalls examining triads simultaneously eliminates possibility missing interesting results specific colored triad excluded however sheer number colored triads means making complete sense results difficult due information overload even results carefully examined colored triads conceivable one might miss important result colored triads directed network matter meticulous examiner eye however use standard clustering algorithms heatmaps may help ease interpretation results coarse general groups triads individual colored triads perspective conclusions paper extended matrix algebra methods calculate colored triad census network directed undirected arbitrary number colors relatively computationally efficient manner shown number mathematical results regarding colored triad census including generalized equation arbitrary colored triad number isomorphism classes arbitrary numbers colors expectation variances colored triads analyzed empirical 
social network using algorithm calculated approximate colored triad based analytic exact binomial test less complex null distributions approximately simulation complex null distributions also shown type conclusions drawn results observing results would feasible many currently available methods one additional benefit method directly used counting tool sufficient statistics network inference models exponential random graphs ergm colored triad census essentially allows one simultaneously evaluate effect local structure node attribute network structure ergm building previous work researchers explicated ergms capacity including triad census believe colored triad census useful technique efficient implementation social networks research showing continued importance triad census even era stochastic models complex networks acknowledgements references appendix variable functional definitions variable function notation description variable function adjacency matrix symmetrized adjacency matrix complement symmetrized adjacency matrix adjacency matrix including mutual ties adjacency matrix including asymmetric ties coloring matrix color function returning color node function returning matrix edge triad nodes arbitrary colored triad man configuration colored triplet function returning number unique colors given colored triad function returning number times color appears colored triad probability observing triad expectation triad binomial model variance triad binomial model table list variables constants functions defined manuscript | 8 |
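The simulation-based version of the conditional uniform graph test summarized in the row above can be organized as in the sketch below; `census_fn` is a stand-in for the released census routine, and the block-model null, the number of draws, and the two-sided p-value convention are illustrative assumptions.

```python
import numpy as np

def empirical_p_values(A_obs, colors, P, census_fn, n_sims=1000, seed=None):
    """Approximate conditional-uniform-graph test: simulate undirected graphs
    from the block-model null (tie probability P[color_i][color_j]), run the
    colored triad census on each draw, and return two-sided empirical p-values.
    `census_fn(A, colors) -> dict` maps colored-triad labels to counts."""
    rng = np.random.default_rng(seed)
    n = len(colors)
    probs = np.array([[P[colors[i]][colors[j]] for j in range(n)] for i in range(n)])
    observed = census_fn(A_obs, colors)
    null = {k: np.empty(n_sims) for k in observed}
    for s in range(n_sims):
        upper = np.triu(rng.random((n, n)) < probs, k=1)   # undirected draw
        A_sim = (upper | upper.T).astype(int)
        sim = census_fn(A_sim, colors)
        for k in null:
            null[k][s] = sim.get(k, 0)
    return {k: min(1.0, 2 * min(np.mean(null[k] <= v), np.mean(null[k] >= v)))
            for k, v in observed.items()}
```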
compressive sampling ensembles correlated signals ali ahmed justin dec draft december abstract propose several sampling architectures efficient acquisition ensemble correlated signals show without prior knowledge correlation structure architectures different sets assumptions acquire ensemble rate prior sampling analog signals diversified using simple implementable components diversification achieved injecting types structured randomness ensemble result subsampled reconstruction ensemble modeled matrix observed undetermined set linear equations main results show matrix recovered using convex program total number samples order intrinsic degree freedom ensemble heavily correlated ensemble fewer samples needed motivate study discuss ensembles arise context array processing introduction paper considers exact reconstruction correlated signals samples collected subnyquist rate propose several implementable architectures derive sampling theorem relates bandwidth priori unknown correlation structure sufficient sampling rate successful signal reconstruction consider ensembles signals output sensors bandlimited frequencies see figure entire ensemble acquired taking uniformly spaced samples per second channel leading combined sampling rate show signals correlated meaning ensemble written closely approximated distinct linear combinations latent signals net sampling rate reduced approximately using coded acquisition sampling architectures propose blind correlation structure signals structure discovered signals reconstructed architecture involves different type analog diversification ensures signals sufficiently spread point sample captures information ensemble ultimately measured actual samples individual signals rather different linear combinations combine multiple signals capture information interval time later show samples expressed linear measurements matrix course one second aim acquire matrix comprised samples ensemble taken nyquist rate proposed sampling architecture produces series linear combinations entries matrix conditions matrix effectively recovered set linear measurements object intense study recent literature mathematical contributions paper show conditions met systems clear implementation potential school electrical computer engineering georgia tech atlanta georgia email alikhan jrom work supported nsf grant onr grant grant packard foundation draft ahmed romberg december motivation studying architectures comes classical problems array signal processing applications one narrowband signals measured multiple sensors different spatial locations narrowband signals significant bandwidth modulated high carrier frequency making heavily spatially correlated arrive array correlation review detail section systematically exploited spatial filtering beamforming interference removal estimation multiple source separation activities depend estimates correlation matrix rank matrix typically related number sources present compressive sampling used array processing past sparse regularization used direction arrival estimation long sampling theorems started make theoretical guarantees concrete results along recent works including show exploiting structure array response free space narrowband signals consists samples superposition small number sinusoids used either doa estimate reduce number array elements required locate certain number sources single sample associated sensor acquisition complexity scales number array elements paper exploit structure different way goal completely reconstruct timevarying 
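As a toy version of the correlated-ensemble model introduced above, in which each observed signal is approximately a linear combination of a few latent bandlimited signals, the sketch below forms the matrix of Fourier-series coefficients and confirms that its rank is bounded by the number of latent signals; all dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

M, K, W = 64, 4, 257          # hypothetical sizes: M observed signals,
                              # K latent signals, W Fourier coefficients each
B = rng.standard_normal((M, K))            # unknown mixing / correlation structure
S = (rng.standard_normal((K, W))           # Fourier coefficients of the K latent
     + 1j * rng.standard_normal((K, W)))   # bandlimited signals
X = B @ S                                  # discretized ensemble, one row per sensor

print(np.linalg.matrix_rank(X))            # at most K = 4: the quantity the
                                           # sampling rate should scale with
```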
signals array elements structure imposed ensemble general spatial spectral sparsity previous work ask signals correlated priori unknown manner ensemble sampling theorems remain applicable even array response depends position source complicated way moreover reconstruction algorithms indifferent spatial array response actually long narrowband signals remain sufficiently correlated paper organized follows sections describe signal model motivation problems array processing section introduce components corresponding mathematical models use sampling architectures section present sampling architectures show measurements taken correspond generalized measurements matrix state relevant sampling theorems numerical simulations illustrating theoretical results presented section finally section section provide derivation theoretical results notation use upper lower case bold letters matrices vectors respectively scalars represented upper lower case letters notation denotes row vector formed taking hermitian transpose column vector linear operators sets represented using script letters use denote set notation denotes set denotes matrix ones diagonal positions indexed zeros elsewhere given two matrices denote matrix vec vec vec vec column vectors formed stretching columns respectively denotes transpose use usual kronecker product use denote vector ones lastly operator refers expectation operator represents probability measure signal model signal model illustrated figure denote signal ensemble set individual signals conceptually may think matrix finite number rows row containing bandlimited signal underlying assumption every signal ensemble approximated linear combination underlying independent signals smaller ensemble write draft ahmed romberg december figure model ensemble signals correlated meaning signals closely approximated linear combination underlying signals write signals tall matrix capturing correlation structure multiplied ensemble latent signals matrix texpoint fonts used emf samples inherits structure ensemble read texpoint manual delete box matrix entries use convention fixed matrices operating left signal ensembles simply mix signals equivalent structure impose individual signals bandlimited keep mathematics clean take signals periodic however results extended signals discussed shortly begin natural way discretize problem exists know signal captured exactly samples bandlimited periodic signal ensemble written complex symmetric ensure real capture perfectly taking equally spaced samples per row call matrix samples knowing every entry matrix knowing entire signal ensemble write matrix whose rows contain fourier series coefficients signals normalized discrete fourier matrix entries observe hence inherit correlation structure ensemble moving observe impose dimensional subspace structure rank min take underlying independent signals known sinusoids frequencies however interested pertinent challenging case case underlying independent signals known advance main contribution paper leverage unknown correlation structure reduce sampling rate lastly interest readability technical results assume without loss generality bandwidth signals greater number signals correlated signal model considered compressive sampling multiplexed signals two multiplexing architectures proposed sampling theorem proved dictated minimum number samples exact recovery signal ensemble paper presents sampling architectures use separate adc channel rigorously prove adcs operate roughly draft ahmed romberg december optimal sampling 
rate guarantee signal recovery types correlated signal models exploited previously achieve gains sampling rate example shows two signals related sparse convolution kernel reconstructed jointly reduced sampling rate signal model considers multiple signals residing fixed subspace spanned subset basis functions known basis shows sampling rate successfully recover signals scales number basis functions used construction signals paper also show sampling rate scales number independent latent signals without knowledge basis applied treatment results similar flavor refer reader shown later observe signal ensemble limited set random projections signal recovery achieved nuclear norm minimization program related work considers case given random projections signal find subspace belongs solving series programs extension signals end section noting many ways problem might discretized using fourier series convenient two ways easily tie together notion signal bandlimited limited support fourier space sampling operators representations fourier space make straightforward analyze practice however recovery technique extended signals windowing input representing finite interval using one number basis expansions low rank structure preserved linear representation also possible interested performing ensemble recovery multiple time frames would like recovery transition smoothly frames might consider windowed fourier series representations lapped orthogonal transform carefully designed basis functions tapered sinusoids get something close bandlimited signals truncating representation certain depth remain orthonormal also possible adjust recovery techniques allow measurements span consecutive frames yielding another natural way tie reconstructions together framework similar sparse recovery described detail applications array signal processing one application area ensembles signals play central role array processing narrowband signals section briefly review ensembles arise central idea sampling wavefront multiple locations space well time leads redundancies exploited spatial processing concepts general common applications diverse surveillance radars underwater acoustic source localization imaging seismic exploration wireless communications essential scenario multiple signals emitted different locations signals occupies bandwidth size modulated carrier frequency signals observed receivers array rough approximation complex multiples one another close approximation observed signals lie subspace dimension close one subspace determined location source redundancy observations array elements precisely causes ensemble signals low rank rank ensemble determined number emitters conceptual departure discussion previous sections see emitter may responsible subspace spanned number latent signals greater one still small array large number appropriately spaced elements advantageous even relatively small number emitters present observing multiple delayed versions signal draft ahmed romberg december allows perform spatial processing beamform enhance null emitters certain angles separate signals coming different emitters resolution perform spatial processing depends number elements array spacing main results paper give guarantees well spatial processing tasks performed rather say correlation structure makes tasks possible used lower net sampling rate time entire signal ensemble reconstructed reduced set samples spatial processing follow discuss detail low rank ensembles come simplicity discussion center linear arrays free space need 
signal ensemble lie low dimensional subspace need know subspace may beforehand essential aspects model extend general array geometries channel responses channels suppose signal incident array plane wave angle array element observes different shift signal denote seen array center origin figure element distance center sees sin signal consists single complex sinusoid delays translated different complex linear multiples signal sin case signal ensemble write steering vector complex weights given decomposition signal ensemble makes clear spatial information coded array observations instance standard techniques estimating direction arrival involve forming spatial correlation matrix averaging time rxx column space rxx correlate steering vector every direction see one comes closest matching principal eigenvector rxx ensemble remains low rank emitter small amount bandwidth relative larger carrier frequency take bandlimited closely correlated one another standard scenario array elements uniformly spaced along line make statement precise using classical results spectral concentration case steering vectors equivalent integer spaced samples signal whose fourier transform bandlimited frequencies sin bandwidth less thus dimension subspace spanned within good approximation figure illustrates particular example plot shows normalized eigenvalues matrix raa fixed values ghz mhz equals speed light eigenvalues within factor largest one fair say rank signal ensemble small constant times number narrow band emitters using complex numbers make discussion smoothly real part signal ensemble rank cos sin term draft ahmed romberg december kth largest eigenvalue sin figure plane wave impinges linear array free space wave pure tone time responses element simply phase shifts one another eigenvalues raa scale normalized largest eigenvalue defined electromagnetic signal bandwidth mhz carrier frequency ghz array elements spaced half apart even signal appreciable bandwidth signals array elements heavily correlated effective dimension case architectural components addition converters proposed architectures use three standard components analog multipliers modulators linear filters signal ensemble passed devices result sampled using converter adc taking either uniformly spaced samples samples final outputs acquisition architectures avmm lti filter adc figure analog multiplier avmm takes random linear combinations input signals produce output signals action avmm thought left multiplication random matrix ensemble intuitively operation amounts distributing energy ensemble equally across channels modulators multiply signal analog random binary waveform disperses energy fourier transform signal random lti filters randomize phase information fourier transform given signal convolving analog distributes energy time finally adcs convert analog stream information discrete form use uniform sampling devices architectures analog multiplier avmm produces output signal ensemble input signal ensemble matrix whose elements fixed since matrix operates pointwise ensemble signals sampling output applying matrix draft ahmed romberg december samples sampling commutes application recently avmm blocks built hundreds inputs outputs bandwidths megahertz use avmm block ensure energy disperses less evenly throughout channels random orthogonal transform highly probable signal contain amount energy regardless energy distributed among signals formalized lemma allowing deploy equal sampling resources channel ensuring resources quiet channels wasted second 
component proposed architecture modulators simply take single signal multiply fixed known signal take binary waveform constant time intervals certain length waveform alternates nyquist sampling rate take samples write vector samples containing samples diagonal matrix whose entries samples choose binary sequence randomly generates amounts random matrix following form probability independent conceptually modulator disperses information entire band allows acquire information smaller rate filtering shown section compressive sampling architectures based random modulator analyzed previously literature principal finding input signal spectrally sparse meaning total size support fourier transform small percentage entire band modulator followed filter adc takes samples rate comparable size active band architecture implemented hardware multiple applications third type component use preprocess signal ensemble linear lti filter takes input convolves fixed known impulse response assume complete control even though brushes aside admittedly important implementation questions periodic bandlimited write action lti filter circular matrix operating samples first row consists samples vector samples signal obtained output filter make repeated use fact diagonalized discrete fourier transform normalized discrete fourier matrix entries diagonal matrix whose entries vector scaled version fourier series coefficients generate impulse response use random sequence fourier domain particular take draft ahmed romberg december prob uniform symmetry constraints imposed hence conceptually convolution disperses signal time maintaining fixed energy note orthonormal matrix convolution random pulse followed also analyzed compressed sensing literature random filter created fourier domain following filter adc samples random locations produces universally efficient compressive sampling architecture number samples need recover signal active terms unknown locations fixed basis scales linearly logarithmically main results sampling architectures main contribution paper design theoretical analysis sampling architecture section enables acquisition correlated signals state sampling theorem claims exact reconstruction signal ensemble using much fewer samples compared dictated sampling theorem proof theorem involves construction dual certificate via golfing scheme show minimization recovers signal ensemble theorem also independent interest matrix recovery result form measurement ensemble begin straightforward architecture section minimizes sample rate correlation structure known combine components last section specific way create architectures provably effective different assumptions signal ensemble main sampling architecture section uses random modulators prior adcs architecture effective energy ensemble approximately uniformly dispersed across time moreover expect signal energy dispersed across array elements avmm upfront mix signals section present variation architecture ensembles required dispersed priori instead ensemble preprocessed lti filters avmm ensure dispersion energy across time array elements fixed projections known correlation structure mixing matrix ensemble known straightforward way exists sample ensemble efficiently let singular value decomposition matrix orthogonal columns diagonal matrix orthogonal columns efficient way whiten ensemble sample resulting signals rate scheme shown figure written multiplication matrix matrix containing nyquist samples signals respectively rows discretized signal ensemble simply knowing 
correlation structure ensemble hence using sinc interpolation samples recovered using samples observe optimal sampling rate scales linearly many interesting applications correlation structure ensemble known time acquisition paper design sampling strategies blind correlation structure able achieve signal reconstruction near optimal sampling rate nonetheless introducing avmms filters draft ahmed romberg december adc avmm adc figure known correlation structure optimal sampling strategy whiten ensemble sample sample resultant signal rate total samples per second optimal actual number degrees freedom underlying independent signals bandlimited modulators intuitively randomness introduced components disperses limited information correlated ensemble across time array elements resultantly adcs collect generalized samples turn enable reconstruction algorithm operate successfully regime architecture random sampling correlated signals architecture presented section shown figure consists one sampling nus adc per channel adc takes samples randomly selected locations locations chosen independently channel channel time interval nus adc takes input signal returns samples average sampling rate channel collectively nus adcs return random samples input signal ensemble uniform grid nus adc nus adc nus adc figure signals recorded sensors sampled separately independent random sampling adcs samples uniform grid average rate samples per second sampling scheme takes average total samples per second equivalent observing entries matrix samples random sampling model equivalent observing randomly chosen entries matrix samples defined problem exactly problem given randomly chosen entries matrix enable fill missing entries incoherence assumptions matrix since svd draft ahmed romberg december coherence defined max max max brevity sometime drop dependence interest readability assume without loss generality rest write bandwidth signal larger least equal number result noiseless case asserts solution minimization maps randomly chosen entries exactly equals high probability result indicates sampling rate scales within log factors number independent signals rather total number signals ensemble measurements contaminated additive measurement noise result suggest solution modified minimization satisfies constant depends coherence defined discussed number samples matrix completion scale linearly coherence parameter quantifies distribution energy across entries small matrices even distribution energy among entries see details signal reconstruction application investigation means successful recovery smaller sampling rate would suffice signals across time array elements one avoid dispersion requirement preprocessing signals avmm filters adopt strategy construction main sampling architecture paper architecture random modulator correlated signals efficiently acquire correlated signal ensemble architecture shown figure follows approach first step avmm takes input produce output signals meaning output signals inputs take signals output replicas input signals without amounts mixing matrix normalization ensures take general random orthogonal next sampling architecture second step output signals undergo analog preprocessing involves modulation filtering modulator takes input signal multiplies fixed known take binary waveform constant interval length intuitively modulation results diversification signal information frequency band width diversified analog signals processed filter implemented using integrator see details resultant signals 
acquired using uniformly spaced samples per second sampling theorem show later suffices take ratio number output input reasonably small however suggested simulations seems always enough believe merely technical requirement arising due proof method draft ahmed romberg december main sampling result theorem shows exact signal reconstruction achieved regime particular roughly require factor nyquist rate intuitively acquisition possible signals diversified across frequency using random demodulators therefore every sample provides generalized global information lti low pass lti low pass adc adc lti low pass adc figure architecture randomly modulated sampling correlated signals replicated times produce output signals amounts choosing mixing matrix practice suffices signals preprocessed analog using bank modulators filters resultant signal sampled uniformly adc channel operating rate samples per second net sampling rate samples per second system model section measured samples linear measurements unknown matrix show signal reconstruction samples regime corresponds recovering approximately matrix set linear equations input signal ensemble mixed using avmm produce ensemble signals let denote individual signals output avmm since mixing linear operation every signal ensemble bandlimited case therefore dft coefficients mixed signals simply signal output avmm multiplied corresponding binary sequence alternating rate binary sequences generated randomly independently output modulation nth channel modulated outputs filtered using integrator integrates interval width result sampled rate using adc sample acquired adc nth channel integration operation commutes modulation process hence equivalently integrate signals interval width treat samples ensemble initial development section may resemble noted compared signal structure exploited correlations among signals sparsity leads completely different development towards end section draft ahmed romberg december entries matrix defined bracketed term representing entries matrix filter defined denote diagonal matrix containing along diagonal important note invertible vanish view clear clf aclf clf inherits structure since already carried integration intervals length action modulator followed integration simply reduces randomly independently flipping every entry adding consecutive entries given row produce value sample acquired adc mathematically write concisely defining vector supported index set size simplicity factor support set entries vector independent binary random variables zeros moreover assume rows notations place concisely write sample nth branch shows samples taken adc sampling architecture figure linear measurements underlying matrix defined rank exceed recalling section constitutes number linearly independent signals ensemble objective recover linear measurements amounts reconstructing rate sampling matrix recovery define linear map length vector containing linear measurements entries formally mainly interested scenario linear map determined number measurements much smaller number unknowns therefore uniquely determine true solution solve penalized optimization program argmin subject nuclear norm sum singular values nuclear norm penalty encourages solution low rank concrete performance guarantees linear map obeys certain properties case noisy measurements slight modification result argument factor details see draft ahmed romberg december bounded noise solve following quadratically constrained convex optimization program argmin subject optimization program 
also provably effective see example suitable sampling theorem exact stable recovery unknown matrix assume reduced form svd matrices left right singular vectors respectively diagonal matrix singular values define coherences max max max max diagonal matrix containing ones diagonal positions indexed may sometime work notations drop dependence clear context easily verified similar manner one show see notice using fact upper bound also follows finally similar techniques also show one attach meaning values coherences context sampling application consideration example smallest value achieved energy roughly equally distributed among columns indexed context sampling problem means energy signal ensemble dispersed equally across time similarly coherence quantifies spread signal energy across array elements measures dispersion energy across time array elements let define max ready state main result dictates minimum sampling rate adcs needs operated guarantee reconstruction signal ensemble theorem correlated signal ensemble acquired using sampling architecture figure operating adcs rate universal constant depends fixed parameter addition ratio number output input signals avmm must satisfy log numerical constant exact signal reconstruction achieved probability least solving minimization program result indicates well spread correlated signals acquired operating adc figure rate times within log factors moreover also require number output signals avmm larger number input signals log factor however believe merely artifact proof technique experiments also corroborate successful recovery always obtained satisfying even draft ahmed romberg december also note result theorem assumes without loss generality case sufficient sampling rate acc obtained replacing another important observation sampling rate scales linearly coherence implying sampling architecture effective correlated signals concentrated across time remedy shortcoming preprocessing step using random filters mixing avmm added ensure signals across time array elements stable recovery realistic scenario measurements almost always contaminated noise compactly expressed using vector equality case noise bounded following template proof shown conn obeys ditions theorem solution high probability details see similar stability result theorem upper bound suboptimal factor min theory improve suboptimal result show effectiveness nuclear norm penalty analyzing different estimator argmin estimator proposed theoretically shown obey essentially optimal stable minimizer recovery results using fact simple soft thresholding singular values matrix one show estimate max addition left right singular vectors matrix respectively corresponding singular value comparison estimator matrix lasso use knowledge known distribution instead minimizes empirical risk knowing distribution fact holds case replace expected value empirical risk obtain estimator completing square although klt estimator easier analyze shown give optimal stable recovery results theory empirically perform well matrix lasso quantify strength noise vector norm random vector define inf scaler random variables simply take definition norm finite entries subgaussain proportional variance entries gaussian assume entries noise vector obey following result order draft ahmed romberg december theorem fix given measurements contaminated additive noise obeys statistics solution max probability least whenever universal constant depending roughly speaking stable recovery theorem states nuclear norm penalized estimators stable 
presence additive measurement noise results theorem derived assuming random statistics contrast stable recovery results compressed sensing literature assume noise bounded noise vector introduced earlier give brief comparison theorem stable recovery results compare result follows results improve upon results factor also compare stable recovery results stable recovery results derived result roughly states linear operator satisfies matrix rip solution obeys result essentially optimal stable recovery result comparison result also optimal however prove different estimator statistical bound noise term addition also donot require matrix rip generally required prove optimal results form architecture uniform sampling architecture discussion section result theorem suggest sampling rate sufficient exact recovery using architecture scales linearly coherence parameter respectively discussed earlier coherence parameters quantify energy dispersion correlated signal ensemble across time array elements ideally would like sampling rate scale factor independent signal characteristics coherences achieve signals preprocessed random filters avmm signal energy evenly distributed across time array elements resultant signals randomly modulated filtered sampled uniformly rate modified sampling architectures depicted figure nus adc nus adc nus adc figure architecture analog multiplier avmm takes random linear combinations input signals produce output signals equalizes energy across channels random lti filters convolve signals diverse waveform results dispersion signals across time resultant signals sampled locations selected randomly uniform grid average rate using sampling nus adc channel draft ahmed romberg december lti low pass adc lti adc low pass lti low pass adc figure architecture random lti filters disperse signal across time analog multiplier avmm takes random linear combinations input signals produce output signals amounts choosing mixing matrix dense randomorthogonal matrix well dispersed signals across time array elements randomly modulated filtered sampled rate recall random lti filters pass convolve signals diverse impulse response disperses signal energy time see lemma use random lti filter channel action random convolution signal ensemble modeled right multiplication circulant random orthogonal matrix underlying avmm takes random linear combination input signals produce output signals equalizes signal energy across array elements regardless initial energy distribution discussed earlier action avmm left multiplication ensemble architecture avmm ensure mixing signals across array elements take mixing matrix random orthonormal matrix thus samples collected architecture subset entries defined architecture avmm modified random orthonormal matrix implies unlike samples architecture collects defined multiply matrix samples random orthogonal matrices left right multiplication results modifying singular vectors note matrix matrix samples either isometry rank respectively new left right draft ahmed romberg december singular vectors sense random orthogonal matrices hence incoherent following lemma shows incoherence matrix lemma fix matrices left right singular vectors respectively create random orthonormal matrices let coherences defined following conclusions log log max log log max holding probability exceeding proof lemma presented section light clear samples collected using architecture randomly selected subset entries using result sufficient sampling rate successful reconstruction signals becomes max log 
light clear samples collected using architecture observation combining bound lemma theorem immediately replaced provides following corollary dictates sampling rate sufficient exact recovery using uniform sampling architecture figure corollary fix correlated ensemble exactly reconstructed using optimization program probability least samples collected adc figure rate max log universal constant depending addition ratio number output input signals avmm must satisfy log sufficiently large constant numerical experiments section study performance proposed sampling architectures numerical experiments mainly show correlated ensemble acquired paying small factor top optimal sampling rate roughly show distributed nature sampling architecture figure showing increasing number adcs array elements sampling burden adc reduced net sampling rate shared evenly among adcs finally show reconstruction algorithm robust additive noise sampling performance experiments section generate unknown matrix synthetically multiplying tall fat gaussian matrices objective recover batch signals samples taken given window time using sampling architecture figure take experiments results hint draft ahmed romberg december theorem technical requirement due proof technique use following parameters evaluate performance sampling architecture oversampling factor oversampling factor ratio cumulative sampling rate inherent unknowns successful reconstruction declared relative error obeys relative error first experiment shows graph figure point marked black dot represents minimum sampling rate required successful reconstruction specific probability success point computed empirically averaging independent iterations blue line shows fit black dots clear plot reasonably large values sampling rate within small constant optimal rate sampling rate oversampling context application assumption described section graph figure shows fixed number sources sufficient sampling rate inversely proportional number receiver array elements black dot represents minimum sampling rate required successful reconstruction probability blue line fit marked points words figure illustrates relationship number adcs sampling rate fixed number sources importantly increase receiver array elements reduces sampling burden adcs number source rank number adcs figure performance sampling architecture experiments take ensemble signals bandlimited probability success computed iterations oversampling factor function number underlying independent signals blue line fit data points sampling rate versus number recieving antennas blue line fit data points stable recovery second set experiments study performance recovery algorithm measurements contaminated additive measurement noise generate noise using standard draft ahmed romberg december gaussian model select natural choice condition holds high probability experiments figure solve optimization program plot figure shows relationship ratio snr snr log realtive error relative error log fixed oversampling factor result shows relative error degrades gracefully decreasing snr figure plot depicts relative error function oversampling factor fixed snr relative error decrease increasing sampling rate relative error relative error snr oversampling figure recovery using matix lasso presence noise input ensemble simulated random demodulator consists signals bandlimited number latent independent signals snr versus relative error oversampling factor relative error function sampling rate snr fixed proof lemma start proof lemma taking random 
orthogonal matrix proof recall defined let denote standard basis vectors begin proof noting standard result see reads max max log probability least proving lemma prove intermediate result max kve max log draft ahmed romberg december standard basis vectors assuming even clear extend argument odd write cos cos sin sin equal probability uniform random variables independent fact fixed uniform random variables sign cos sign sin independent one another thus probability distribution diag entries iid random variables light replace fixed write zwk column apply following concentration inequality theorem let vector whose entries independent random variables let fixed matrix every kskf diag case apply theorem kqk thus kve using union bound max make probability less taking log follows prove write let kth column let fixed row index column index write entry mth row zwk tall orthonormal matrix let since iid random variables standard applications hoeffding inequality tells zwk kpm thus probability exceeding log draft ahmed romberg december taking maximum sides plugging bound shows max max log max log holds probability least equality follows fact proves first claim lemma similarly implies log log defined last equality follows fact finally evaluating maximum sides using bound shows log max log max max proves second claim lemma proof theorem preliminaries recall obtain measurements unknown matrix random measurement ensemble denote rows mixing matrix random binary support set zero elsewhere addition vectors independently generated every theorem avmm simply replicates without mixing copies input signals produce output signals amounts choosing construction kan also recall using definition linear map measurements compactly expressed moreover adjoint operator second equality result also useful visualize linear operator matrix form denotes tensor product general tensor product matrices given big matrix draft ahmed romberg december definition easy visualize let denote rows matrices respectively begin defining subspace associated decomposition given orthogonal projections onto orthogonal complement defined respectively proofs later repeatedly make use following calculation kpt hpt han kan kan observe kan kan leads finally also require bound operator norm linear map end note measurement matrices orthogonal every standard inner product han whenever directly implies following bound operator kak kan kpt last inequality used fact although much tighter bound achieved using results random matrix theory loose bound sufficient purposes sufficient condition uniqueness uniqueness minimizer guaranteed sufficient condition given proposition matrix unique minimizer range null kpt kpt kpt light proposition sufficient show range kpt every null kpt kpt kpt holds immediately shown follows addition arbitrary apt kpt apt kpt kpt last inequality obtained plugging kpt apt shown true appropriate choice probability least corollary combining last two inequalities gives result draft ahmed romberg december golfing scheme random modulator technical reasons work partial linear maps modified linear map define partitions index set every clearly take number partitions partial linear maps defined xdn using definition clear every corresponding adjoint operator maps vector matrix also useful make note following versions definition xdn second definition emphasizes fact linear map thought big matrix operates vectorized linear operators defined subsets write iterative construction dual certificate range take projecting onto subspace sides results define 
iteration takes equivalent form take candidate dual certificate rest section concerns showing obeys conditions let start showing kpt holds end note iterative construction following bound immediately follows kpt kkw lemma kpt every means cuts every iteration giving following bound frobenius norm final iterate using union bound bound holds probability least proves candidate dual certificate obeys first condition since implies number output channels multiplier figure must factor roughly log compared input channels log assume ensured worst case doubling draft ahmed romberg december however believe requirement merely artifact using golfing scheme proof strategy theorem practice simulations point number channels output avmm equal input channels iterative construction clear converge showing satisfies second condition begin kpt last equality follows fact since kpt second last inequality requires every using lemma true probability least factor comes union bound every lemma combining sample complexities using definition gives proof theorem key lemmas state key lemmas prove theorem lemma fix assume max universal constant depending linear operator obeys probability least proof lemma presented section corollary fix assume max universal constant depends linear operator defined obeys kpt apt probability least proof proof corollary follows exactly steps proof lemma difference take lemma define coherence iterates max max conditions probability least proof proof lemma follows similar techniques matrix bernstein inequality used lemma similar results found skip proof due space constraints draft ahmed romberg december using definition fact see invoking lemma every iteratively conclude probability least lemma fix take sufficiently large constant let fixed matrix defined probability least proof lemma presented section references fazel matrix rank minimization applications dissertation stanford university march recht fazel parrilo guaranteed solutions linear matrix equations via nuclear norm minimization siam review vol recht exact matrix completion via convex optimization found comput vol gross recovering matrices coefficients basis ieee trans inform theory vol gorodnitsky rao sparse signal reconstruction limited data using focuss minimum norm algorithm ieee trans sig vol fuchs multipath detection estimation ieee trans sig vol application global matched filter doa estimation uniform circular arrays ieee trans signal vol april romberg tao robust uncertainty principles exact signal reconstruction highly incomplete frequency information ieee trans inform theory vol february kunis rauhut random sampling sparse trigonometric polynomials appl comp harmon analysis vol rudelson vershynin sparse reconstruction fourier gaussian measurements comm pure appl vol duarte baraniuk spectral compressive sensing appl comp harm analysis vol july tang bhaskar shah recht compressed sensing grid ieee trans inform theory vol draft ahmed romberg december towards mathematical theory comm pure appl vol june ali ahmed justin romberg compressive multiplexing correlated signals ieee trans inform theory vol hormati roy vetterli distributed sampling signals linked sparse filtering theory applications ieee trans sig vol baron duarte wakin sarvotham baraniuk distributed compressive sensing arxiv preprint mishali eldar dounaevsky shoshan xampling analog digital subnyquist rates iet circuits devices vol mishali eldar blind multiband signal reconstruction compressed sensing analog signals ieee trans sig vol mishali eldar elron xampling signal 
acquisition processing union subspaces ieee trans sig vol mantzel romberg compressed subspace matching continuum arxiv preprint malvar staelin lot transform coding without blocking effects ieee trans speech signal vol april asif romberg sparse recovery streaming signals using ieee trans sig vol schmidt multiple emitter location signal parameter estimation ieee trans antennas vol roy kailath signal parameters via rotational invariance techniques ieee trans speech signal vol slepian bandwidth proceedings ieee vol march prolate spheroidal wave functions fourier analysis uncertainty discete case bell systems tech journal vol schlottmann hasler highly dense low power programmable analog multiplier fpaa implementation ieee emerg sel topic circuits vol chawla bandyopadhyay srinivasan hasler currentmode programmable analog multiplier two decades linearity proc ieee conf custom integr tropp laska duarte romberg baraniuk beyond nyquist efficient sampling sparse bandlimited signals ieee trans inform theory vol laska kirilos duarte raghed baraniuk massoud theory implementation converter using random demodulation proc ieee int symp circuits yoo becker loh monge rate receiver cmos proc ieee radio freq integr circuits symp rfic draft ahmed romberg december yoo turnes nakamura becker sovero wakin grant romberg compressed sensing parameter extraction platform radar pulse signal acquisition submitted ieee emerg sel topics circuits february murray pouliquen andreou lauritzen design cmos data converter theory architecture implementation proc ieee annu conf inform sci syst ciss baltimore romberg compressive sensing random convolution siam imag vol haupt bajwa raz nowak toeplitz compressed sensing matrices applications sparse channel estimation ieee trans inform theory vol rauhut romberg tropp restricted isometries partial random circulant matrices appl comput harmonic vol tropp wakin duarte baron baraniuk random filters compressive sampling reconstruction proc ieee int conf speech signal process icassp toulouse france recht simpler approach matrix completion mach learn vol plan matrix completion noise proc ieee vol mohan fazel new restricted isometry results noisy recovery proc ieee int symp inform theory isit austin texas june ahmed recht romberg blind deconvolution using convex programming ieee trans inform theory vol koltchinskii lounici tsybakov penalization optimal rates noisy matrix completion ann vol fazel recht parrilo compressed sensing robust recovery low rank matrices proc ieee asilomar conf signals syst pacific grove laurent massart adaptive estimation quadratic functional model selection ann ledoux concentration measure phenomenon ams vol tropp tail bounds sums random matrices found comput vol eldar kutyniok compressed sensing theory applications press draft ahmed romberg december cambridge university appendix proof key lemmas proof key lemmas mainly relies using matrix bernstein inequality control operator norms sums random matrices matrix inequality use specialized version matrix inequality depends orlicz norms orlicz norm random matrix defined inf exp suppose constant following proposition holds proposition let iid random matrices dimensions satisfy suppose define max constant probability least max log log proof lemma start writing sum independent random matrices using obtain using fact expectation quantity evaluates quantity therefore expressed sum independent zero mean random matrices following form employ matrix bernstein inequality control operator norm sum proceed define operator maps hpt 
operator rank one therefore pthe operator norm kzn kpt ease notation use shorthand begin computing variance follows draft ahmed romberg december last inequality follows fact symmetric semidefinite matrices square matrices simply given kpt develop operator norm result simplified expression using kpt using definition bound using fact second term simplified last inequality follows form fact kpt since simple calculation reveals expectation diag diagonal matrix obtained setting entries zero denotes identity matrix ones diagonal positions indexed directly implies max last equality follows definition coherence plugging bound finally calculate orlicz norm last ingredient obtain bernstein bound first important see kzn kzn kzn kpt equality follows form fact operator using last equation max max max kpt max max moreover simple calculation using facts shows log log using together using log bernstein inequality proposition kpt max log conclude choosing ensures kpt proves lemma using fact draft ahmed romberg december proof lemma proof lemma start writing sum independent random matrices using follows recall random binary defined earlier expectation random quantity last two equalities follow fact bound operator norm light discussion expressed following sum independent zero mean random matrices shorthand define compute variance start used fact kdn since symmetric matrix together definition implies therefore kan max max max max inequalities follow using definition coherence second variance term skip similar step first term land directly kan draft ahmed romberg december last equality result one show fixed vector fact vector independent rademacher random variables locations indexed zero elsewhere following kxb holds equal zero elsewhere moreover diagonal matrix ones zero elsewhere using max max max last inequality use definition combined light maximum accounts variance last inequality follows assumption finally need compute upper bound orlicz norm random variable begin using similar simple facts kan kan kdn using standard calculations see example compute following finite bound norm random variable max max max max max max last inequality follows using directly gives max max moreover using loose bound variance easy see log log results plugged proposition obtain kap max log log log holds probability least recall lemma follows using bound choosing universal constant depends fixed parameter draft ahmed romberg december proof theorem first step proof following oracle inequality gives upper bound deviation true solution mean squared sense theorem oracle inequlaity suppose observe noisy measurements rank given fro scalar solution nuclear norm penalized estimator obeys min required bound spectral norm begin bounding first term using corollary lemma stated follows corollary let fixed matrix defined log log ckx max probability least proof proof corollary similar proof lemma main difference number partitions moreover place proof development replace obtain bound understandably similar lemma fix sufficiently large constant following bound log holds probability least using corollary lemma bound obtain probability least taking without loss generality universal constant depends fixed parameter allows choose application theorem proves theorem proof lemma proof lemma requires use matrix bernstein inequality required bound spectral norm sum start summands variables zero mean follows start computing variance max draft ahmed romberg december last inequality follows facts independent identically distributed implying similarly 
arguments lead kan combining using gives assume final quantity required orlicz norm simply kan log end using log bernstein bound max log log using fact log proves result draft ahmed romberg december | 7 |
white matter fiber segmentation using functional varifolds kuldeep pietro benjamin stanley olivier christian sep livia technologie montreal canada aramis inria paris sorbonne upmc univ paris inserm cnrs institut cerveau moelle icm boulevard paris france departments neurology neuroradiology paris france ltci lab images group paristech paris france montpellier france abstract extraction fibers dmri data typically produces large number fibers common group fibers bundles end many specialized distance measures mcp used fiber similarity however distance based approaches require correspondence focus geometry fibers recent publications highlighted using microstructure measures along fibers improves tractography analysis also many neurodegenerative diseases impacting white matter require study microstructure measures well white matter geometry motivated propose use novel computational model fibers called functional varifolds characterized metric considers geometry microstructure measure gfa along fiber pathway use cluster fibers dictionary learning sparse framework present preliminary analysis using hcp data introduction recent advances diffusion magnetic resonance imaging dmri analysis led development powerful techniques investigation white matter connectivity human brain measuring diffusion water molecules along white matter fibers dmri help identify connection pathways brain better understand neurological diseases related white matter since extraction fibers dmri data known tractography typically produces large number fibers common group fibers larger clusters called bundles clustering fibers also essential creation white matter atlases visualization statistical analysis microstructure measures along tracts fiber clustering methods use specialized distance measures mean closest points mcp distance however approaches require correspondence fibers consider fiber geometry another important aspect white matter characterization statistical analysis microstructure measures highlighted recent publications using microstructure measures along fibers improves tractographic analysis motivated propose use novel computational model fibers called functional varifolds characterized metric considers geometry microstructure measure generalized fractional anisotropy along fiber pathways motivation work comes fact integrity white matter important factor underlying many cognitive neurological disorders vivo tissue properties may vary along tract several reasons different populations axons enter exit tract disease strike local positions within tract hence understanding diffusion measures along fiber tract tract profile may reveal new insights white matter organization function disease obvious mean measures tract tract geometry alone recently many approaches proposed tract based morphometry perform statistical analysis microstructure measures along major tracts establishing fiber correspondences studies highlight importance microstructure measures approaches either consider geometry signal along tracts intuitive approach would consider microstructure signal clustering also however elusive due lack appropriate framework potential solution explore novel computational model fibers called functional varifolds generalization varifolds framework advantages using functional varifolds follows first functional varifolds model fiber geometry well signal along fibers also require pointwise correspondences fibers lastly fibers need orientation framework currents test impact new computational model fiber clustering task compare 
performance existing approaches task clustering method reformulate dictionary learning sparse coding based framework proposed choice framework driven ability describe entire fibers compact dictionary prototypes bundles encoded sparse combinations multiple dictionary prototypes alleviates need explicit representation bundle centroid may defined may represent actual object also sparse coding allows assigning single fibers multiple bundles thus providing soft clustering contributions paper threefold novel computational model modeling fiber geometry signal along fibers generalized clustering framework based dictionary learning sparse coding adapted computational models comprehensive comparison models clustering fibers white matter fiber segmentation using functional varifolds modeling fibers using functional varifolds framework functional varifolds fiber assumed polygonal line segments described center point tangent vector centered length respectively fiber segments let signal values center points respectively vector field belonging reproducing kernel hilbert space rkhs fibers modeled based functional varifolds details found inner product metric defined gaussian kernels kernel exp exp kernel bandwidth parameters varifolds computational model using fiber geometry used comparison experiments drop signal values center pppoints thus varifoldsbased representation fibers hence inner product defined hvx exp fiber clustering using dictionary learning sparse coding fiber clustering extend dictionary learning sparse coding based framework presented let set fibers modeled using atom matrix representing dictionary functional varifolds coefficients fiber belonging one bundles cluster membership matrix containing sparse codes fiber instead explicitly representing bundle prototypes bundle expressed linear combination fibers dictionary defined since operation linear defined functional varifolds problem dictionary learning using sparse coding expressed finding matrix bundle prototypes assignment matrix minimize following cost function arg min subject smax parameter smax defines maximum number elements sparsity level provided user input clustering method important advantage using formulation reconstruction error term requires inner product varifolds let gram matrix denoting inner product pairs training fibers qij hvxi vxj matrix calculated stored computations problem reduces linear algebra operations involving matrix multiplications solution obtained alternating sparse coding dictionary update sparse codes fiber updated independently solving following arg min awi subject smax arg min qaw smax weights obtained using kernelized orthogonal matching pursuit komp approach proposed positively correlated atom selected iteration sparse weights obtained solving regression problem note since size bounded smax otained rapidly also case large number fibers nystrom method used approximating gram matrix dictionary update recomputed applying following update scheme convergence aij aij qaw experiments data evaluate different computational models dmri data unrelated subjects females males age human connectome project hcp dsi studio used signal reconstruction mni space streamline tracking employed generate fibers per subject minimum length maximum length generalized fractional anisotropy gfa extends standard fractional anisotropy orientation distribution functions considered measure microstructure report results obtained gfa measure may used parameter impact performed clustering manually selected pairs fibers clusters similar 
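The inner product sketched above combines a Gaussian kernel on segment centers, a Gaussian kernel on the signal (e.g. GFA) carried by the segments, and an orientation-free term in the tangent vectors. A minimal numerical sketch with that structure is below; the exact kernel combination, the squared-cosine tangent term, and the bandwidth values are assumptions chosen to match the qualitative description rather than the paper's exact formula.

```python
import numpy as np

def fvar_inner(X, fX, Y, fY, sigma_pos=10.0, sigma_sig=0.2):
    """Functional-varifold-style inner product between two polygonal fibers.

    X, Y   : (n, 3), (m, 3) point coordinates along each fiber
    fX, fY : (n,), (m,) signal values (e.g. GFA) sampled at those points
    """
    cX, tX = 0.5 * (X[1:] + X[:-1]), X[1:] - X[:-1]            # segment centers and tangents
    cY, tY = 0.5 * (Y[1:] + Y[:-1]), Y[1:] - Y[:-1]
    gX, gY = 0.5 * (fX[1:] + fX[:-1]), 0.5 * (fY[1:] + fY[:-1])  # signal at segment centers

    d2 = ((cX[:, None, :] - cY[None, :, :]) ** 2).sum(-1)
    k_pos = np.exp(-d2 / sigma_pos**2)                          # Gaussian kernel on positions
    k_sig = np.exp(-(gX[:, None] - gY[None, :]) ** 2 / sigma_sig**2)  # Gaussian kernel on signal

    lX, lY = np.linalg.norm(tX, axis=1), np.linalg.norm(tY, axis=1)
    cos2 = (tX @ tY.T / (lX[:, None] * lY[None, :])) ** 2       # squared cosine: orientation-free
    return float((k_pos * k_sig * cos2 * lX[:, None] * lY[None, :]).sum())

# toy usage: two nearby fibers with identical GFA profiles
t = np.linspace(0, 1, 20)
X = np.stack([t * 50, np.zeros_like(t), np.zeros_like(t)], axis=1)
Y = X + np.array([0.0, 2.0, 0.0])
fX = 0.4 + 0.1 * t
fY = 0.4 + 0.1 * t
print(fvar_inner(X, fX, X, fX), fvar_inner(X, fX, Y, fY))
```

A Gram matrix of such inner products over all fiber pairs is exactly the quantity the dictionary learning and sparse coding steps above operate on.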
major bundles modeled fibers using different computational models analyzed impact varying kernel bandwidth parameters range parameters estimated observing values distance centers fiber segments difference along tract gfa values selected multiple pairs fibers figure top left shows gfa fibers pairs corresponding right corticospinal tract cst corpus callosum right inferior fasciculus ifof cosine similarity degrees reported fiber pairs modeled using varifolds var functional varifolds fvar figure top left shows gfa fiber pairs visualization reflect variation fiber geometry microstructure measure gfa along fiber difference gfa along fiber select fiber pairs visualization variation difference gfa values along fibers support hypothesis modeling along tract signal along geometry provides additional information change cosine similarity degrees fig gfa visualization cosine similarity pairs fibers three prominent bundles cst ifof using framework varifolds var functional varifolds fvar top left comparing variation cosine similarity select fiber pairs kernel bandwidth parameters framework functional varifolds top right cst middle left middle right ifof impact clustering consistency measured using average silhouette functional varifolds varifolds bottom left functional varifolds gfa bottom right using varifolds degrees using functional varifolds cst degrees degrees reflect drop cosine similarity along tract signal profiles similar shows functional varifolds imposes penalty different along fiber signal profiles figure also compares impact varying kernel bandwidth parameters functional varifolds using similarity angle pairs selected fibers top right cst bottom left bottom right ifof show variation comparing parameter variation images figure observe cosine similarity values parameter space show similar trends pairs fibers observation allows select single pair parameter model fvar var gfa mcp fig mean silhouette obtained varifolds varifolds gfa mcp computed varying number clusters subjects seed values left detailed results obtained subjects using right values experiments used experiments based cosine similarity values figure smaller values make current fiber pairs orthogonal larger values lose discriminative power fiber pairs high similarity quantitative analysis report quantitative evaluation clusterings obtained using functional varifolds fvar varifolds var mcp gfa computational model dictionary learning sparse coding framework applied computational models hcp subjects compute gramian matrix using fibers randomly sampled full brain seed values mcp distance dij calculated fiber pair described gramian matrix obtained using radial basis function rbf kernel kij exp parameter set empirically experiments since evaluation performed unsupervised setting use silhouette measure assess comparing clustering consistency silhouette values range measure similar object cluster cohesion compared clusters separation figure bottom row shows impact clustering consistency functional varifolds varifolds gfa figure right gives average silhouette clusters computed subjects seed values impact using geometry microstructure measures along fibers evaluated quantitatively comparing clusterings based functional varifolds obtained using geometry varifolds mcp signal gfa seen using gfa alone leads poor clusterings reflected negative silhouette values comparing functional varifolds varifolds gfa observe consistently improved performance different numbers clusters validate hypothesis also report average silhouette seed values obtained 
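The figures described above report similarity between fiber pairs as an angle in degrees, i.e. the arccosine of the normalized inner product. Given a Gram matrix Q of (functional-)varifold inner products, as used throughout the clustering framework, this is a short computation; the toy Q below is invented purely for illustration.

```python
import numpy as np

def pairwise_angles_deg(Q):
    """Angles (degrees) between fibers from a Gram matrix Q of varifold inner products."""
    norms = np.sqrt(np.diag(Q))
    cos = np.clip(Q / np.outer(norms, norms), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

# toy Gram matrix of three "fibers": 1 and 2 similar, 3 different
Q = np.array([[4.0, 3.6, 0.5],
              [3.6, 4.1, 0.6],
              [0.5, 0.6, 3.0]])
print(np.round(pairwise_angles_deg(Q), 1))
```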
subjects using results demonstrate functional varifolds give consistently better clustering compared computational models using qualitative visualization figure top row shows dictionary learned single subject using functional varifolds fvar varifolds var mcp distance visualization purposes fiber assigned single cluster represented using unique color second third rows silhouette analyzes clustering consistency signal profile fvar var mcp fig full clustering visualization top row single cluster visualization mid row gfa based color coded visualization selected single cluster bottom row using following computational models fibers functional varifolds left column varifolds middle column mcp distance right column superior axial views note top row figure unique color code figure depict specific cluster corresponding gfa profiles observe three computational models produce plausible clusterings gfa profiles selected cluster correspondence across computational models observe functional varifolds enforce geometric well signal profile similarity moreover clustering produced varifolds mcp using geometric properties fibers similar one another noticeably different functional varifolds conclusion novel computational model called functional varifolds proposed model geometry microstructure measure along fibers considered task fiber clustering integrated functional varifolds model within framework based dictionary learning sparse coding driving hypothesis combining signal fiber geometry helps tractography analysis validated quantitatively qualitatively using data human connectome project results show functional varifolds yield consistent clusterings gfa varifolds mcp study considered fully unsupervised setting investigation would required assess whether functional varifolds augment aid reproducibility results acknowledgements data provided human connectome project references charlier charon fshape framework variability analysis functional shapes foundations computational mathematics charon varifold representation nonoriented shapes diffeomorphic registration siam journal imaging sciences colby soderberg lebel dinov thompson sowell statistics allow enhanced tractography analysis neuroimage corouge gouttard gerig towards shape model white matter fiber bundles using diffusion tensor mri isbi ieee gori colliot worbe fallani chavez lecomte poupon hartmann ayache prototype representation approximate white matter bundles weighted currents miccai springer hagmann jonasson maeder thiran wedeen meuli understanding diffusion imaging techniques scalar imaging diffusion tensor imaging beyond radiographics suppl kumar desrosiers sparse coding approach efficient representation segmentation white matter fibers isbi ieee kumar desrosiers siddiqi brain fiber clustering using kernelized matching pursuit machine learning medical imaging lncs vol kumar desrosiers siddiqi colliot toews fiberprint subject fingerprint based sparse code pooling white matter fiber analysis neuroimage maddah grimson warfield wells unified framework clustering quantitative analysis white matter fiber tracts medical image analysis moberts vilanova van wijk evaluation fiber clustering methods diffusion tensor imaging vis ieee donnell westin golby morphometry white matter group analysis neuroimage siless medina varoquaux thirion comparison metrics algorithms fiber clustering prni ieee van essen smith barch behrens yacoub ugurbil consortium human connectome project overview neuroimage wang yap shen application neuroanatomical features tractography 
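The quantitative comparison above scores clusterings with the mean silhouette, which only needs pairwise dissimilarities between fibers; these follow from the Gram matrix of inner products (or directly from MCP distances for that baseline). A sketch assuming the induced distance d_ij = sqrt(Q_ii + Q_jj - 2 Q_ij) and scikit-learn's precomputed-metric silhouette; the toy "fibers" and labels are synthetic.

```python
import numpy as np
from sklearn.metrics import silhouette_score

def gram_to_dist(Q):
    """Distance matrix induced by a Gram matrix of inner products."""
    d2 = np.diag(Q)[:, None] + np.diag(Q)[None, :] - 2.0 * Q
    return np.sqrt(np.maximum(d2, 0.0))

rng = np.random.default_rng(4)
# toy Gram matrix: two well separated groups of 20 "fibers" each
F = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(6, 1, (20, 5))])
Q = F @ F.T
labels = np.repeat([0, 1], 20)

D = gram_to_dist(Q)
print(silhouette_score(D, labels, metric="precomputed"))   # high for well separated clusters
```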
clustering human brain mapping wassermann bloy kanterakis verma deriche unsupervised white matter fiber clustering tract probability map generation applications gaussian process framework white matter fibers neuroimage yeatman dougherty myall wandell feldman tract profiles white matter properties automating quantification plos one yeh tseng high angular resolution brain atlas constructed diffeomorphic reconstruction neuroimage | 1 |
oct slower faster carlos dirk instituto investigaciones aplicadas sistemas universidad nacional cgg http centro ciencias complejidad unam senseable city lab massachusetts institute technology usa mobs lab northeastern university usa itmo university petersburg russian federation department humanities social political sciences gess eth http october abstract slower faster sif effect occurs system performs worse components try better thus moderate individual efficiency actually leads better systemic performance sif effect takes place variety phenomena review studies examples sif effect pedestrian dynamics vehicle traffic traffic light control logistics public transport social dynamics ecological systems adaptation drawing examples generalize common features sif effect suggest possible future lines research introduction fast athlete run race goes fast burn become tired finishing runs conservatively get tired make best time minimize race time fast possible without burning goes faster actually race slowly example sif effect order run faster sometimes necessary run slower burn trivial calculate running speed lead best race depends athlete race distance track temperature humidity daily performance running dash done fast running marathon demands carefully paced race fast would athlete run marathon started speed finish marathon successfully would obviously run slowly several examples sif effect described next section generalize common features phenomena discuss potential causes promising lines research towards unified explanation sif effect examples pedestrian evacuation perhaps first formal study sif effect related pedestrian flows helbing modelling crowds like particles social forces interacting among helbing helbing shown individuals try evacuate room quickly lead intermittent clogging reduced outflow compared calmer evacuation context sif effect also known freezing heating stanley trying exit fast makes pedestrians slower calmer people manage exit faster led people suggest obstacles close exits precisely reduce friction helbing counterintuitively slowdown evacuation increase outflow also related study aircraft evaluation found critical door width determines whether competitive evacuation increase decrease evacuation time kirchner words pushy people evacuate slower narrow doors sif evacuate faster doors wide enough fif pedestrians crossing road another example concerns mixed pedestrian vehicle traffic imagine pedestrians trying cross road location traffic light pedestrian crossing marked typical situation along roads speed limit shared spaces use pedestrians would cross gap two successive vehicles exceeds certain critical separation ensures safe crossing road however two types pedestrians patient pushy ones pushy pedestrians might force vehicle slow patient pedestrians would would wait larger gap surprisingly pedestrians patient type average would wait shorter time period jiang sif effect come pushy pedestrian slowed vehicle arriving pedestrians pass road takes long time pedestrians arrive stopped cars accelerate waiting time however long vehicle queue formed large enough gap cross road occurs vehicles entire vehicle queue dissolved consequence pedestrians wait long time cross altogether better pedestrians wait large enough gaps force vehicles slow vehicle traffic sif effects also known vehicle traffic helbing huberman helbing treiber helbing helbing nagel surprisingly speed limits sometimes reduce travel times case traffic density enters metastable regime traffic flow sensitive disruptions may 
break causes largely increased travel times speed limit delay breakdown fluid traffic flows reduces variability vehicle speeds homogenization avoids disturbances flow big enough trigger breakdown amplitude vehicles fast safety distance vehicles must increased thus less vehicles able use road example maximum capacity vehicles per per lane reached free traffic flow breaks capacity reduced vehicles per per lane vehicles slow due increased density traffic jams propagate following car tends brake vehicle ahead phase transition stable unstable flow traffic depends desired speed thus maximize flow optimal speed highway depend current density however maximum flow lies tipping point thus small perturbation trigger waves reduce highway capacity similar consideration applies maneuvers kesting pushy drivers might force cars neighboring lane slow changing lanes overtake another car patient drivers would consequence pushy drivers may cause disruption metastable traffic flow may trigger breakdown capacity drop consequently patient drivers avoid delay breakdown traffic flow thereby managing progress faster average one may also formulate game theoretical terms traffic flow metastable drivers faced social dilemma situation choosing patient behavior beneficial everyone pushy behavior produce small individual advantages cost drivers consequence tragedy commons results pushy drivers undermine stability metastable traffic flow causing congestion forces everyone spend time travel complementary phenomenon observed braess paradox braess steinberg zangw adding roads reduce flow capacity road network traffic light control sif effect also found systems urban traffic light control helbing mazloumian approach works well low traffic volumes otherwise forcing vehicles wait time speed overall progress reason produce vehicle platoons green light efficiently serve many vehicles short time period gershenson gershenson rosenblueth zubillaga similarly may better switch traffic lights less frequently switching reduces service times due time lost amber lights green wave coordination vehicle flows several successive traffic lights passed without stopping another good example demonstrating waiting red light may rewarding altogether similarly interesting observations made traffic light control based decentralized flow control distributed control helbing helbing intersection strictly minimizes travel times vehicles approaching according principle homo economicus create efficient traffic flows low moderate invisible hand phenomenon however vehicle queues might get hand intersection utilization increases therefore beneficial interrupt travel time minimization order clear vehicle queue grown beyond certain critical limit avoids spillover effects would block intersections cause quick spreading congestion large parts city consequently waiting long queue cleared speed traffic altogether putting differently beat selfish optimization homo economicus strictly best neglects coordination neighbors logistics supply chains similar phenomena urban traffic flows found logistic systems supply chains helbing helbing seidel peters studied example case harbor logistics using automated guided vehicles container transport proposal reduce speed vehicles reduced required safety distances vehicles less conflicts movement occurred automatic guided vehicles wait less way transportation times could overall reduced even though movement times obviously increased made similar observation semiconductor production wet benches used etch structures silicium wavers 
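The traffic examples above turn on the fact that maximum flow sits at a critical density, a tipping point beyond which small perturbations grow into jams and capacity drops. A standard way to see this numerically is the Nagel-Schreckenberg ring-road cellular automaton; it is not one of the specific car-following or social-force models cited here, just a minimal sketch, and all parameter values below are illustrative.

```python
import numpy as np

def nasch_flow(density, L=1000, vmax=5, p=0.3, steps=2000, burn_in=500, seed=0):
    """Mean flow (vehicles per cell per step) of a Nagel-Schreckenberg ring road."""
    rng = np.random.default_rng(seed)
    n = max(1, int(density * L))
    pos = np.sort(rng.choice(L, size=n, replace=False))
    vel = np.zeros(n, dtype=int)
    total = 0
    for t in range(steps):
        gap = (np.roll(pos, -1) - pos - 1) % L        # free cells to the car ahead (ring order)
        vel = np.minimum(vel + 1, vmax)               # accelerate toward the desired speed
        vel = np.minimum(vel, gap)                    # brake to avoid collisions
        slow = rng.random(n) < p                      # random slowdowns (driver imperfection)
        vel[slow] = np.maximum(vel[slow] - 1, 0)
        pos = (pos + vel) % L
        if t >= burn_in:
            total += vel.sum()
    return total / ((steps - burn_in) * L)

for rho in [0.05, 0.10, 0.15, 0.25, 0.40, 0.60]:
    print(rho, round(nasch_flow(rho), 3))   # flow rises with density, peaks, then falls as jams form
```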
using particular chemical solutions achieve good results wavers stay chemical baths longer minimum shorter maximum time period therefore might happen several silicium wavers need moved around time moving gripper handler must make sure stay within minimum maximum times turns slightly extending exposure time chemical bathes enables much better coordination movement processes thereby reaching percent higher throughput third logistics project throughput packaging plant increased one central production machines plant frequently broke operated full speed whenever operating well however filled buffer production plant extent made operation inefficient effect understood queuing theory according cycle times dramatically increase capacity buffer approached public transport public transportation systems desirable equal headways vehicles buses reach regular time separations vehicles however equal headway configuration unstable gershenson pineda forcing equal headways minimizes waiting times stations nevertheless travel time independent waiting time equal headways imply idling leaving passengers stations different demand vehicle station still used regulate headways adaptively gershenson considering local information vehicles able respond adaptively immediate demand station method also sif effect passengers wait time station reach destination faster board vehicle idling necessary maintain equal headways social dynamics axelrod axelrod proposed interesting model opinion formation model agents may change opinion depending opinion neighbors eventually opinions converge stable state however agents switch opinion fast might delay convergence stark thus sif effect fastest convergence necessarily obtained fastest opinion change model also phase transition probably related optimal opinion change rate vilone also experimental evidence sif effect group decisions designing new buildings slowing deliberative process teams accelerates design construction buildings cross extrapolating results one may speculate financial trading narang may also produce sif effect sense trading microseconds scale generates price information fluctuations could generate market instabilities leading crashes slower economic growth easley combinatorial game theory siegel sometimes best possible move taking queen chess necessarily best move long term words highest possible gain move give necessarily best game result russell norvig ecology predator consumes prey fast prey consume predator population decline thus prudent predator slobodkin goodnight actually spread faster greedy one similar sif effect applies relationships parasites taking many resources host causing demise dunne long timescales evolution favor symbiotic parasitic relationships promoting mechanisms cooperation regulate interaction different individuals sachs virgo see principle applies natural resource management fisheries pauly catches excessive enough fishes left maintain numbers subsequent catches poor estimated apart ecological impact overfishing left void billion per year due reduced catches toppe however regulating much fish caught per year complicated maximum sustainable yield varies species species maunder calculation optimal yields per year trivial task adaptation evolution development learning seen different types adaptation acting different timescales aguilar also adaptation seen type search downing computational searches known needs balance exploration exploitation blum roli algorithm explore different possible solutions exploit solutions similar already found much 
exploration much exploitation lead longer search times much breadth exploration explore slightly different types solution much depth exploitation might lead local optima data overfitting key problem precise balance exploration diversification exploitation intensification depends precise search space wolpert macready timescale gershenson watson example sif described biological evolution sellis haploid species single copy genome bacteria adapt faster diploid species two copies genome plants animals still fastchanging environment haploids adapt fast population loses genome variation diploids maintain diversity diversity diploids adapt faster changes environment begin evolutionary search many different states principle would desirable find solution fast possible exploiting current solutions still mentioned might lead suboptimality sif evolving new features optimizing multidimensional function training neural network efficient search eventually slow known simulated annealing much exploration would suboptimal also critical question find precise balance speed search much possible computationally seems question reducible wolfram know posteriori precise balance given problem still finding balance would necessary adiabatic quantum computation farhi aharonov system evolves fast information destroyed generalization examples common described complex dynamical systems composed many interacting components cases system least two different states efficient inefficient one unfortunately efficient state unstable system tend end inefficient state case freeway traffic example well known efficient state highest throughput unstable thereby causing traffic flow break sooner later capacity drop avoid undesired outcome system components must stay sufficiently away instability point requires somewhat slower could reward able sustain relatively high speed long time faster efficient state break trigger another one typically slower situation might characterized tragedy commons hardin even though might counterintuitive sif effect occurs broad variety systems practical purposes many systems monotonic relation inputs outputs true systems break ashby example temperature increased constrained gas constant volume pressure rises yet temperature increases much gas container break leading pressure reduction still without breaking many physical systems thresholds become unstable phase transition different systems state occurs typical situation systems may get overloaded turn dysfunctional cascading effect reduce sif effect seek adjust interactions cause reduction system performance gershenson vehicle traffic case offers interesting example vehicles fast density crosses critical density changes speed affect vehicles generating amplification oscillations lead traffic consequence reduced average speed vehicles slower oscillations avoided average speed higher key critical speed traffic flow changes laminar fif unstable sif changes density however suitably designed adaptive systems driver assistance systems used drive systems towards best possible performance respective context gershenson helbing discussion could argued sif effect overly simplistic requirement two dynamical phases one comes reduced efficiency crossing phase transition point still presented sif effect shows variety interesting phenomena different scales thus say better understanding sif effect useful potentially broad impact challenge lies characterizing nature different types interactions reduce efficiency gershenson identify following necessary conditions sif effect 
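The exploration-exploitation passage above notes that too much exploitation, such as cooling too fast in simulated annealing, traps a search in local optima, so a slower schedule reaches better solutions. Before the necessary conditions for the SIF effect are listed in the next passage, here is a minimal illustration on a toy double-well objective; the function, temperature schedule, step size, and acceptance threshold are illustrative choices, not taken from the cited works.

```python
import numpy as np

def f(x):
    """Double-well with a shallow local minimum near x ~ 0.66 and a deeper one near x ~ -0.76."""
    return x**4 - x**2 + 0.2 * x

def anneal(alpha, T0=0.5, steps=5000, step_size=0.1, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    x, T = 1.0, T0                                   # start inside the shallow basin
    for _ in range(steps):
        x_new = x + step_size * rng.standard_normal()
        dE = f(x_new) - f(x)
        if dE < 0 or rng.random() < np.exp(-dE / max(T, 1e-12)):
            x = x_new
        T *= alpha                                   # geometric cooling
    return x

rng = np.random.default_rng(5)
trials = 200
for alpha, name in [(0.80, "fast cooling (near-greedy)"), (0.999, "slow cooling")]:
    hits = sum(anneal(alpha, rng=rng) < -0.3 for _ in range(trials))
    print(name, hits / trials)   # the slower schedule reaches the deeper minimum far more often
```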
instability internal external system instability amplified sometimes cascading effects transition unstable new stable state leads inefficiency state characterized overloaded worth noting cases single variables may stable perturbations interactions ones trigger instability implies sif cases studied two scales scale components scale system studying components isolation provide enough information reproduce sif effect whether phenomena sif effect described mathematical framework remains seen believe avenue research worth pursuing relevant implications understanding complex systems acknowledgments like thank luis icaza jeni cross tom froese marios kyriazis gleb oshanin sui phang frank schweitzer diamantis sellis simone severini thomas wisdom zenil two anonymous referees useful comments supported conacyt projects sni membership supported erc advanced grant momentum references aguilar bonfil froese gershenson past present future artificial life frontiers robotics url http aharonov van dam kempe landau lloyd regev adiabatic quantum computation equivalent standard quantum computation siam review url http ashby nervous system physical machine special reference origin adaptive behavior mind january url http axelrod dissemination culture model local convergence global polarization journal conflict resolution url http blum roli metaheuristics combinatorial optimization overview conceptual comparison acm comput surv url http braess nagurney wakolbinger paradox traffic planning transportation science november translated original german braess dietrich ein paradoxon aus der verkehrsplanung unternehmensforschung url http cross barr putnam dunbar plaut social network integrative design tech institute built environment colorado state university fort collins usa downing intelligence emerging adaptivity search evolving neural systems mit press cambridge usa dunne lafferty dobson hechinger kuris martinez mclaughlin mouritsen poulin reise stouffer thieltges williams zander parasites affect food web structure primarily increased diversity complexity plos biol url http easley prado hara microstructure flash crash flow toxicity liquidity crashes probability informed trading journal portfolio management winter url http farhi goldstone gutmann sipser tum computation adiabatic evolution tech mit http quanurl gershenson traffic lights complex systems url http gershenson design control systems copit arxives mexico http url http gershenson computing networks general framework contrast neural swarm cognitions paladyn journal behavioral robotics url http gershenson leads supraoptimal performance public transportation systems plos one url http gershenson sigma profile formal tool study organization evolution multiple scales complexity url http gershenson implications interactions science philosophy foundations science url http gershenson pineda public transport arrive time pervasiveness equal headway instability plos one url http gershenson rosenblueth traffic lights intersections complexity url http goodnight rauch sayama aguiar baranger evolution spatial models prudent predator inadequacy organism fitness concept individual group selection http complexity url hardin tragedy commons science url http helbing traffic related systems reviews modern physics helbing economics natural step towards participatory market society evolutionary institutional economics review helbing thinking big data digital revolution participatory market society springer helbing buzna johansson werner pedestrian crowd dynamics experiments 
simulations design solutions transportation science helbing farkas vicsek simulating dynamical features escape panic nature helbing farkas vicsek ing driven mesoscopic system phys rev lett http freezing url helbing huberman coherent moving states highway traffic nature url http helbing supply production networks bullwhip effect business cycles networks interacting machines production organization complex industrial systems biological cells armbruster mikhailov kaneko world scientific singapore helbing mazloumian operation regimes effect controlof traffic intersections european physical journal condensed matter complex systems url http helbing social force model pedestrian dynamics physical review helbing nagel physics traffic gional development contemporary physics http reurl helbing seidel peters principles supply networks production systems econophysics sociophysics chakrabarti chakraborti chatterjee wiley weinheim url http helbing treiber jams waves clusters science url http ehtamo helbing korhonen patient impatient pedestrians spatial game egress congestion phys rev url http jiang helbing shukla inefficient emergent oscillations intersecting driven flows physica statistical mechanics applications url http kesting treiber helbing general model mobil models transportation research record journal transportation research board kirchner nishinari schadschneider schreckenberg simulation competitive egress behavior comparison aircraft evacuation data physica statistical mechanics applications url http helbing traffic lights vehicle flows urban road networks stat mech url http helbing decentralized signal control realistic saturated network traffic tech santa institute kori peters helbing decentralised control material traffic flows networks using physica april url http maunder relationship fishing methods fisheries management estimation maximum sustainable yield fish fisheries url http narang trading inside black box simple guide quantitative trading john wiley sons hoboken usa url http pauly christensen dalsgaard froese torres fishing marine food webs science url http peters seidel helbing logistics networks coping nonlinearity complexity managing complexity insights concepts applications helbing springer berlin heidelberg url http russell norvig artificial intelligence modern approach prentice hall upper saddle river new jersey sachs mueller wilcox bull evolution cooperation quarterly review biology url http seidel hartwig sanders helbing agentbased approach production swarm intelligence introduction applications blum merkle springer berlin url http sellis callahan petrov messer heterozygote advantage natural consequence adaptation diploids proceedings national academy sciences url http siegel combinatorial game theory american mathematical society slobodkin growth regulation animal populations holt reinhart winston new york stanley physics freezing heating nature url http stark tessone schweitzer decelerating microdynamics accelerate macrodynamics voter model phys rev lett url http stark tessone schweitzer slower faster fostering consensus formation heterogeneous inertia advances complex systems steinberg zangwill prevalence braess paradox transportation science url http toppe hasan josupeit subasinghe halwart james aquatic biodiversity sustainable diets role aquatic foods food nutrition security sustainable diets biodiversity directions solutions policy research action burlingame dernini fao rome url http vilone vespignani castellano ordering phase transition axelrod model european 
physical journal condensed matter complex systems url http virgo froese ikegami positive role parasites origins life artificial life alife ieee symposium ieee url http watson mills buckley global adaptation networks selfish components emergent associative memory system scale artificial life url http wolfram new kind sciene http wolfram media url wolpert macready free lunch theorems search tech santa institute url http wolpert macready free lunch theorems optimization ieee transactions evolutionary computation zubillaga cruz aguilar aguilar rosenblueth gershenson measuring complexity traffic lights entropy url http | 9 |
rank deformations hyperbolic lattices feb samuel ballas julien paupert pierre february abstract let negatively curved symmetric space lattice isom show small deformations isometry group negatively curved symmetric space containing remain discrete faithful cocompact case due guichard applies particular version bending deformations providing infnitely many noncocompact lattices admit discrete faithful deformations also produce deformations knot group bending type result applies introduction paper concerns aspect deformation theory discrete subgroups lie groups namely lattices rank semisimple lie groups specifically consider following questions given discrete subgroup rank lie group admit deformations deformations nice properties remain discrete faithful replace larger lie group call deformation continuous family representations interval satisfying inclusion conjugate say locally rigid admit deformations semisimple real lie group without compact factors variety general local rigidity results outline weil proved locally rigid compact locally isomorphic garland raghunathan extended result case lattice rank semisimple group locally isomorphic theorem exclusion necessary generically lattices admit many deformations identification psl allow relate lattices hyperbolic structures turn parameterized classical space surface group case deformations discrete subgroup also classical well understood bers simultaneous uniformization theorem bers setting psl identified discrete group gives rise hyperbolic structure manifold hyperbolic surface deforming corresponds deforming hyperbolic structure deformations abundant according bers simultaneous uniformization parameterized cartesian product two copies classical space surface notice existence deformations violate weil result compact situation generalized case hnr isom setting lattice gives rise hyperbolic structure hnr regarding subgroup gives rise hyperbolic structure deformations correspond deforming hyperbolic structure general setting general theorem guarantees existence deformations however deformation problem studied scannell bsc kapovich kap prove rigidity results returning case many lattices known admit deformations particular thurston showed cusp exists real family deformations called dehn surgery deformations see section geometrically families commuting pair parabolic isometries generating correspdonding cusp group deformed pair loxodromic isometries sharing common axis particular deformations orbifold existence deformations depends subtly topology cusp another case interest context deformations geometric structures projective deformations hyperbolic lattices deformations lattices cocompact lattice hyperbolic manifold hnr contains embedded totally geodesic hypersurface johnson millson showed admits family deformations obtained deformations called bending deformations along introducing algebraic version thurston bending deformations hyperbolic along totally geodesic surface algebraic version versatile generalized variety ways example hypothesis compact may dropped see furthermore construction applied setting lie groups provide rich source examples discussed section addition deformations constructed via bending also instances projective deformations arise via previously mentioned bending technique see bdl hand despite existence bending examples empirical evidence complied cltii suggests existence deformations quite rare closed hyperbolic another direction complex hyperbolic deformations fuchsian groups also extensively studied see survey references 
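The isometry classification just given (elliptic, parabolic, loxodromic, with interior versus boundary elliptic distinguished) feeds directly into the paper's central definition and main theorem, both of which are garbled in this extraction. Below is a hedged LaTeX reconstruction consistent with the surrounding text; the roles of the groups abbreviated "Isom"/"Stab" and the exact hypotheses are read from context and may differ from the published statement.

```latex
% Hedged reconstruction of the garbled statements; hypotheses are paraphrased from context.
\textbf{Definition (parabolic-preserving).}
Let $Y$ be a negatively curved symmetric space and $H=\mathrm{Isom}(Y)$.
A representation $\rho\colon\Gamma\to H$ of a subgroup $\Gamma<H$ is
\emph{parabolic-preserving} if $\rho(\gamma)$ is parabolic
(resp.\ boundary elliptic) whenever $\gamma$ is parabolic
(resp.\ boundary elliptic).

\textbf{Theorem (main theorem, as read from the text).}
Let $Y$ be a negatively curved symmetric space, $X\subset Y$ a totally geodesic
subspace, $H=\mathrm{Isom}(Y)$, and $G$ the stabilizer of $X$ in $H$ (acting on
$X$ by isometries). Let $\Gamma<G$ be a lattice and
$\rho_0\colon\Gamma\hookrightarrow H$ the inclusion. Then every
parabolic-preserving representation $\rho\colon\Gamma\to H$ sufficiently close
to $\rho_0$ is discrete and faithful.
```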
therein notation concerns deformations discrete subgroups recall lie groups isomorphic index turns work clt intricate relationship projective deformations complex hyperbolic deformations finitely generated subgroups based fact lie algebras isomorphic modules group ring specifically prove theorem clt let finitely generated group let smooth point representation variety hom also smooth point hom near real dimensions hom hom equal primary motivation article construct examples complex hyperbolic deformations real hyperbolic lattices nice algebraic geometric properties main result roughly described providing sufficient condition deformation lattice continue faithful discrete image follows condition roughly means parabolic elements remain parabolic see definition precise statement theorem let negatively curved symmetric space totally geodesic subspace denote isom stabg let lattice let denote inclusion representation sufficiently close discrete faithful remarks dehn surgery deformations lattices described either indiscrete showing necessity assumption general pointed elisha falbel theorem still holds proof weaker hypothesis subgroup lattice global fixed point lattice equivalent context see saying thin subgroup lattice subgroup lattice cocompact lattice result consequence following result guichard theorem gui let semisimple lie group finite center rank subgroup finitely generated discrete subgroup denote inclusion map neighborhood hom consisting entirely discrete faithful representations prove theorem section apply section family deformations knot group denoting knot hyperbolic representation holonomy representation complete hyperbolic structure obtain theorem let knot group hyperbolic representation exists family discrete faithful deformations section apply theorem variation bending deformations obtain following result given hyperbolic manifold hnr call hyperbolic representation holonomy representation complete hyperbolic structure conjugation mostow rigidity theorem exist infinitely many cusped hyperbolic whose corresponding hyperbolic representation admits family discrete faithful deformations two groups commensurable wide sense finite index incommensurability conclusion ensures dimension manifolds theorem quite distinct sense obtained taking covering spaces single example discreteness faithfulness deformations section prove theorem stated introduction strategy proof case use invariant horospheres precisely variation schwartz called neutered space see definition cartan classification real semisimple lie groups negatively curved symmetric space hyperbolic space hnk refer reader general properties spaces isometry groups particular isometries spaces roughly classified following types elliptic fixed point parabolic fixed point exactly one loxodromic fixed point exactly two purposes need distinguish elliptic isometries isolated fixed point call elliptic elliptic isometries boundary fixed points call boundary elliptic definition let negatively curved symmetric space isom subgroup representation called every parabolic resp boundary elliptic element parabolic resp boundary elliptic remark parabolic subgroup subgroup fixing point parabolicpreserving representation faithful indeed elements parabolic boundary elliptic lemma let discrete subgroup containing parabolic element denote fix representation preserves horosphere based fix proof well known first parabolic boundary elliptic isometries fixed point preserve horosphere based secondly discrete group hyperbolic isometries loxodromic parabolic elements 
common fixed point therefore consists parabolic possibly boundary elliptic isometries likewise thing remains seen representation fixes fix follows fact pairs isometries common fixed boundary point characterized algebraically namely assumption parabolic common fixed boundary point group virtually nilpotent property preserved representation definition let negatively curved symmetric space isom subgroup subgroup say closed horoball following conditions hold definition given two disjoint horoballs call orthogeodesic pair unique geodesic segment endpoints boundary horospheres perpendicular horospheres note unique geodesic segment call set points endpoints geodesic ray perpendicular intersecting shadow remark since geodesics othogonal exactly geodesics vertex endpoint shadow intersection geodesic cone lemma given two disjoint horoballs orthogeodesic shadow intersection closed ball centered proof note isometry fixing geodesic pointwise preserves hence shadow rotational symmetry around statement follows observing shadow closed bounded clear upper model hnr related siegel domain models hyperbolic spaces horospheres based special point horizontal slices domain geodesics vertical lines see complex case quaternionic case proposition let negatively curved symmetric space isom discrete subgroup subgroup assume exists horoball acts cocompactly horosphere set lengths orthogeodesics pairs discrete values attained finitely many times modulo action proof first note let compact subset whose covers orbit closed since single cusp neighborhood closed denoting projection map hence distance compact set closed set positive attained say point point horoball claim finitely many horospheres realize minimum see suffices show horosphere based intersects finitely many horospheres fix horosphere based consider horosphere intersects let shadow lemma intersection closed ball exists depending radius least indeed radius shadow horosphere tangent consider two horospheres assume disjoint call centers shadows claim distance least consider geodesic connecting since center intersection geodesic ray connecting highest point endpoint disjoint compact geodesic segment contained permuting roles gives opposite situation geodesic connecting move continuously along curve consider associated pencil geodesics connecting see must value intersect contradicting disjointness finally since compact subset whose cover horosphere apply element maps center point meets infinite number classes obtain way sequence distinct points must accumulate compactness consistency corresponding horospheres disjoint previous discussion tells distance centers shadows uniformly bounded contradiction result follows inductively repeating argument removing first layer closest horoballs remark hypothesis cusp stabilizer acts cocompactly horosphere based cusp holds lattice fact holds generally discrete group maximal rank parabolic subgroup proposition let negatively curved symmetric space totally geodesic subspace denote isom stabg lattice representation sufficiently close inclusion exists horoball cusp stabilizer proof since contains parabolic isometry let fix exists horoball based seen lifting embedded horoball neighborhood image quotient see lemma since totally geodesic intersection horoball representation lemma condition definition pair follows proposition sufficiently close horoballs stay disjoint long finite subcollection note since totally geodesic horoballs convex distance beween horoballs based points given distance hence condition definition pair holds 
sufficiently close proposition let negatively curved symmetric space denote isom let subgroup without global fixed point exists horoball subgroup discrete proof first assume simplicity preserve proper totally geodesic subspace either discrete dense corollary dense orbit point dense horoball orbit point entirely contained case dense nonempty interior therefore must discrete preserve strict totally geodesic subspace minimal subspace argument either discrete every orbit point dense consistent horoball must based point since preserved elements hence intersects along horoball conclude lemma let negatively curved symmetric space denote isom let subgroup subgroup representation exists horoball faithful proof let condition definition horoball theorem follows immediately propositions lemma deformations knot group section construct family deformations hyperbolic representation knot group consider knot denote holonomy complete hyperbolic structure recall presence smoothness hypothesis relevant representation varieties theorem implies existence deformations guarantees existence deformations work bdl shows smoothness hypothesis guaranteed presence cohomological condition specifically prove following theorem bdl let orientable complete finite volume hyperbolic manifold fundamental group let holonomy representation complete hyperbolic structure infinitesimally projectively rigid rel boundary smooth point hom conjugacy class smooth point roughly speaking infinitesimally projectively rigid rel boundary cohomological condition says certain induced map twisted cohomology twisted cohomology injection precise definition see work known knot complement infinitesimally rigid rel boundary apply theorems produce deformations however reason representations many cases deformations property fortunately work first author see provides family deformations whose corresponding deformations parabolic preserving theorem let knot group exists family discrete faithful deformations construction family found ultimately constructs curve representations containing hyperbolic representation fact allowing parameter take complex values gives parameter family representations moreover turns taking unit complex number gives family representations reason choice value parameter eigenvalues one peripheral elements power see section give explicit matrices generators hermitian form family using presentation notation section following presentation used wni family representations defined group preserves hermitian form given lemma form signature signature proof computing determinant gives det cos latter function negative positive result follows noting signature corresponding hyperbolic representation lemma representations pairwise proof straightforward computation gives lemma representations proof peripheral subgroup generated wwop notation presentation see unipotent straightforward computation using maple shows eigenvalues hence parabolic since elements also remain parabolic neighborhood previous result along theorem following immediate corollary corollary representations discrete faithful neighborhood would interesting know far get discreteness faithfulness lost bending deformations section construct additional examples arbitrary dimensions proving theorem stated introduction start cusped hyperbolic manifold hnr hyperbolic representation holonomy representation complete hyperbolic structure construct family representations using bending procedure described construction quite general allows one deform representations variety lie groups briefly 
outline use bending produce families representations complex hyperbolic setting define hermitian form via formula diagonal matrix diag signature using form produce projective model hnc given hnc using splitting embed corresponding second factor refer copy using embedding identify intersection stabilizer second factor refer subgroup well known copies inside hnc isometric similarly copies inside conjugate let denote identity component centralizer onedimensional lie group isomorphic written explicitly block form let lattice hnr finite volume hyperbolic simplicity assume thus hyperbolic manifold suppose contains embedded orientable totally geodesic hypersurface applying conjugacy assume thought set real points lattice hypersurface provides decomposition either amalgamated free product hnn extension depending whether separating using decomposition construct family inclusion follows separating consists two connected components fundamental groups respectively case group generated define generating set since centralizes see relations coming amalgamated product decomposition satisfied well defined connected let fundamental group arrive decomposition case generated free letter define generators since centralizes see relations hnn extension satisfied well defined representations constructed called bending deformations along bending deformations clear context work path representations fact deformation pairwise small values proof proof theorem proceed constructing infinitely many commensurability classes cusped hyperbolic manifolds containing totally geodesic hypersurfaces done via well known arithmetic construction see ber rough idea look group integer points orthogonal groups various carefully selected quadratic forms signature quotient hnr cusped hyperbolic containing totally geodesic hypersurface passing carefully selected cover produce parabolic preserving representations via bending construction discuss details specific form observe proof essentially unchanged one group clearly selects different form let let contains unipotent elements see hnr cusped hyperbolic contains immersed totally geodesic suborbifold isomorphic combining work bergeron ber proposition mrs find finite corresponding manifolds index subgroups following properties embedded torus cusps contains totally geodesic hypersurface along bend produce family representations show representatons theorem discrete faithful small values lemma representations obtained bending along proof construction arranged elliptic elements consider furthermore parabolic elements correspond loops freely homotopic one torus cusps discuss element modified one bends let parabolic element let fixed point hnc foliation hnc horospheres centered preserves foliation leafwise furthermore leafwise preservation foliation characterizes parabolic isometries hnc fix thus suffices show preserves foliation regard loop based lift path hnc based let lift contains time intersects lift hnc counted orientation holonomy modified composing heisenberg rotation angle centered acts identity modifications element leafwise preserves foliation horospheres centered also preserves foliation leafwise thus parabolic specifically let two cases unipotent parabolic conjugate isometry whose angle rotation see apanasov detailed description case remark well known see generally complement knot contain embedded totally geodesic hypersurface therefore deformations produced theorem distinct produced theorem references apanasov bending deformations complex hyperbolic surfaces reine angew math ballas 
deformations noncompact projective manifolds algebr geom topol ballas finite volume properly convex deformations knot geom dedicata bdl ballas danciger lee convex projective structures preprint arxiv ballas long constructing thin subgroups commensurable knot group algebr geom topol ballas marquis properly convex bending hyperbolic manifolds preprint arxiv bsc bart scannell generalized cuspidal cohomology problem canad math ber bergeron premier nombre betti spectre laplacien certaines hyperboliques enseign math bers bers simultaneous uniformization bull amer math soc chen greenberg hyperbolic spaces contributions analysis academic press new york clt cooper long thistlethwaite flexing closed hyperbolic manifolds geom topol cltii cooper long thistlethwaite computing varieties representations hyperbolic experimental math vol garland raghunathan fundamental domains lattices rank semisimple lie groups ann math goldman complex hyperbolic geometry oxford mathematical monographs oxford university press gps gromov groups lobachevsky spaces publ math ihes gui guichard groupes dans goupe lie math ann hatcher thurston incompressible surfaces knot complements invent math heusener porti infinitesimal projective rigidity dehn filling geom topol johnson millson deformation spaces associated compact hyperbolic manifolds papers honor mostow sixtieth birthday roger howe progress mathematics kap kapovich deformations representations discrete subgroups math annalen kim parker geometry quaternionic hyperbolic manifolds math proc camb phil soc parker platis complex hyperbolic groups geometry riemann surfaces london mathematical society lecture notes mrs mcreynolds reid stover collisions infinity hyperbolic manifolds math proc cambridge philos soc scannell local rigidity hyperbolic dehn surgery duke math schwartz classification rank one lattices publ math ihes schwartz complex hyperbolic triangle groups proceedings international congress mathematicians vol beijing higher press beijing thurston geometry topology electronic version available http weil discrete subgroups lie groups ann math samuel ballas department mathematics florida state university ballas julien paupert school mathematical statistical sciences arizona state university paupert pierre institut fourier grenoble | 4 |
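The bending construction invoked above is described only in prose; the following LaTeX fragment is a schematic restatement of the standard Johnson–Millson bending formulas in that notation. The symbols A, B, C, s and the path c_t are labels introduced here for illustration (A and B are the fundamental groups of the pieces obtained by cutting along the totally geodesic hypersurface, C the fundamental group of the hypersurface, and c_t a path through the identity in the one-parameter centralizer described above); this is a sketch to be read against the precise conventions of the section it summarizes, not a verbatim excerpt.

```latex
% Bending \rho : \Gamma \to G along a totally geodesic hypersurface N \subset M.
% Separating case: \Gamma \cong A *_C B, with c_t in the centralizer of \rho(C), c_0 = 1.
\rho_t(\gamma) \;=\;
\begin{cases}
  \rho(\gamma)                  & \gamma \in A,\\[2pt]
  c_t\, \rho(\gamma)\, c_t^{-1} & \gamma \in B.
\end{cases}
% Non-separating case: \Gamma \cong A \ast_C is an HNN extension with stable letter s.
\rho_t(\gamma) \;=\; \rho(\gamma) \quad (\gamma \in A),
\qquad
\rho_t(s) \;=\; c_t\, \rho(s).
% In both cases the defining relations of the amalgam / HNN extension are preserved
% because c_t commutes with \rho(C), so \rho_t is well defined and \rho_0 = \rho.
```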
published conference paper iclr importance single directions generalization mar ari david barrett neil rabinowitz matthew botvinick deepmind london arimorcos barrettdavid ncr botvinick bstract despite ability memorize large datasets deep neural networks often achieve good generalization performance however differences learned solutions networks generalize remain unclear additionally tuning properties single directions defined activation single unit linear combination units response input highlighted importance evaluated connect lines inquiry demonstrate network reliance single directions good predictor generalization performance across networks trained datasets different fractions corrupted labels across ensembles networks trained datasets unmodified labels across different hyperparameters course training dropout regularizes quantity point batch normalization implicitly discourages single direction reliance part decreasing class selectivity individual units finally find class selectivity poor predictor task importance suggesting networks generalize well minimize dependence individual units reducing selectivity also individually selective units may necessary strong network performance ntroduction recent work demonstrated deep neural networks dnns capable memorizing extremely large datasets imagenet zhang despite capability dnns practice achieve low generalization error tasks ranging image classification language translation observations raise key question networks generalize others answers questions taken variety forms variety studies related generalization performance flatness minima bounds hochreiter schmidhuber keskar neyshabur dziugaite roy though recent work demonstrated sharp minima also generalize dinh others focused information content stored network weights achille soatto still others demonstrated stochastic gradient descent encourages generalization bousquet elisseeff smith wilson use ablation analyses measure reliance trained networks single directions define single direction activation space activation single unit feature map linear combination units response input find networks memorize training set substantially dependent single directions difference preserved even across sets networks identical topology trained identical data different generalization performance moreover found networks begin overfit become reliant single directions suggesting metric could used signal early stopping corresponding author arimorcos published conference paper iclr also show networks trained batch normalization robust cumulative ablations networks trained without batch normalization batch normalization decreases class selectivity individual feature maps suggesting alternative mechanism batch normalization may encourage good generalization performance finally show despite focus selective single units analysis dnns neuroscience zhou radford britten class selectivity single units poor predictor importance network output pproach study use set perturbation analyses examine relationship network generalization performance reliance upon single directions activation space use measure class selectivity compare selectivity individual directions across networks variable generalization performance examine relationship class selectivity importance ummary models datasets analyzed analyzed three models layer mlp trained mnist convolutional network trained residual network trained imagenet experiments relu nonlinearities applied layers output unless otherwise noted batch normalization used convolutional networks 
ioffe szegedy imagenet resnet accuracy used cases partially corrupted labels zhang used datasets differing fractions randomized labels ensure varying degrees memorization create datasets given fraction labels randomly shuffled assigned images distribution labels maintained true patterns broken erturbation analyses ablations measured importance single direction network computation asking network performance degrades influence direction removed remove single direction clamped activity direction fixed value ablating direction ablations performed either single units mlps entire feature map convolutional networks brevity refer critically ablations performed activation space rather weight space generally evaluate network reliance upon sets single directions asked network performance degrades influence increasing subsets single directions removed clamping fixed value analogous removing increasingly large subspaces within activation space analysis generates curves accuracy function number directions ablated reliant network activation subspaces quickly accuracy drop single directions ablated interestingly found clamping activation unit empirical mean activation across training testing set damaging network performance clamping activation zero see appendix therefore clamped activity zero ablation experiments addition noise analyses perturb units individually measure influence single directions test networks reliance upon random single directions added gaussian noise units zero mean progressively increasing variance scale variance appropriately unit variance noise added normalized empirical variance unit activations across training set uantifying class selectivity quantify class selectivity individual units used metric inspired selectivity indices commonly used systems neuroscience valois britten freedman published conference paper iclr figure memorizing networks sensitive cumulative ablations networks trained mnist layer mlp convolutional network imagenet resnet units layers ablated feature maps last three layers ablated error bars represent standard deviation across random orderings units ablate assad mean activity first calculated across test set selectivity index calculated follows selectivity representing highest mean activity representing mean activity across classes convolutional feature maps activity first averaged across elements feature map metric varies meaning unit average activity identical classes meaning unit active inputs single class note metric perfect measure information content single units example unit little information every class would low class selectivity index however measure discriminability classes along given direction selectivity index also identifies units class tuning properties highlighted analysis dnns zeiler fergus coates zhou radford however addition class selectivity replicate results using mutual information contrast class selectivity highlight units information multiple classes find qualitively similar outcomes appendix also note class viewed highly abstract feature implying results may generalize feature selectivity examine feature selectivity work xperiments eneralization provide rough intuition network reliance upon single directions might related generalization performance consider two networks trained large labeled dataset underlying structure one networks simply memorizes labels input example definition generalize poorly memorizing network learns structure present data generalizes well network minimal description length model larger memorizing network 
structurefinding network result memorizing network use capacity network extension single directions therefore random single direction perturbed probability perturbation interfere representation data higher memorizing network assuming memorizing network uses fraction capacity published conference paper iclr figure memorizing networks sensitive random noise networks trained mnist layer mlp convolutional network noise scaled empirical variance unit training set error bars represent standard deviation across runs log scale figure networks generalize poorly reliant single directions networks identical topology trained unmodified cumulative ablation curves best worst networks generalization error error bars represent standard deviation across models random orderings feature maps per model area cumulative ablation curve normalized function generalization error test whether memorization leads greater reliance single directions trained variety network types datasets differing fractions randomized labels evaluated performance progressively larger fractions units ablated see sections definition curves must begin network training accuracy approximately networks tested fall chance levels directions ablated rule variance due specific order unit ablation experiments performed mutliple random ablation orderings units many models trained datasets corrupted labels definition generalize training accuracy used evaluate model performance consistent intuition found networks trained varying fractions corrupted labels significantly sensitive cumulative ablations trained datasets comprised true labels though curves always perfectly ordered fraction corrupted labels fig next asked whether effect present networks perturbed along random bases test added noise unit see section found networks trained corrupted labels substantially consistently sensitive noise added along random bases trained true labels fig results apply networks forced memorize least portion training set way solve task however unclear whether results would apply networks trained uncorrupted data words solutions found networks topology data different generalization performance exhibit differing reliance upon single directions test trained networks evaluated generalization error reliance single directions networks topology published conference paper iclr trained dataset unmodified individual networks differed random initialization drawn identical distributions data order used training found networks best generalization performance robust ablation single directions networks worst generalization performance fig quantify measured area ablation curve networks plotted function generalization error fig interestingly networks appeared undergo discrete regime shift reliance upon single directions however effect might caused degeneracy set solutions found optimization procedure note also negative correlation present within clusters top left cluster results demonstrate relationship generalization performance single direction reliance merely training corrupted labels instead present even among sets networks identical training data eliance single directions signal model selection relationship raises intriguing question single direction reliance used estimate generalization performance without need test set might used signal early stopping hyperpameter selection experiment early stopping trained mlp mnist measured area cumulative ablation curve auc course training along train test loss interestingly found point training auc began drop point train test loss started 
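Since the cumulative ablation procedure and the class selectivity index above are described only in words, the following Python sketch makes both concrete. The selectivity formula follows the definition given above (highest class-conditional mean activity versus the mean activity over all other classes); everything else, in particular the `accuracy_fn` hook into a trained network's forward pass and the array layout, is a hypothetical interface chosen for illustration rather than code from the paper.

```python
import numpy as np

def class_selectivity(activations, labels):
    """Selectivity index described above: (u_max - u_rest) / (u_max + u_rest),
    where u_max is a unit's highest class-conditional mean activation and
    u_rest is its mean activation over all other classes.

    activations: (n_examples, n_units) non-negative (post-ReLU) activations.
    labels:      (n_examples,) integer class labels.
    Returns a (n_units,) array of values in [0, 1].
    """
    activations = np.asarray(activations, dtype=float)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    # class-conditional mean activity, shape (n_classes, n_units)
    class_means = np.stack([activations[labels == c].mean(axis=0) for c in classes])
    u_max = class_means.max(axis=0)
    u_rest = (class_means.sum(axis=0) - u_max) / (len(classes) - 1)
    return (u_max - u_rest) / (u_max + u_rest + 1e-12)  # guard against dead units

def cumulative_ablation_curve(accuracy_fn, n_units, order=None, seed=0):
    """Accuracy as progressively more single directions are clamped to zero.

    accuracy_fn: hypothetical callable taking a boolean mask of shape
                 (n_units,), True for ablated units, and returning accuracy.
    order:       ablation order; a random permutation if None, matching the
                 randomly ordered cumulative ablations described above.
    """
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_units) if order is None else order
    mask = np.zeros(n_units, dtype=bool)
    curve = [accuracy_fn(mask)]            # no units ablated yet
    for unit in order:
        mask[unit] = True                  # clamp this direction to zero
        curve.append(accuracy_fn(mask))
    return np.array(curve)
```

A full ablation curve of the kind plotted in the figures would then be obtained by averaging `cumulative_ablation_curve` over several random seeds, mirroring the error bars over random ablation orderings reported above.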
diverge fig furthermore found auc test loss negatively correlated spearman correlation fig experiment hyperparameter selection trained models different hyperparemeter settings hyperparameters repeats see appendix found auc test accuracy highly correlated spearman correlation fig performing random subselections hyperparameter settings auc selected one top settings time respectively average difference test accuracy best model selected auc optimal model mean std results suggest single direction reliance may serve good proxy hyperparameter selection early stopping work necessary evaluate whether results hold complicated datasets elationship dropout batch normalization dropout experiments reminiscent using dropout training time upon first inspection dropout may appear discourage networks reliance single directions srivastava however dropout encourages networks robust cumulative ablations dropout fraction used training discourage reliance single directions past point given enough capacity memorizing network could effectively guard dropout merely copying information stored given direction several directions however network encouraged make minimum number copies necessary guard dropout fraction used training case network would robust dropout long redundant directions simultaneously removed yet still highly reliant single directions past dropout fraction used training test whether intuition holds trained mlps mnist dropout probabilities corrupted unmodified labels consistent observation arpit found networks dropout trained randomized labels required epochs converge converged worse solutions higher dropout probabilities suggesting dropout indeed discourage memorization however networks trained corrupted unmodified labels exhibited minimal loss training accuracy single directions removed dropout fraction used training past point networks trained randomized labels much sensitive cumulative ablations trained unmodified labels fig interestingly networks trained unmodified labels different dropout fractions similarly robust cumulative ablations results suggest dropout may serve effective regularizer prevent memorization randomized labels prevent single directions past dropout fraction used training published conference paper iclr figure single direction reliance signal hyperparameter selection early stopping train blue test purple loss along normalized area cumulative ablation curve auc green course training mnist mlp loss cropped make divergence visible auc test loss convnet negatively correlated course training auc test accuracy positively corrleated across hyperparameter sweep hyperparameters repeats figure impact regularizers networks reliance upon single directions cumulative ablation curves mlps trained unmodified fully corrupted mnist dropout fractions colored dashed lines indicate number units ablated equivalent dropout fraction used training note curves networks trained corrupted mnist begin drop soon past dropout fraction trained cumulative ablation curves networks trained without batch normalization error bars represent standard deviation across model instances random orderings feature maps per model published conference paper iclr figure batch normalization decreases class selectivity increases mutual information distributions class selectivity mutual information networks trained blue without batch normalization purple distribution comprises model instances trained uncorrupted batch normalization contrast dropout batch normalization appear discourage reliance upon single directions test 
trained convolutional networks without batch normalization measured robustness cumulative ablation single directions networks trained batch normalization consistently substantially robust ablations trained without batch normalization fig result suggests addition reducing covariate shift proposed previously ioffe szegedy batch normalization also implicitly discourages reliance upon single directions elationship class selectivity importance results thus far suggest networks less reliant single directions exhibit better generalization performance result may appear light extensive past work neuroscience deep learning highlights single units feature maps selective particular features classes zeiler fergus coates zhou radford test whether class selectivity single directions related importance directions network output first asked whether batch normalization found discourage reliance single directions also influences distribution information class across single directions used selectivity index described see section quantify discriminability classes based activations single feature maps across networks trained without batch normalization interestingly found networks trained without batch normalization exhibited large fraction feature maps high class class selectivity feature maps networks trained batch normalization substantially lower fig contrast found batch normalization increases mutual information present feature maps fig results suggest batch normalization actually discourages presence feature maps concentrated class information rather encourages presence feature maps information multiple classes raising question whether highly selective feature maps actually beneficial next asked whether class selectivity given unit predictive impact network loss ablating said unit since experiments performed networks trained unmodified labels test loss used measure network impact mlps trained mnist found slight minor correlation spearman correlation unit class selectivity impact ablation many highly selective units minimal impact ablated fig analyzing convolutional networks trained imagenet found across layers ablation highly selective feature maps impactful ablation feature maps figs fact networks actually negative correlation class selectivity feature map importance spearman correlation fig test whether relationship calculated correlation class selectivity importance separately layer found vast majority negative correlation driven early dead feature maps feature maps activity would selectivity index published conference paper iclr convnet mnist mlp imagenet resnet figure selective directions similarly important impact ablation function class selectivity mnist mlp convolutional network imagenet resnet show regression lines layer separately layers later layers exhibited relationship class selectivity importance figs interestingly three networks ablations early layers impactful ablations later layers consistent theoretical observations raghu additionally performed experiments mutual information place class selectivity found qualitatively similar results appendix final test compared class selectivity filter weights metric found successful predictor feature map importance model pruning literature consistent previous observations found class selectivity largely unrelated filter weights anything two negatively correlated fig see appendix details taken together results suggest class selectivity good predictor importance imply class selectivity may actually detrimental network performance work necessary examine 
whether class feature selectivity harmful helpful network performance elated work much work directly inspired zhang replicate results using partially corrupted labels imagenet demonstrating memorizing networks reliant single directions also provide answer one questions posed empirical difference networks memorize generalize work also related work linking generalization sharpness minima hochreiter schmidhuber keskar neyshabur studies argue flat minima generalize better sharp minima though dinh recently found sharp minima also generalize well consistent work flat minima correspond solutions perturbations along single directions little impact network output another approach generalization contextualize information theory example achille soatto demonstrated networks trained randomized labels store published conference paper iclr information weights trained unmodfied labels notion also related tishby argues training networks proceed first loss minimization phase followed compression phase work consistent networks information stored weights less compressed networks reliant upon single directions compressed networks recently arpit analyzed variety properties networks trained partially corrupted labels relating performance capacity also demonstrated dropout properly tuned serve effective regularizer prevent memorization however found dropout may discourage memorization discourage reliance single directions past dropout probability found class selectivity poor predictor unit importance observation consistent variety recent studies neuroscience one line work benefits neural systems robust noise explored barrett montijn another set studies demonstrated presence neurons multiplexed information many stimuli shown task information decoded high accuracy populations neurons low individual class selectivity averbeck rigotti mante raposo morcos harvey zylberberg perturbation analyses performed variety purposes model pruning literature many studies removed units goal generating smaller models similar performance anwar molchanov recent work explored methods discovering maximally important directions raghu variety studies within deep learning highlighted single units selective features classes zeiler fergus coates zhou radford agrawal additionally agrawal analyzed minimum number sufficient feature maps sorted measure selectivity achieve given accuracy however none studies tested relationship unit class selectivity information content necessity network output bau quantified related metric concept selectivity across layers networks finding units get depth consistent observations regarding class selectivity see appendix however also observed correlation number units performance dataset across networks architectures difficult compare results directly data used substantially different method evaluating selectivity nevertheless note bau measured absolute number units across networks different total numbers units depths relationship number units network performance may therefore arise result larger number total units fixed fraction units increased depth observed selectivity increases depth iscussion future work work taken empirical approach understand differentiates neural networks generalize experiments demonstrate generalization capability related network reliance single directions networks trained corrupted uncorrupted data course training single network also show batch normalization highly successful regularizer seems implicitly discourage reliance single directions one clear extension work use observations 
construct regularizer directly penalizes reliance single directions happens obvious candidate regularize single direction reliance dropout variants shown appear regularize single direction reliance past dropout fraction used training section interestingly results suggest one able predict network generalization performance without inspecting validation test set observation could used several interesting ways first situations labeled training data sparse testing networks reliance single directions may provide mechanism assess generalization performance without published conference paper iclr ing training data used validation set second using computationally cheap empirical measures single direction reliance evaluating performance single ablation point sparsely sampling ablation curve metric could used signal hyperparameter selection shown metric viable simple datasets section work necessary evaluate viability complicated datasets another interesting direction research would evaluate relationship single direction reliance generalization performance across different generalization regimes work evaluate generalization train test data drawn distribution stringent form generalization one test set drawn unique overlapping distribution train set extent single direction reliance depends overlap train test distributions also worth exploring future research work makes potentially surprising observation role individually selective units dnns found class selectivity single directions largely uncorrelated ultimate importance network output also batch normalization decreases class selectivity individual feature maps result suggests highly class selective units may actually harmful network performance addition implies methods understanding neural networks based analyzing highly selective single units finding optimal inputs single units activation maximization erhan may misleading importantly measured feature selectivity unclear whether results generalize featureselective directions work necessary clarify points acknowledgments would like thank chiyuan zhang ben poole sam ritter avraham ruderman adam santoro critical feedback helpful discussions eferences alessandro achille stefano soatto emergence invariance disentangling deep representations url http pulkit agrawal ross girshick jitendra malik analyzing performance multilayer neural networks object recognition eccv guillaume alain yoshua bengio understanding intermediate layers using linear classifier probes url http sajid anwar kyuyeon hwang wonyong sung structured pruning deep convolutional neural networks devansh arpit nicolas ballas david krueger emmanuel bengio maxinder kanwal tegan maharaj asja fischer aaron courville yoshua bengio simon closer look memorization deep networks issn url http bruno averbeck peter latham alexandre pouget neural correlations population coding computation nature reviews neuroscience may issn doi url http david barrett sophie christian machens optimal compensation neuron loss elife issn doi david bau bolei zhou aditya khosla aude oliva antonio torralba network dissection quantifying interpretability deep visual representations doi url http olivier bousquet elisseeff stability generalization journal machine learning research jmlr mar issn url http published conference paper iclr kenneth britten michael shadlen william newsome anthony movshon analysis visual motion comparison neuronal psychophysical performance journal neuroscience adam coates andrej karpathy andrew emergence features unsupervised feature learning nips issn 
doi url http coateskarpathyng russell valois william yund norva hepler orientation direction selectivity cells macaque visual cortex vision research laurent dinh razvan pascanu samy bengio yoshua bengio sharp minima generalize deep nets gintare karolina dziugaite daniel roy computing nonvacuous generalization bounds deep stochastic neural networks many parameters training data url http dumitru erhan yoshua bengio aaron courville pascal vincent visualizing features deep network technical report david freedman john assad representation visual categories parietal cortex nature sep issn doi url http kaiming xiangyu zhang shaoqing ren jian sun deep residual learning image recognition issn doi sepp hochreiter schmidhuber flat minima neural comput sergey ioffe christian szegedy batch normalization accelerating deep network training reducing internal covariate shift arxiv url http nitish shirish keskar dheevatsa mudigere jorge nocedal mikahail smelyanskiy ping tak peter tang training deep learning generalization gap sharp minima iclr quoc marc aurelio ranzato rajat monga matthieu devin kai chen greg corrado jeff dean andrew building features using large scale unsupervised learning international conference machine learning issn doi hao asim kadav igor durdanovic hanan samet hans peter graf pruning filters efficient convnets valerio mante david sussillo krishna shenoy william newsome computation recurrent dynamics prefrontal cortex nature november issn doi url http pavlo molchanov stephen tyree tero karras timo aila jan kautz pruning convolutional neural networks resource efficient inference iclr jorrit montijn guido meijer carien lansink cyriel pennartz neural codes robust variability multidimensional coding perspective cell reports issn doi url http ari morcos christopher harvey variability population dynamics evidence accumulation cortex nature neuroscience october issn doi url http published conference paper iclr behnam neyshabur srinadh bhojanapalli david mcallester nathan srebro exploring generalization deep learning url https alec radford rafal jozefowicz ilya sutskever learning generate reviews discovering sentiment url http maithra raghu ben poole jon kleinberg surya ganguli jascha expressive power deep neural networks url http maithra raghu justin gilmer jason yosinski jascha svcca singular vector canonical correlation analysis deep understanding improvement url http david raposo matthew kaufman anne churchland neural population supports evolving demands nature neuroscience november issn doi url http mattia rigotti omri barak melissa warden wang nathaniel daw earl miller stefano fusi importance mixed selectivity complex cognitive tasks nature issn doi url http ravid naftali tishby opening black box deep neural networks via information arxiv url http samuel smith quoc understanding generalization stochastic gradient descent url http nitish srivastava geoffrey hinton alex krizhevsky ilya sutskever ruslan salakhutdinov dropout simple way prevent neural networks overfitting journal machine learning research jmlr issn ashia wilson rebecca roelofs mitchell stern nathan srebro benjamin recht marginal value adaptive gradient methods machine learning url http yonghui mike schuster zhifeng chen quoc mohammad norouzi wolfgang macherey maxim krikun yuan cao qin gao klaus macherey google neural machine translation system bridging gap human machine translation arxiv preprint matthew zeiler rob fergus visualizing understanding convolutional networks computer visioneccv issn doi url http chiyuan zhang 
samy bengio moritz hardt benjamin recht oriol vinyals understanding deep learning requires rethinking generalization url http bolei zhou aditya khosla agata lapedriza aude oliva antonio torralba object detectors emerge deep scene cnns url http joel zylberberg untuned irrelevant role untuned neurons sensory information coding september url https published conference paper iclr ppendix omparison ablation methods remove influence given direction value fixed otherwise modified longer dependent input however choice fixed value substantial impact example value clamped one highly unlikely given distribution activations across training set network performance would likely suffer drastically compare two methods ablating directions ablating zero ablating empirical mean training set using convolutional networks trained performed cumulative ablations either ablating zero feature map mean means calculated independently element feature map found ablations zero significantly less damaging ablations feature map mean fig interestingly corresponds ablation strategies generally used model pruning literature anwar molchanov figure ablation zero ablation empirical feature map mean raining details mnist mlps class selectivity generalization early stopping dropout experiments layer contained units respectively networks trained epochs exception dropout networks trained epochs convnets convolutional networks trained epochs layer sizes strides respectively kernels hyperparameter sweep used section learning rate batch size evaluated using grid search imagenet resnet residual networks trained imagenet using distributed training workers batch size steps blocks structured follows stride filter sizes output channels training partially corrupted labels use data published conference paper iclr figure class selectivity increases depth class selectivity distributions function depth imagenet figure class selectivity uncorrelated relationship class selectivity filter weights imagenet tation would dramatically increasing effective training set size hence prevented memorization epth dependence class selectivity evaluate distribution class selectivity function depth networks trained fig imagenet fig selectivity increased function depth result consistent bau show increases depth also consistent alain bengio show depth increases linear decodability class information though evaluate linear decodability based entire layer rather single unit elationship class selectivity filter weight norm importantly results lack relationship class selectivity importance suggest directions less important network output suggest directions predictable merely suggest class selectivity good predictor importance final test compared class selectivity published conference paper iclr filter weights metric found strongly correlated impact removing filter model pruning literature since filter weights predictive impact feature map removal class selectivity also good predictor two metrics correlated imagenet network found correlation filter weights class selectivity fig network found actually negative correlation fig elationship mutual information importance mnist mlp convnet imagenet resnet figure mutual information good predictor unit importance impact ablation function mutual information mnist mlp convolutional network imagenet resnet show regression lines layer separately examine whether mutual information contrast class selectivity highlights units information multiple classes good predictor importance performed experiments section mutual information place 
class selectivity found results little less consistent appears relationship early late layers mutual information generally poor predictor unit importance fig | 2 |
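The appendix experiments above substitute mutual information between a unit's activity and the class label for class selectivity; a minimal plug-in estimator is sketched below. The equal-width binning of activations and the bin count are illustrative choices made here, since the text does not specify how the mutual information was estimated.

```python
import numpy as np

def unit_class_mutual_information(activations, labels, n_bins=20):
    """Plug-in estimate (in bits) of I(unit activation; class label) per unit,
    obtained by discretizing each unit's activations into equal-width bins and
    forming the joint histogram with the labels.  The bin count is an
    illustrative choice, not one taken from the paper.
    """
    activations = np.asarray(activations, dtype=float)
    labels = np.asarray(labels)
    n_examples, n_units = activations.shape
    classes, label_idx = np.unique(labels, return_inverse=True)
    mi = np.zeros(n_units)
    for u in range(n_units):
        a = activations[:, u]
        edges = np.linspace(a.min(), a.max() + 1e-12, n_bins + 1)
        bins = np.clip(np.digitize(a, edges) - 1, 0, n_bins - 1)
        # joint distribution over (activation bin, class)
        joint = np.zeros((n_bins, len(classes)))
        np.add.at(joint, (bins, label_idx), 1.0)
        joint /= n_examples
        pa = joint.sum(axis=1, keepdims=True)   # marginal over bins
        pc = joint.sum(axis=0, keepdims=True)   # marginal over classes
        nz = joint > 0
        mi[u] = np.sum(joint[nz] * np.log2(joint[nz] / (pa @ pc)[nz]))
    return mi
```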
dec interactive visualization persistence modules michael matthew abstract goal work extend standard persistent homology pipeline exploratory data analysis persistence setting practical computationally efficient way end introduce rivet software tool visualization persistence modules present mathematical foundations tool rivet provides interactive visualization barcodes affine slices persistence module also computes visualizes dimension vector space bigraded betti numbers heart computational approach novel data structure based planar line arrangements perform fast queries find barcode slice present efficient algorithm constructing data structure establish bounds complexity contents introduction algebra preliminaries augmented arrangements persistence modules querying augmented arrangement computing arrangement computing barcode templates cost computing storing augmented arrangement speeding computation augmented arrangement preliminary runtime results conclusion appendix references notation index columbia university new york usa mlesnick olaf college northfield usa introduction overview topological data analysis tda relatively new branch statistics whose goal apply topology develop tools studying global geometric features data persistent homology one central tools tda provides invariants data called barcodes associating data filtered topological space applying standard topological algebraic techniques last years persistent homology widely applied study scientific data subject extensive theoretical work many data sets interest point cloud data noise density single filtered space rich enough invariant encode structure interest data motivates consideration multidimensional persistent homology basic form associates data topological space simultaneously equipped two filtrations persistent homology yields algebraic invariants data far complex setting new methodology thus required working invariants practice tda community widely appreciated need practical data analysis tools handling multidimensional persistent homology however whereas community quick develop fast algorithms good publicly available software persistent homology comparatively slow extend setting indeed date best knowledge publicly available software extends usual persistent homology pipeline exploratory data analysis multidimensional persistence work seeks address gap case persistence building ideas presented introduce practical tool working persistent homology exploratory data analysis applications develop mathematical algorithmic foundations tool tool used much way persistent homology used tda offers user significantly information flexibility standard persistent homology tool call rivet rank invariant visualization exploration tool allows user dynamically navigate collection persistence barcodes derived persistence module contrast previous visualization tools persistent homology presented static displays invariants considered paper larger complex standard persistence invariants essential user nice way browsing invariant computer screen expect tda moves towards use richer invariants practical applications interactive visualization paradigms play increasingly prominent role tda workflow possible extend approach persistence tational cost however number practical challenges least designing implementing suitable graphical user interface since already plenty keep busy restrict attention case paper remainder section review multidimensional persistent homology introduce rivet visualization paradigm provide overview rivet mathematical 
computational underpinnings given length paper expect readers content limit first reading introduction invite begin way availability rivet software plan make rivet software publicly available http within next months meantime demo rivet accessed website multidimensional filtrations persistence modules start introduction rivet defining multidimensional filtrations persistence modules reviewing standard persistent homology pipeline tda throughout freely use basic language category theory accessible introduction language context persistence theory found notation categories let denote category whose objects functors whose objects natural transformations poset category functor obj let obj let denote image unique morphism homc let simp denote category simplicial complexes simplicial maps fixed field let vect denote category spaces linear maps define partial order taking let denote corresponding poset category set let denote set finite multidimensional filtrations define filtration functor simp inclusion usually refer filtration bifiltration filtrations basic topological objects study multidimensional persistence computational setting course work filtrations specified finite amount data let introduce language notation filtrations say filtration stabilizes exists whenever write fmax say simplex fmax appears finite number times finite set appears finite number times minimal unique denote call set grades appearance say filtration finite stabilizes fmax finite simplex fmax appears finite number times finite define size computational setting represent finite filtration memory storing simplicial complex fmax along set fmax multidimensional persistence modules define persistence module functor vect say pointwise finite dimensional dim explain section category vectr persistence modules isomorphic category modules suitable rings view may define presentations persistence modules see section details let simp vect denote ith simplicial homology functor coefficients finite filtration exists finite presentation implies particular see section details presentations barcodes persistence modules shows persistent homology modules decompose essentially unique way indecomposable summands isomorphism classes summands parameterized intervals thus may associate persistence module multiset intervals records isomorphism classes indecomposable summands call barcode general refer multiset intervals barcode barcode finitely presented persistence module consists finite set intervals form barcode form represented persistence diagram multiset points right side fig depicts barcode together corresponding persistence diagram figure point cloud circle left persistence barcode right obtained via construction barcode visualized either directly plotting interval green bars oriented vertically via persistence diagram green dots right single long interval barcode encodes presence cycle point cloud persistence barcodes data standard persistent homology pipeline data analysis associates barcode data set regard interval barcode topological feature data interpret length interval measure robustness feature pipeline constructing proceeds three steps associate data set finite filtration apply obtain finitely presented persistence module take pipeline construction barcodes data quite flexible principle work data set kind may consider number choices filtration different choices topologically encode different aspects structure data barcodes readily computed practice see section details persistence barcodes finite metric spaces one standard choice 
tda pipeline let finite metric space let rips filtration defined follows rips maximal simplicial complex pairs rips empty simplicial complex informally long intervals barcodes correspond cycles data see fig illustration stability following result shows barcode invariants finite metric spaces robust certain perturbations data theorem stability barcodes finite metric spaces finite metric spaces dgh dgh denotes distance denotes bottleneck distance barcodes see definitions see also generalization theorem compact metric spaces multidimensional persistent homology many cases interest filtration sufficient capture structure interest data cases naturally led associate data filtration case applying homology field coefficients filtration yields persistence module question arises whether associate filtration generalization barcode way useful data analysis follows motivate study multidimensional persistence describing one natural way bifiltrations arise study finite metric spaces ways bifiltrations arise tda applications discussed example discuss algebraic difficulties involved defining multidimensional generalization barcode bifiltrations finite metric spaces spite theorem tells barcodes rips finite metric space well behaved certain sense invariants couple important limitations first highly unstable addition removal outliers see fig illustration second relatedly exhibits density barcodes insensitive interesting structure high density regions see fig address issues proposed associate bifiltration study persistent homology bifiltration describe proposal depends choice bandwidth parameter simple variant best knowledge appeared elsewhere construction bifiltration proposed depends choice codensity function function whose value high dense points low outliers example proposes take neighbors density function figure barcodes unstable respect addition outliers insensitive interesting structure high density regions data thus though point clouds share densely sampled circle differ addition outliers quite different one another long interval appearing contrast point cloud contains densely sampled circle longest intervals similar length general choice density function depends choice bandwidth parameter given may define filtration rips taking rips rips rips rips thus collection simplicial complexes together inclusions yields functor rips rop simp rop naturally isomorphic example isomorphism sending object upon identification two categories may regard rips bifiltration note definition rips fact makes sense function discussed numerous possibilities interesting choices study data aside density estimates obtain variant rips first define graph subgraph rips consisting vertices whose degree least define maximal simplicial complex upon identification rop collection simplicial complexes defines bifiltration richer algebraic invariant rips particular sensitive interesting structure high density regions way rips barcodes persistence modules explain algebraic difficulties defining barcode persistence module closely following discussion case finitely presented persistence modules also decompose essentially unique way indecomposable summands follows easily standard formulation theorem however consequence standard quiver theory results described example set isomorphism classes indecomposable persistence modules contrast case extremely particular dimension vector space finitely presented indecomposable persistence module arbitrarily large thus principle could define barcode persistence module multiset isomorphism classes 
indecomposables case invariant typically useful data visualization exploration way barcode general seems purposes tda entirely satisfactory way defining barcode persistence module even consider invariants incomplete invariants take value two modules three simple invariants multidimensional persistence module nevertheless possible define simple useful computable invariants multidimensional persistence module tool rivet computes visualizes three invariants persistence module dimension function function maps dim fibered barcode collection barcodes affine slices multigraded betti numbers dimension function simple intuitive easily visualized invariant unstable provides information persistent features next two subsections introduce fibered barcode multigraded betti numbers example fully faithful functor category representations wild quiver vectr maps indecomposables indecomposables see also study possible isomorphism types multidimensional persistence module rank invariant fibered barcodes rank invariant let denote set pairs let denote integers let persistence module following define rank rank invariant rank rank using structure theorem persistence modules easy check persistence module rank determine persistence module rank encode isomorphism class see example example nevertheless rank invariant capture interesting first order information structure persistence module observed persistence module rank carries data family barcodes obtained restricting affine line call parameterized family barcodes fibered barcode particular persistence module fibered barcode family barcodes follows give definition fibered barcode persistence module space affine lines slope let denote collection unparameterized lines possibly infinite slope let denote collection unparameterized lines finite slope note submanifold boundary affine grassmannian dimension induced topology homeomorphic sense appropriate think family lines standard duality described section provides identification make extensive use duality later paper duality extend vertical lines natural way definition fibered barcode let denote associated full subcategory inclusion induces functor persistence module define define interval subset whenever also isomorphic structure theorem persistence modules yields definition barcode collection intervals define fibered barcode map sends line barcode proposition rank determine proof exists unique line clearly rank collection rank invariants rank determine noted persistence module rank determine result follows stability fibered barcode adapting arguments introduced recent note claudia landi establishes stable two senses results fact hold persistent homology modules arbitrary dimension however paper primarily interested case landi first stability result says line neither horizontal vertical map lipschitz continuous respect interleaving distance persistence modules bottleneck distance barcodes lipschitz constant depends slope slope grows larger slope deviates tending towards infinity slope approaches refer stability result external stability also presents internal stability result tells finitely presented map continuous suitable sense lines neither horizontal vertical fact result says something stronger put loosely closer slope line stable perturbations sum stability results tell persistence module line neither horizontal vertical barcode robust perturbations diagonal robust multigraded betti numbers next briefly introduce multigraded betti numbers called bigraded betti numbers case persistence see section precise 
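For reference, the two invariants just discussed can be written in display form (the notation is mine, since the original symbols were lost in extraction): the rank invariant of a persistence module M assigns to each comparable pair a <= b in R^2 the rank of the internal map,

\[
\operatorname{rank}(M)(a,b) \;=\; \operatorname{rank}\bigl(M(a \le b)\colon M_a \to M_b\bigr),
\]

and the fibered barcode assigns to each affine line L of non-negative slope the barcode of the restriction of M to L,

\[
\mathcal{B}(M)(L) \;=\; \mathcal{B}\bigl(M\big|_L\bigr).
\]

As the proposition above records, these two invariants determine one another, since the rank of M along any comparable pair lying on L can be read off from the barcode of the restriction to L.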
definition examples finitely presented persistence module ith graded betti number function follows hilbert basis theorem classical theorem commutative algebra identically interest especially interested paper number generators relations respectively index minimal presentation see section analogous interpretation terms minimal resolution neither invariants determines invariants intimately connected connection case plays central role present work one main mathematical contributions project fast algorithm computing multigraded betti numbers persistence module algorithm described companion paper rivet visualization paradigm overview propose use fibered barcode exploratory data analysis much way barcodes typically used particular requires good way visualizing fibered barcode though discretizations fibered barcodes used shape matching applications best knowledge prior work visualization fibered barcodes work introduces paradigm called rivet interactive visualization fibered barcode persistence module presents efficient computational framework implementing paradigm paradigm also provides visualization dimension function bigraded betti numbers module visualizations three invariants complement work concert visualizations dimension function bigraded betti numbers provide coarse global view structure persistence module visualization fibered barcodes focuses single barcode time provides sharper local view give brief description rivet visualization paradigm additional details appendix given persistence module rivet allows user interactively select line via graphical interface software displays barcode user moves line clicking dragging displayed barcode updated real time rivet interface consists two main windows line selection window left persistence diagram window right fig shows screenshots rivet single choice four different lines line selection window given finitely presented persistence module line selection window plots rectangle containing union supports functions greyscale shading point rectangle represents dim unshaded dim larger dim corresponds darker shading scrolling mouse brings popup box gives precise value dim points supports marked green red yellow dots respectively area dot proportional corresponding function value dots translucent example overlaid red green dots appear brown intersection allows user read values betti numbers points support one functions figure screenshots rivet single choice persistence module four different lines rivet provides visualizations dimension vector space greyscale shading betti numbers green red yellow dots barcodes slices purple line selection window contains blue line slope endpoints boundary displayed region line represents choice intervals barcode displayed purple offset line perpendicular direction persistence diagram window persistence diagram window right persistence diagram representation displayed represent via persistence diagram need choose parameterization regard functor display choice parameterizations described appendix betti numbers multiplicity point persistence diagram indicated area corresponding dot interactivity user click drag blue line left window mouse thereby changing choice clicking blue line away endpoints dragging moves line direction perpendicular slope keeping slope constant clicking dragging endpoint line moves endpoint keeping fixed allows user change slope line line moves interval representation left window persistence diagram representation right window updated real time querying fibered barcode algorithm fast computation 
betti numbers persistence modules described performs efficient computation dimension function subroutine thus explaining computational underpinnings rivet focus rivet interactive visualization fibered barcode visualization paradigm needs update plot real time move line must able quickly access choice paper introduce efficient data structure augmented arrangement perform fast queries determine present theorem guarantees query procedure correctly recovers describe efficient algorithm computing structure augmented arrangement consists line arrangement cell decomposition induced set intersecting lines together collection pairs stored call barcode template explain section defined terms barcode discrete persistence module derived queries briefly describe query barcodes details given section noted standard duality described section provides identification simplicity let restrict attention generic case lies general case similar treated section obtain pair push points pair onto line moving upwards rightwards plane along horizontal vertical lines gives pair points pushl pushl take pushl theorem central mathematical result underlying rivet paradigm tells pushl pushl see fig illustration thus obtain barcode suffices identify cell compute pushl pushl figure illustration recover barcode barcode template pushing points pair barcode template onto example consists two disjoint intervals complexity results computing storing querying augmented arrangement computed computing via query far efficient computing scratch typical persistence modules arising data query performed real time desired time cost computing storing reasonable following two theorems provide theoretical basis claims persistence module let supp let number unique coordinates respectively points supp supp call coarseness theorem computational cost querying augmented arrangement lying query time log denotes number intervals lines query time log arbitrarily small perturbation theorem bifiltration size size algorithm computes using log elementary operations storage prove theorem section theorem section keep exposition brief introduction assumed statement theorem persistent homology module bifiltration however algorithm computing augmented arrangements handle purely algebraic input see section give general complexity bounds algebraic setting section coarsening interpretation theorem fixed choice size time required compute via application standard persistence algorithm thus theorem indicates one would expect computation storage expensive computation storage fixed worst case thus worst case bounds size time compute grow like surface may appear problematic practical applications however explain section always employ simple coarsening procedure approximate module small constant say encodes exactly view landi external stability result encodes approximately details coarsening given section computation augmented arrangement algorithm computing decouples three main parts computing constructing line arrangement computing barcode template next say words computing bigraded betti numbers noted one main mathematical contributions underlying rivet fast algorithm computing bigraded betti numbers persistence module rivet provide visualization betti numbers also makes essential use betti numbers constructing persistent homology module bifiltration simplices algorithm computes bigraded betti numbers time see section section paper present preliminary experimental results performance algorithm computing bigraded betti numbers indicate cost algorithm reasonable practice 
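In symbols, the query just sketched takes the following form (my notation; interval-endpoint and multiplicity conventions follow the fuller statement later in the paper): if e is the 2-cell of the line arrangement whose closure contains the point dual to L, and P(e) is the barcode template stored at e, then

\[
\mathcal{B}(M)^L \;=\; \bigl\{\, \bigl[\operatorname{push}_L(a),\ \operatorname{push}_L(b)\bigr] \;:\; (a,b) \in \mathcal{P}(e) \,\bigr\},
\]

where push_L(z) denotes the least point of L lying coordinate-wise above z. A query thus reduces to one point-location step in the arrangement followed by constant-time work per interval of the barcode.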
computing line arrangement second phase computation constructs line arrangement underlying line arrangements object intense study computational geometers decades machinery constructing working line arrangements practice algorithms constructing querying leverage machinery see section section computing barcode templates third phase computation computes barcode templates stored noted defined terms barcode certain persistence module compute need compute expensive part computation theory practice section introduce core algorithm based vineyard algorithm updating persistent homology computations section present modification algorithm much faster practice fact explained section algorithm computing barcode templates embarrassingly parallelizable computational experiments section presents preliminary results performance algorithm computing augmented arrangements explained present implementation rivet yet fully optimized timing results regarded loose upper bounds achieved using algorithms paper still results demonstrate even current code computation augmented arrangement bifiltration containing millions simplices feasible standard personal computer provided employ modest coarsening thus current implementation already performs well enough used analysis modestly sized real world data sets implementation work including introduction parallelization expect rivet scale well enough useful many settings persistence currently used exploratory data analysis outline conclude introduction outline remainder paper section reviews basic algebraic facts persistence modules minimal presentations graded betti numbers also discuss connection persistence modules discretizations section defines augmented arrangement persistence module sections give main result querying barcodes theorem section describe stored memory apply theorem give algorithm querying remaining sections introduce algorithm computing first section specifies persistence modules represented input algorithm explains algorithm computing section explains core algorithm computing barcode templates completes specification algorithm computing basic form section analyzes time space complexity algorithm computing section describes several practical strategies speed computation section presents preliminary timing results computation appendix expands introduction rivet interface given section providing additional details algebra preliminaries section present basic algebraic definitions facts need define study augmented arrangements persistence modules description persistence modules section defined persistence module object functor category vectr give description vectr ring let ring analogue usual polynomial ring variables exponents indeterminates allowed take arbitrary rather values example element formally defined monoid ring monoid let denote monomial xann let ideal generated set since field subring comes naturally equipped structure space define direct sum decomposition space action satisfies form category whose morphisms module homomorphisms obvious isomorphism vectr category may identify two categories henceforth refer persistence modules short remark rule familiar definitions constructions modules make sense category vectr example reader may check define submodules quotients direct sums tensor products resolutions tor functors vectr next explain also define free presentations free presentations sets define set pair grw set function grw formally may regard set pairs grw sometimes make use representation often abuse notation write mean set also clear context 
abbreviate grw say subset homogeneous clearly may regard set shifts modules define example obtained shifting vector spaces one left one free usual notion free module extends setting follows set let free identify set generators free obvious way free free set equivalently define free satisfies certain universal property see homogeneous subset free let hyi denote submodule generated matrix representations morphisms free modules let finite graded sets ordered underlying sets morphism free free represent matrix coefficients define unique solution xgr free free projection define presentations presentation pair set free homogeneous free denote presentation exists presentation finite say finitely presented note inclusion free induces morphism free free denote morphism example consider persistence modules induced linear maps equal ranks hence isomorphic however rank rank shows rank invariant completely determine isomorphism type persistence module minimal presentations let say presentation minimal hyi free ker free following proposition variant lemma proved way makes clear minimal presentations indeed minimal reasonable sense proposition finite presentation minimal descends minimal set generators coker minimal set generators hyi remark follows immediately proposition every finitely presented persistence module minimal presentation graded betti numbers persistence modules define dimf dimension function dimf dim define dimf tori functions called graded betti numbers betti numbers multigraded defined analogously discussed many places study augmented arrangements fibered barcodes need consider omit straightforward proof following result proposition minimal presentation example presentations modules given example minimal using easy see otherwise otherwise otherwise example otherwise otherwise otherwise grades influence persistence module let lemma finitely presented isomorphism proof let minimal presentation let free map induced inclusion free clearly isomorphic using proposition easy see map free isomorphism sending hyia isomorphically hyib hence isomorphism since isomorphic isomorphism well continuous extensions discrete persistence modules computational setting persistence modules encounter always finitely presented turns finitely presented persistence modules sense essentially discrete explain discrete persistence module functor vect poset category let denote ordinary polynomial ring variables analogy case regard discrete persistence module obvious way basic definitions machinery described persistence modules defined discrete persistence modules essentially way particular may define betti numbers discrete persistence module grid functions define grid function given functions lim lim define flg maximum element ordered partial order grid let flg given flg max grid function define flg flg flgn continuous extensions discrete persistence modules grid define functor vectz vectr follows persistence module max flg max flg action morphisms obvious one say continuous extension along proposition finitely presented continuous extension finitely generated discrete persistence module along grid proof let grid supp supp regard functor obvious way using lemma easy check continuous extension along finitely generated finite presentation induces one size betti numbers continuous extensions proposition suppose continuous extension along injective grid otherwise rivet exploits proposition compute betti numbers finitely presented persistence modules appealing local formulae betti numbers persistence modules see section 
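To keep the key formulas of this subsection in view, here they are in display form (restated in my notation from the definitions above): the graded Betti numbers of a finitely presented persistence module M are

\[
\beta_i(M)(a) \;=\; \dim_k \operatorname{Tor}_i(M,k)_a, \qquad i \in \{0,1,2\},\ a \in \mathbb{R}^2,
\]

so that for a minimal presentation, beta_0 and beta_1 count the generators and relations at each bigrade. Moreover, as I read the last proposition, if M is the continuous extension along an injective grid g of a finitely generated discrete module N, then

\[
\beta_i(M)(a) \;=\;
\begin{cases}
\beta_i(N)(z) & \text{if } a = g(z) \text{ for some } z \in \mathbb{Z}^2,\\
0 & \text{otherwise,}
\end{cases}
\]

which is exactly what lets RIVET compute Betti numbers of finitely presented modules by appealing to formulae in the discrete setting.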
proof proposition let free resolution easy see preserves exactness free resolution write definition dimf ith homology module following chain complex two functors vectr vectr acting respectively objects action morphisms defined obvious way note naturally isomorphic thus chain complex isomorphic chain complex map induced since continuous extension along clear hence claimed remains consider case let denote maximal homogeneous ideal dimf ith homology module chain complex map induced way defined functor clear isomorphisms fzi gia sending jfzi isomorphically igia following diagram commutes taking quotients obtain commutative diagram follows kzi hai desired barcodes discrete persistence modules discussed barcodes rindexed persistence modules section structure theorem tells barcode discrete persistence module also well defined provided dim less finitely generated finite multiset intervals barcodes continuous extension omit easy proof following proposition finitely generated discrete persistence module continuous extension along define remark already noted section poset category corresponding line isomorphic hence adapting definitions given setting define grid function function flg functor vectz vectl case say vect continuous extension persistence module clearly proposition also holds continuous extensions setting section use proposition setting prove main result queries augmented arrangements augmented arrangements persistence modules section define augmented arrangement associated finitely presented persistence module first section define line arrangement associated next section present characterization finally using characterization section define barcode template stored augmented arrangement defined arrangement together additional data definition let supp supp keep exposition simple assume element using shift construction described section always translate indices assumption holds loss generality assumption duality recall definitions section mentioned standard duality gives parameterization explain define dual transforms follows duality extend naturally vertical lines lines figure illustration duality following lemma whose proof omit illustrated fig point line lemma transforms inverses preserve incidence sense line arrangements cell topological space homeomorphic define cell complex decomposition finite number cells topological boundary cell lies union cells lower dimension standard topology cell cell complex dimension according definition cell complex cells necessarily unbounded line arrangement mean cell complex induced set lines cell complex consists union lines together line definition finite subset let lub least upper bound given lub max max example lub say pair distinct elements weakly incomparable one following true incomparable respect partial order share either first second coordinate call anchor lub weakly incomparable define line arrangement induced set lines anchor view lemma given contains anchor clear completely determined note two anchors intersect exists containing line exists comparable distinct size bound number cells dimension section let number unique coordinates respectively points clearly number anchors bounded hence number lines also bounded precise bounds number vertices edges faces arbitrary line arrangement well known computed simple counting arguments bounds tell number vertices edges faces greater characterization next give alternate description first define set crit critical lines denotes topological interior set affine lines positive finite slope push map note 
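The formulas for the duality were lost in extraction, so for orientation, one standard incidence-preserving point-line duality of the kind invoked above is the following (the signs and parameterization actually used in the paper may differ):

\[
\mathcal{D}(a,b) \;=\; \{\,(x,y) : y = a x - b\,\}, \qquad
\mathcal{D}\bigl(\{\,y = c x + d\,\}\bigr) \;=\; (c,\,-d).
\]

Under this convention a point p lies on a line L if and only if the dual point D(L) lies on the dual line D(p), and D is an involution on points and non-vertical lines, which is the incidence-preserving property asserted in the lemma.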
partial order restricts total order extends total order taking define push map pushl taking pushl min note pushl horizontal vertical pushl either see fig pushl pushl pushl pushl figure illustration push map lines positive finite slope continuity push maps maps pushl induce map pusha defined pusha pushl recall consider topological space topology restriction topology affine grassmannian lines lemma pusha continuous proof note pusha unique intersection result follows readily critical lines pushl induces totally ordered partition elements partition restrictions levelsets pushl total order pullback total order partition illustrated fig call regular open ball containing call critical regular let crit denote set critical lines theorem characterization exactly crit figure totally ordered partition ith element partition labeled sil lub lub figure illustration geometric idea behind theorem line lying lub whereas line lying lub thus line passing lub critical proof view description given proving proposition amounts showing critical contains anchor suppose contains anchor let weakly incomparable lub must pushl pushl easy see find arbitrarily small perturbation either pushl pushl pushl pushl thus critical see fig illustration case incomparable prove converse assume contain anchor consider distinct pushl pushl note must lie either horizontal line vertical line otherwise would incomparable would pushl pushl lub assume without loss generality lie horizontal line must also pushl pushl lies since lub anchor however contain anchor must pushl sufficiently small perturbation also intersect point right thus neighborhood push push fact since finite choose single neighborhood pushl pushl lemma choosing smaller necessary may assume pushl pushl thus partition independent choice moreover lemma total order also independent choice therefore regular corollary duals contained proof connected open follows theorem remark fact corollary strengthened show duals lie cell however need stronger result barcode templates using corollary define barcode templates stored complete definition augmented arrangement corollary associate totally ordered partition let sie denote ith element partition persistence module next use define discrete persistence module take assume first define map lub sie lub define call set template points cell note restriction injection let denote poset category positive integers whenever induces functor also denote finally functor vect extends functor vect persistence module taking mze whenever definition barcode templates clearly barcode easy see consists intervals let write define collection pairs points follows completes definition remark completely determined fibered barcode set indeed completely determined using proposition easy see completely determined main result next section theorem shows conversely completely determines simple way querying augmented arrangement previous section defined augmented arrangement finitely presented persistence module explain encodes fibered barcode given explain recover main result section theorem basis algorithm querying describe computational details query algorithm section recall procedure querying barcode discussed case generic lines section generally querying barcode involves two steps first choose coface cell containing second obtain intervals pairs pushing points pair onto line via map pushl section describe detail first step selecting coface selecting coface choose coface follows exists whose closure contains take horizontal cofaces cell containing ordered 
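As a concrete illustration of the push map just defined, here is a minimal Python sketch for a line of positive finite slope written as y = m*x + c; this parameterization is mine, and RIVET's internal line representation may differ. The push of a point z is the least point of the line lying coordinate-wise above z.

def push(m, c, z):
    # Least point of the line y = m*x + c (with m > 0 finite) that is >= z
    # in both coordinates; compare the definition of push_L above.
    z1, z2 = z
    if m * z1 + c >= z2:
        return (z1, m * z1 + c)     # the point of L directly above z already dominates z
    return ((z2 - c) / m, z2)       # otherwise move right along L until height z2

For horizontal and vertical lines the push map is defined analogously, as noted in the text.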
vertically take bottom coface note one coface unless contains anchor vertical say line let line arrangement maximum slope amongst slope less equal unique line exists several lines take one largest exists contains unique unbounded take lying directly line exist take bottom unbounded since assume cell uniquely defined selection cofaces several lines illustrated fig figure three anchors drawn black dots corresponding line arrangement shown line dual point corresponding chosen section shown color query theorem main mathematical result underlying rivet theorem querying augmented arrangement line chosen section barcode obtained restricting pushl pushl pushl pushl note lies pushl pushl theorem statement simplifies general however possible pushl pushl proof theorem prove result case proof horizontal vertical similar easier left reader result holds trivially assume let coface cell containing keep notation simple write push pushl keeping remark mind define grid first define restriction taking push note choose arbitrary extension grid proposition finish proof suffices show continuous extension along exists isomorphism given let max flg note mze note also define maps mtl consider separately three cases recall definition immediately lemma note push minimal element respect partial order hence mtl thus necessarily take isomorphism mtl zero map definition give push push implies thus isomorphism lemma since may regard map isomorphism mtl lub chain inequalities lub push lub gives fact lemma isomorphism interpreted isomorphism mtl defined isomorphisms mtl clearly isomorphisms commute internal maps define isomorphism desired computational details queries next explain computational details storing querying also give complexity analysis query algorithm proving theorem dcel representation noted introduction represent line arrangement underlies using dcel data structure standard data structure representing line arrangements computational geometry dcel consists collection vertices edges together collection pointers specifying cells fit together form decomposition representing barcode templates represent augmented arrangement store barcode template dcel representation recall multiset means pair may appear multiple times thus store list triples gives multiplicity query algorithm given line query proceeds two steps first step performs search specified section selected obtain applying pushl endpoints pair let describe algorithm find detail case vertical suffices find cell containing general problem finding cell line arrangement containing given query point known point location problem well studied problem computational geometry need perform many point location queries arrangement need perform queries real time standard practice precompute data structure point locations performed efficiently approach take number vertices arrangement number different strategies time log compute data structure size perform point location query time log vertices computing data structure takes time log data structure size point location query takes time log case vertical defined need take different approach find precompute separate simpler search data structure handle case let denote set lines line slope lying compute array contains pointer rightmost unbounded contained sorted according slope given computing array takes time number anchor lines array computed vertical line find appropriate via binary search array takes log time ready prove result introduction cost querying proof theorem discussion clear puted appropriate data structures 
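Putting the pieces of this subsection together, a query against the augmented arrangement might look like the sketch below. The point-location routine locate_cell and the mapping from cells to barcode templates are stand-ins for RIVET's actual DCEL-based data structures, the dual-point convention follows the hypothetical duality written out earlier, and the push function is the one from the previous sketch.

def query_barcode(locate_cell, barcode_templates, m, c):
    # Barcode of M restricted to the line y = m*x + c with m in (0, inf).
    # locate_cell: callable mapping a point of the dual plane to a cell id.
    # barcode_templates: dict mapping a cell id to a list of (a, b, mult)
    # triples, with b = None encoding an unbounded interval.
    cell = locate_cell((m, -c))                    # assumed duality convention
    bars = []
    for a, b, mult in barcode_templates[cell]:
        birth = push(m, c, a)
        death = push(m, c, b) if b is not None else None
        bars.append((birth, death, mult))
    return bars

With a point-location structure of the kinds cited above, the locate_cell step dominates the query cost, matching the logarithmic bound in the theorem.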
finding cell takes log time evaluation pushl takes constant time computing takes total time thus total time query log gives theorem hand may arbitrarily small perturbation theorem follows computing arrangement turn specification algorithm computing first sections specify algebraic objects serve input algorithm explain objects arise bifiltrations free implicit representations persistence modules input algorithm define set integers let denote empty set define free implicit representation persistence module quadruple gri function matrices coefficients respective dimensions note either may empty matrix let denote constant function mapping greatest lower bound defining ordered sets gri require notation section matrix representations respectively maps free free free free ker refer defined dimensions write note presentation degenerate case empty matrix algorithm computing augmented arrangement finitely presented persistence module takes input storing free implicit representation memory store matrices data structure used standard persistent homology algorithm columns stored array size ith column stored list position array also store grj position array motivation free implicit representations finite filtrations interested studying ith persistent homology module finite bifiltration arising data choice represent persistence module via motivated fact practice one typically ready access contrast generally direct access presentation outset known algorithms computing one computationally expensive describe detail persistent homology modules arise practice chain complexes recall section filtration functor simp map inclusion whenever gives rise chain complex persistence modules given define taking vector space generated taking map induced inclusion morphism induced boundary maps simplicial complexes note ker filtrations explain arise finite bifiltrations helpful first consider special case following define filtration finite filtration fmax section denotes set grades appearance finite say bifiltrations arising tda applications often always example finite metric space function bifiltration rips section filtration section generally easy see free obvious isomorphism free set given fmax thus choosing order boundary map represented respect matrix coefficients field explained section exactly usual matrix representation boundary map fmax free implicit representations case ordered ngraded sets matrices determine chain complex hence homology modules isomorphism fact total order grfj free implicit representations case multicritical filtration modules free nevertheless explained easy construction generalizing construction given setting letting fmax dimensions satisfy following bounds bifiltration see details computation free implicit representation bifiltration explained section store finite simplicial bifiltration memory simplicial complex together list grades appearance every simplex let finite bifiltration size hard show given input compute described time log one homology index time homology indices standard algorithms computing persistence barcodes filtration compute specified integer single pass one interested barcodes homology index efficient computations one index time computations share computational work contrast algorithm described paper implemented present version rivet computes finite bifiltration single choice approach allows save computational effort interested single homology module seems natural approach working filtrations said within rivet framework bifiltration one also handle persistence modules 
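To illustrate the column-sparse storage just described, the sketch below builds a boundary matrix of a 1-critical bifiltration (for example, the output of the rips_bifiltration_grades sketch earlier) in the format used for free implicit representations: columns are sorted lists of row indices over the two-element field, and every row and column carries its bigrade. The function name and return layout are illustrative only.

def sparse_boundary_matrix(grades, k):
    # grades: {simplex (sorted vertex tuple): bigrade} for a 1-critical bifiltration.
    # Columns are indexed by the simplices with k+1 vertices, rows by their
    # facets (k vertices). Sorting by bigrade in lexicographic order is one
    # convenient linear extension of the partial order on bigrades.
    rows = sorted((s for s in grades if len(s) == k), key=lambda s: grades[s])
    cols = sorted((s for s in grades if len(s) == k + 1), key=lambda s: grades[s])
    row_index = {s: i for i, s in enumerate(rows)}
    matrix = [sorted(row_index[c[:i] + c[i + 1:]] for i in range(len(c)))
              for c in cols]
    row_grades = [grades[s] for s in rows]
    col_grades = [grades[s] for s in cols]
    return rows, cols, row_grades, col_grades, matrix

Applying this at two consecutive dimensions yields the two matrices of a free implicit representation of the corresponding homology module, in the 1-critical case discussed above.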
specified integer single computation betti numbers first step computation compute supp supp since rivet also visualizes betti numbers directly choose compute fully computing case know algorithm computing significantly efficient algorithm fully computing computing betti numbers persistence modules define persistence module persistence module case functions take values companion article show fully compute given algorithm runs time runs time dimensions one way compute bigraded betti numbers quite standard computational commutative algebra compute free resolution however gives need particular application instead following route algorithm computes betti numbers via carefully scheduled column reductions matrices taking advantage characterization betti numbers terms homology kozul complexes natural way compute rather compute single augmented line arrangement labeling intervals discrete barcode homology degree query provides homology degree interval labeled variant computed using essentially algorithm presented paper computation single augmented arrangement compute need first replace chain complex free persistence modules noted done via mapping telescope construction though may significantly increase size computing betti numbers persistence modules fact algorithm computing betti numbers discrete setting also used compute bigraded betti numbers finitely presented persistence module indeed explain persistence module induces discrete injective grid continuous extension along matrices two free implicit representations given compute multigraded betti numbers using algorithm deduce multigraded betti numbers proposition construction grid simple let respectively denote ordered set unique elements let let bijection sending ith element define analogously choose arbitrary extension define function grj leave reader easy check continuous extension along computation storage anchors template points recall section anchor least upper bound weakly incomparable pair points set anchors determines line arrangement section let denote set anchors compute line arrangement need first compute list anchors elements moreover algorithm computing barcode templates described section requires represent set template points using certain sparse matrix data structure easy see see convenient compute list anchors sparse matrix representation time section specify data structure describe algorithm simultaneously computing data structure list anchors sparse matrix data structure note given easy see also thus store suffices store maps store two arrays size store sparse matrix tptsmat size triple tptsmat data structure henceforth keep notation simple assume identity maps respectively let describe tptsmat detail example tptsmat shown fig element represented tptsmat quintuple quintuple pointers possibly null points element immediately left points element immediately objects lists used computation barcode templates initially lists empty discuss section data structure tptsmat also contains array rows pointers length ith entry rows points rightmost element rows figure example tptsmat element represented square shaded squares represent elements squares solid borders represent anchors entry contains pointers next entries left pointers illustrated arrows lists stored shown computation tptsmat anchors algorithm computing betti numbers given computes bigrade iterating lexicographical order iterate easy also compute tptsmat list anchors let explain detail upon initialization anchors empty tptsmat contains entries pointers rows null create temporary 
array columns pointers length pointer initially null betti numbers computed add list anchors add quintuple rows columns tptsmat set rows columns point updates columns rows ensure rows always points rightmost entry added tptsmat thus far columns always points topmost entry added tptsmat thus far remains explain determine whether note least one following two conditions holds visit rows columns null either rows columns null using fact check whether constant time beyond time required compute algorithm computing tptsmat anchors takes time building line arrangement recall anchors correspond duality lines arrangement thus list anchors determined ready build dcel representation implementation rivet uses algorithm constructs dcel representation line arrangement lines vertices time log since arrangement contains lines vertices algorithm requires log elementary operations explained section number cells size dcel representation arrangement order number cells arrangement size dcel representation also remark log term bound theorem arises use algorithm asymptotically faster algorithms constructing line arrangements would give slightly smaller term bound theorem fact remove log factor however algorithm relatively simple performs well practice standard choice lies line dual anchor dcel representation store pointer entry tptsmat corresponding numerical considerations line arrangement computations many computations computational geometry notoriously sensitive numerical errors arise arithmetic much effort invested development smart arithmetic models computational geometry allow avoid errors inexact arithmetic produce without giving much computational efficiency exact arithmetic generally far computationally expensive arithmetic models typically take hybrid approach relying arithmetic cases resulting errors certain cause problems switching exact arithmetic calculations errors could problematic implementation rivet relies simple hybrid model specially tailored problem hand computing barcode templates found anchors constructed line arrangement ready complete computation computing barcode templates section describes core algorithm section describe refinement algorithm performs significantly faster practice input algorithm consists three parts represented way described section sparse matrix representation tptsmat set template points dcel representation line arrangement recall given input algorithm computing computation tptsmat already described recall section thus compute suffices compute pair essentially algorithm though turns unnecessary explicitly store either point computation note thus may assume trimming free implicit representation let box lub say trimmed box preliminary step preparation computation barcode templates already trimmed replace smaller trimmed possible work directly untrimmed compute barcode templates efficient work trimmed one addition assuming trimmed allows simplifications description algorithm let box unique map box define grj define submatrix whose columns correspond elements define submatrix whose rows columns correspond elements respectively let proposition proof associated respective chain complexes free modules ker ker definition since trimming obvious maps making following diagram commute maps induce map easy see isomorphism box finish proof remains check also isomorphism box box unique element box minimizing distance note commutativity suffices see isomorphism isomorphism lemma gives isomorphism since box easy see directly isomorphism clearly trimmed given tptsmat representation 
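For intuition about the anchors computed earlier in this section, here is the straightforward quadratic-time way to list them, directly from the definition: an anchor is the least upper bound of a weakly incomparable pair of support points. The sweep-based routine above, with its rows/columns pointer arrays, produces the same set without examining every pair.

from itertools import combinations

def anchors(support):
    # support: iterable of bigrades (pairs of numbers), e.g. the support of
    # beta_0 union the support of beta_1. A pair of distinct points is weakly
    # incomparable if it is incomparable in the product order or the two points
    # share a coordinate; its least upper bound is the coordinate-wise max.
    def weakly_incomparable(p, q):
        comparable = (p[0] <= q[0] and p[1] <= q[1]) or (q[0] <= p[0] and q[1] <= p[1])
        return (not comparable) or p[0] == q[0] or p[1] == q[1]
    result = set()
    for p, q in combinations(set(support), 2):
        if weakly_incomparable(p, q):
            result.add((max(p[0], q[0]), max(p[1], q[1])))
    return sorted(result)

Each anchor corresponds, under the duality, to a line of the arrangement, so this list is exactly what is fed to the arrangement-construction step.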
compute time number entries henceforth assume given input algorithm trimmed computation persistence barcodes prepare description algorithm computing barcode templates begin preliminaries computation persistence barcodes large growing body work topic see recent overview emphasis publicly available software restrict attention needed explain algorithm standard algorithm computing persistence barcodes introduced building ideas see also succinct description algorithm together implementation details algorithm takes input persistence module returns course applications typically comes chain complex filtration simplices dimension ordered according grade appearance algorithm variant gaussian elimination performs column additions construct certain factorizations barcode read directly let explain detail drawing ideas introduced let matrix index column let denote maximum row index entry column say reduced whenever indices columns standard persistence algorithm yields decomposition matrix coefficients field reduced matrix matrix algorithm runs time define simply pair read explain suppose dimensions let denote column define pairs ess column matrix unique shown pairs ess independent choice theorem pairs ess vineyard updates barcode computations suppose matrix obtained transposing either two adjacent rows two adjacent columns introduces algorithm known vineyard algorithm updating obtain time algorithm essential subroutine algorithm computing barcode templates permutations free implicit representations mentioned standard persistence algorithm takes input persistence module reason need formula theorem reading barcode holds assumption suppose given either modify obtain grade functions read barcode decomposition answer question easy check following lemma holds lemma suppose persistence module dimensions permutations respectively corresponding permutation matrices also thus case finding grade functions amounts finding permutations applying corresponding permutations rows columns finding permutations sorting may use sorting algorithm find permutation puts list grj grj grj order function gri take advantage vineyard algorithm main algorithm want work sorting algorithm generates permutation product transpositions adjacent elements use well known algorithm yields minimum length product adjacent transpositions induced free implicit representations using lemma next show yields discrete persistence module introduced section nondecreasing thus compute computing lift maps function recall definition set template points section define lifte box taking lifte minimum element elsewhere denotes partial order order fig illustrate lifte lifte pair adjacent figure illustration lifte left lifte right two adjacent cells containing duals lines respectively black dots represent points note maps lifte lifte illustrated red arrows sample points purple dots shaded region figure subset box lifte lifte let orde denote unique bijection inverse restriction free implicit representation suppose dimensions let permutation grej orde lifte grj write let denote permutation matrices corresponding respectively let proposition proof hard check orde lifte orde lifte given result follows lemma general uniquely defined depends pair permutations unique sometimes write emphasize dependence say chosen valid reading let fje lifte grj orde grej write call template map note independent choice valid theorem proposition definition barcode template section following relationship pairs ess algorithm tells compute barcode template suffices compute template map 
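Since the standard column reduction just reviewed is the basic subroutine behind the barcode-template computation, here is a compact Python version over the two-element field: columns of D are sorted lists of row indices, and the routine maintains a decomposition R = D*V by left-to-right column additions. This is the textbook single-matrix version for a filtered complex, not RIVET's optimized implementation; the two-matrix version used for free implicit representations follows the same pattern.

def reduce_matrix(D):
    # D: list of columns, each a sorted list of row indices (entries over F_2).
    # Returns R (reduced), V (with R = D*V), the pairs (low(R[j]), j) read off
    # from the nonzero reduced columns, and the essential (unpaired) indices.
    def low(col):
        return col[-1] if col else None
    def add(a, b):                         # symmetric difference = F_2 addition
        return sorted(set(a) ^ set(b))
    R = [list(col) for col in D]
    V = [[j] for j in range(len(D))]       # identity, stored column-sparse
    pivot_of = {}                          # low(R[j]) -> j for reduced nonzero columns
    for j in range(len(R)):
        while R[j] and low(R[j]) in pivot_of:
            i = pivot_of[low(R[j])]
            R[j] = add(R[j], R[i])
            V[j] = add(V[j], V[i])
        if R[j]:
            pivot_of[low(R[j])] = j
    pairs = [(low(R[j]), j) for j in range(len(R)) if R[j]]
    lows = {p for p, _ in pairs}
    essential = [j for j in range(len(R)) if not R[j] and j not in lows]
    return R, V, pairs, essential

The pairs and essential indices are exactly the data from which a barcode, and hence a barcode template, is read off as described above.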
along valid exactly algorithm give description algorithm deferring details later sections need compute every approach would computation scratch however better leveraging work done one cell expedite computation neighboring cell proceed follows let denote dual graph undirected graph vertex edge pair adjacent dual graph illustrated fig compute path visits least algorithm computing discussed section let adopt convention abbreviating expression form example write figure line arrangement grey together dual graph blue path might visit vertices order computed compute template map valid choice proceed order increasing store fji memory separately storing factors grj discuss data structures section thus compute fji compute grj initial cell chosen way allows simple combinatorial algorithm compute grj see section letting denote matrix representation thus obtained performing row column permutations given compute via application standard persistence algorithm compute update compute decomposition update let explain detail yet detail come later sections update update grj first update grj obtain lifti grj details computation given section second update obtain follows define ordi lifti grj note distinction grij former defined terms latter defined terms applying algorithm compute sequence transpositions adjacent elements composition transpositions take clearly grij ordi lifti grj valid note matrix representation compute exploit decomposition sequence transpositions provided algorithm apply algorithm repeatedly performing update transposition sequence note neither decomposition transpositions ever needs stored explicitly memory rather use transposition perform part update soon computed need store transposition remains explain compute grj update grj obtain lifti grj follows explain also fill details data structures used algorithm computing path first explain choose path let dual graph line arrangement compute first compute weight edge weight chosen estimate amount work algorithm must pass cells either direction defer details define compute edge weights section choice edge weights impacts choice hence speed algorithm computing barcode templates asymptotic complexity bounds algorithm independent choice edge weights take topmost points correspond pointline duality lines pass right points call path starting visiting every vertex valid path like choose valid path minimum length efficient algorithm computing path indeed expect problem instead compute path whose length approximately minimum let valid path minimum length straightforward compute valid path length length first compute minimum spanning tree via standard algorithm kruskal algorithm via search starting find valid path traverses edge twice since length length length length fact algorithm better approximation ratio known shows variant christofides algorithm traveling salesman problem metric graph yields valid path length length data structures completing specification algorithm computing barcode templates need describe data structures used internally algorithm persistent homology vineyard update data structures first mention algorithm uses data structures specified computing updating consist consists sparse store well several additional arrays aid performing persistence vineyard algorithms reading barcodes matrices since use data structures way described refer reader paper details array data structures also maintain arrays gradesj liftsj sigj siginvj length gradesj static array gradesj grj liftsj array pointers entries tptsmat computations cell complete 
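A minimal sketch of the path-construction step described above: build a minimum spanning tree of the dual graph under the chosen edge weights (Kruskal with union-find) and walk it depth-first, so that the resulting path visits every cell and has length at most twice the weight of the tree. Variable names are illustrative; the starting cell would be the topmost cell chosen as described earlier.

def cell_path(num_cells, weighted_edges, start):
    # weighted_edges: list of (weight, u, v) with u, v cell indices; the dual
    # graph of a line arrangement is connected, which this sketch assumes.
    parent = list(range(num_cells))
    def find(x):                               # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = {u: [] for u in range(num_cells)}
    for w, u, v in sorted(weighted_edges):     # Kruskal's algorithm
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree[u].append(v)
            tree[v].append(u)
    path, seen = [], set()
    def dfs(u):                                # depth-first walk of the tree
        path.append(u)
        seen.add(u)
        for v in tree[u]:
            if v not in seen:
                dfs(v)
                path.append(u)                 # retrace the edge on the way back
    dfs(start)
    return path

Because every tree edge is traversed at most twice, the path's total weight is at most twice that of the minimum spanning tree, giving the factor-of-two bound mentioned above.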
liftsj lifti grj sigj siginvj remark note computations cell complete use liftsj sigj together perform constant time evaluations template map fji lifti grj together allows efficiently read using lists levsetj mentioned section store lists entry tptsmat corresponding specify lists store computations cell complete levsetj stores lifti grj levsetj empty see section algorithm uses lists levsetj efficiently perform required updates pass cell cell computations initial cell next describe detail computations performed initial cell building explanation section begin computations cell initialize levsetj grj note grj nonempty rightmost element horizontal line passing thus elements levsetj nonempty unique efficiently initialize lists levsetj use log time sweep algorithm described appendix add list levsetj also set liftsj next concatenate lists levsetj single list length increasing order set siginvj equal list given siginvj construct sigj obvious way time define permutation whose array representation sigj letting denote matrix representation use arrays compute using column sparse representation described allows implicit representations row permutations takes time already explained section apply standard persistence algorithm compute completes work done algorithm cell computations cell section outlined algorithm updating template map decomposition pass cell give detailed account algorithm filling details omitted earlier explained section update grj algorithm separately updates factors grj possible first completely update grj update via application insertion sort slightly efficient interleave updates two factors update value grj immediately perform transpositions necessitated update along corresponding updates approach take assume without loss generality lies shared boundary lies line dual anchor tptsmat provides constant time access element immediately left element immediately exist keep exposition simple assume exist cases either exist similar simpler note see fig recall lifti grj represented memory using data structures liftsj levsetj whereas represented using sigj siginvj perform required updates pass cell cell first iterate list levsetj decreasing order levsetj grj less equal lifti grj remove levsetj add beginning list levsetj set liftsj hand grj greater lifti grj perform updates liftsj levsetj levsetj value addition liftsj siginvj sigj apply insertion sort update sigj siginvj specifically compute sortoneelement sigj sortoneelement algorithm defined algorithm sortoneelement input liftsj siginvj liftsj siginvj whenever output updated sigj siginvj liftsj siginvj liftsj siginvj whenever correspondingly updated liftsj siginvj liftsj siginvj swap siginvj siginvj sigj siginvj sigj siginvj perform corresponding updates described section finished iterating list levsetj next iterate list levsetj decreasing order levsetj perform updates exactly elements levsetj one difference second coordinate grj greater must remove levsetj add beginning list levsetj set liftsj finished iterating list levsetj updates cell complete choosing edge weights seen section path depends choice weights edges explain choose compute weights explain section computing weights also first step two practical improvements algorithm practice cost algorithm computing barcode templates dominated cost updating average expect cost updating traverse edge roughly proportional total number transpositions performed thus case average number transpositions performed traverse independent choice path would reasonable take fact depend nevertheless give 
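One plausible rendering of the insertion-sort step (Algorithm sortOneElement) in Python; since the surrounding text is garbled, the direction of the bubbling and the tie-breaking are my reading of it. Here sig and siginv are the permutation and its inverse stored as arrays, lifts holds the current lift values (assumed to come from a totally ordered set, e.g. indices into the ordered list of template points), and vineyard_transposition is a callback standing in for the corresponding update of the R = D*V decomposition.

def sort_one_element(k, lifts, sig, siginv, vineyard_transposition):
    # Move element k downward by adjacent transpositions until the order given
    # by siginv is again consistent with lifts, keeping sig and siginv mutually
    # inverse and notifying the decomposition-update routine of each swap.
    pos = sig[k]
    while pos > 0:
        j = siginv[pos - 1]                    # element currently just below k
        if lifts[j] <= lifts[k]:
            break
        siginv[pos - 1], siginv[pos] = k, j    # swap the two adjacent positions
        sig[k], sig[j] = pos - 1, pos
        vineyard_transposition(pos - 1)        # vineyard update for this transposition
        pos -= 1

Each call performs at most as many transpositions as there are elements out of order with k, which is what the transposition-counting arguments later in the paper bound.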
simple computable estimate independent choose estimate definition fact depends anchor line containing common boundary may write prepare definition introduce terminology also need section switches separations adjacent say box switch either lifte lifte lifte lifte lifte lifte lifte lifte similarly incomparable say separate either lifte lifte lifte lifte lifte lifte lifte lifte omit straightforward proof following lemma suppose pairs adjacent shared boundary pair lying anchor line switch switch separate separate view lemma anchor line say switch switch adjacent whose boundary lies analogously speak separating lemma switch anchor line incomparable proof suppose anchor switch exist fig exchanging necessary remark every time cross anchor line algorithm performs one insertionsort transposition pair grj grj switch pair grj grj separate algorithm may perform corresponding transposition crossing one may sometimes perform transposition even crossing direction reasonable estimate pair grj grj separate algorithm performs corresponding transposition roughly time definition anchor line finite set function define swl respectively sepl number unordered pairs switch respectively separate along motivated remark define swl swl sepl sepl computing weights weights computed using simplified version main algorithm computing barcode templates first choose path starting crossing every anchor line example choose path rightmost cells run variant algorithm computing barcode templates described using path place omitting steps involving matrices updates adjacent shared boundary anchor line compute pass cell cell explain works let section simplicity assume exist pair elements switch separate lifte lifte compute need consider pairs whose elements lie lists levsetj levsetj lines determine decomposition plane four quadrants whether switch separate completely determined quadrants contain see fig using observations easily extend update procedure described section compute weight cross cost computing storing augmented arrangement section prove theorem bounds cost computing storing recall theorem stated persistence modules arising ith persistent homology bifiltration using language may state result general algebraic form proposition let persistence module coarseness let dimensions letting size algorithm computes using log elementary operations requires storage see theorem follows proposition let multicritical bifiltration size recall section section using construction log time compute dimensions size augmented arrangement prove proposition first noted section dcel representation size store barcode template considering decomposition see persistence module dimensions hence proposition implies therefore representation memory total size claimed cost computing augmented arrangement turn proof proposition seen algorithm computing involves several row table lists data computed bound number elementary operation required sections paper details discussed bounds first four rows table explained earlier remains analyze cost algorithm computing barcode templates computation involves number steps whose individual time complexities list table table cost augmented arrangement data elem operations details set supp supp section template points stored tptsmat list anchors section arrangement constructed via algorithm log section data structures point location log barcode templates section log section bounds last four rows table double horizontal line either explained earlier clear discussion presented remainder section verify last four bounds cost 
updates levsetj liftsj notation section update lists levsetj proper values algorithm considers performing update element lists levsetj levsetj worst case cell elements consider total updates levsetj liftsj take constant time thus total amount work need cell path contains total work perform updates claimed bound total number transpositions establish next two bounds take advantage following result prove section proposition algorithm computing barcode templates performs total transpositions cost updates sigj siginvj cells cost updating sig proportional cost updating siginvj thus immediate proposition total cost updating arrays cost updates cells cells consider gives term transposition performed sigj call vineyard algorithm described section twice call vineyard algorithm takes time proposition total cost vineyard updates performed gives desired bound table cost barcode template data elem operations details trimming section path found via algorithm optimal path using kruskal mst algorithm log section levsetj liftsj sigj siginvj cell log section appendix section reading barcode template section section levsetj liftsj cells section sigj siginvj section section weights anchor lines section cost computing weights explained section compute edge weights using variant algorithm computing barcode templates using lemma checked computing edge weights takes time storage requirements proposition size algorithm computing requires least much storage algorithm computing betti numbers requires storage persistence algorithm vineyard updates algorithm requires storage kruskal algorithm constructing search data structures used queries also requires storage descriptions data structures used remaining parts algorithm clear steps algorithm computing require storage bound proposition storage requirements algorithm follows bounding total number transpositions required compute barcode templates complete proof proposition remains prove proposition end box let lift lifte leave reader proofs following two lemmas lemma line lifte following two conditions hold pushl minimum element pushl pushl pushl lemma lift following two conditions hold exists exist pair fig illustrates shape set lift described lemma figure shape set lift described lemma next lemma shows number anchor lines given pair points box switch separate two key step proof proposition lemma box incomparable one anchor line switch exists anchor line separate two anchor lines separate proof assume without loss generality since incomparable let lift lift lifte lifte following observations illustrated fig follow lemma nonempty element symmetrically nonempty element nonempty element symmetrically nonempty element clearly unique exist using lemma straightforward check lift lift one following true incomparable every element lift lift see fig illustration finish proof lemma consider seven cases explicitly describe lines either switch separate verification claimed behavior case uses lemma observations left reader fig illustrates case empty lifte lifte every therefore never switch separate nonempty empty lifte lifte every switches separations nonempty empty symmetric switches separations nonempty empty lifte lifte whenever lies lub lifte lifte whenever lies hence switch nonempty empty lifte lifte whenever lies lub lifte lifte whenever lies hence separate nonempty empty lub symmetric separate nonempty lifte lifte whenever lies lub lifte lifte whenever lies lub lifte lifte whenever lies hence separate figure illustration case case points lift lift drawn black dots observe 
separate lub lub anchor line either switch separate proof proposition fix let first note grj grj comparable pass cell cell algorithm computing barcode templates never performs transposition values siginvj initialization procedure cell described section appendix chooses siginvj grj grj sigj sigj since lifte lifte thus never need swap therefore pass cell cell algorithm performs transposition values siginvj grj grj either switch separate clearly number pairs grj grj either switch separate anchor line less total number pairs path constructed via minimum spanning tree construction section crosses anchor line times lemma pair component algorithm performs total transpositions pair siginvj hence total number transpositions performed algorithm altogether speeding computation augmented arrangement section describe several simple practical strategies speed runtime computation used together strategies allow compute augmented arrangements persistent homology modules much larger datasets would otherwise possible persistence computation scratch slow three options computing barcode template update decomposition involving transpositions fast practice update decomposition requiring many transpositions quite slow many transpositions required sometimes much faster simply recompute scratch using standard persistence algorithm setting practical performance algorithm greatly improved consecutive edge weight greater suitably chosen threshold simply compute scratch directly moreover obtain significant additional speedups avoiding computation full altogether cells explain first note obtain via need full pairs ess particular need algorithm computing barcode templates described section maintains full vineyard algorithm requires willing compute scratch necessary compute full suffices compute case even need compute cell since already done earlier step recent years several algorithms computing barcodes introduced much faster standard persistence algorithm example algorithms implemented software library phat given input algorithms compute compute full let restrict attention single algorithm say clear compress algorithm implemented phat compute three options available use clear compress algorithm nothing compute full scratch using standard persistence algorithm use vineyard updates option available chose option cell full computed clearly tradeoff options option much faster choosing option cell precludes use option cell choose three options cell formulate problem discrete optimization problem solved efficiently reduction problem estimating runtimes different options formulation problem requires first estimate respective runtimes options cell path describe simple strategy explain modify approach correct drawback strategy take otherwise take constant independent similarly take independent compute compute using option set runtime computation similarly compute compute full scratch take runtime set arbitrarily say compute perform several thousand random vineyard updates using timing data computations compute average runtime cvine however fast algorithms persistence computation readily adapted compute example explained ulrich bauer true twist variant standard persistence algorithm update corresponding transposition adjacent elements letting denote anchor line containing shared boundary recalling notation section take vine cvine swl sepl swl sepl motivation choice provided remark optimization problem decide options cell solve following optimization problem minimize subject clearly problem equivalent integer linear program ilp 
minimize subject using constraints eliminate variables ilp obtain equivalent ilp simpler set constraints minimize subject constraint matrix associated latter ilp standard form well known totally modular ilp totally unimodular constraint matrix always solved directly via linear programming relaxation often case ilp cast network flow problem case take advantage efficient specialized algorithms fact explained john carlsson simplified ilp cast problem finding minimum cut network first ilp cast independent set problem bipartite graph vertex weights graph complement independent set vertex cover vice versa latter problem turn equivalent vertex cover problem bipartite graph well known problem solved computing minimum cut flow network section dynamic updates estimates runtime cost mentioned drawback approach barcode template computation proposed may good estimates respective average costs option option estimates computed using little data one way correct first solve optimization problem first cells say path using solution compute barcode template cells record runtime computation cell next update value average run time vine computations performed using option thus far also update cvine analogous way use updated values input optimization problem next cells continue way estimates vine stabilized finally solve optimization problem cvine remaining cells use solution compute remaining barcode templates coarsening persistence modules seen size augmented arrangement depends quadratically coarseness computing requires elementary operations thus keep computations small typically want limit size coarsening module explain quite simple similar coarsening scheme mentioned let grid function defined section extends functor also denote persistence module let continuous extension view proposition coarseness grid controls coarseness module let denote multidimensional interleaving distance defined explained particularly metric persistence modules following proposition whose easy proof omit makes precise intuitive idea small amount coarsening leads small change persistence module proposition external stability theorem landi mentioned earlier section shows persistence modules close interleaving distance fibered barcodes close precise sense justifies use coarsening conjunction visualization paradigm coarsening free implicit representations explained section practice typically access via since algorithm computing augmented arrangement takes input compute want first construct let clg function takes minimal define clg clg leave proof following reader proposition thus obtain coarsening suffices simply coarsen grade functions parallelization problem computing barcode templates embarrassingly parallelizable one simple parallelization scheme given processors less equal number anchors choose anchor lines largest values lines divide polygonal cells disjoint interiors remaining anchor lines induce line arrangement processor run main serial algorithm described compute barcode templates need make one modification algorithm choosing path generally choose initial cell cell described section since may contained instead choose initial cell arbitrarily means use approach section initialize data structures sigj siginvj liftsj levsetj cell one way initialize data structures chose arbitrary affine line consider behavior map pushl using exact arithmetic necessary preliminary runtime results present runtimes computation augmented arrangements arising synthetic data emphasize computational results preliminary implementation rivet yet take 
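The simplified ILP above has a totally unimodular constraint matrix and, as noted, can be recast as a minimum vertex cover problem on a bipartite graph, which reduces to maximum matching / minimum cut. Below is a minimal sketch of the matching step via augmenting paths; by Koenig's theorem the maximum matching size in a bipartite graph equals the minimum vertex cover size. The adjacency structure adj is an illustrative input, not the graph actually built by RIVET.

def max_bipartite_matching(adj, num_right):
    # adj: dict mapping each left vertex to the list of its right neighbours.
    # Returns the size of a maximum matching, which equals the size of a
    # minimum vertex cover of the bipartite graph (Koenig's theorem).
    match_right = [None] * num_right        # match_right[r] = left vertex matched to r

    def try_augment(u, visited):
        for r in adj[u]:
            if r in visited:
                continue
            visited.add(r)
            if match_right[r] is None or try_augment(match_right[r], visited):
                match_right[r] = u
                return True
        return False

    matching = 0
    for u in adj:
        if try_augment(u, set()):
            matching += 1
    return matching

In practice one would recover the cover itself (not just its size) by the usual alternating-path argument, or solve the equivalent minimum-cut formulation with any max-flow routine.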
advantage key optimizations first code produced results employs highly simplified variant scheme detailed section computing barcode templates variant chooses options edge crossing less efficient proposed section secondly code stores columns sparse matrices using linked lists known smarter choice data structure storing columns lead major speedups persistence computation finally mentioned section current implementation runs single processing core expect see substantial speedups parallelizing computation barcode templates proposed section computations run single slow mhz core server ram however computations reported fraction memory required example rivet used approximately ram largest computation noisy circle data data sets consider point clouds sampled noise annulus center point cloud fig specifically points data set sampled randomly thick annulus plane sampled randomly square containing annulus define codensity function taking equal number points within fixed distance construct bifiltration rips described section taking metric euclidean distance scale parameter complexes capped value slightly larger inner diameter annulus computing graded betti numbers table displays average runtimes computing graded betti numbers row gives averages three point clouds specified size example generated three point clouds points average number resulting bifiltrations computing homology required working bifiltration average size simplices building augmented arrangement table displays average runtimes seconds build row gives averages three point clouds specified size average runtimes computing four different coarsenings displayed table similarly table displays average runtimes seconds build table average runtimes computing bigraded betti numbers noisy circle data points simplices runtime sec table runtimes computing augmented arrangement homology noisy circle data runtimes seconds points simplices bins table runtimes computing augmented arrangement homology noisy circle data runtimes seconds points simplices bins conclusion paper introduced rivet practical tool visualization persistence modules rivet provides interactive visualization barcodes affine slices persistence module well visualizations dimension function bigraded betti numbers module presented mathematical theory visualization paradigm centered around novel data structure called augmented rangement also introduced analyzed algorithm computing augmented arrangements described several strategies improving runtime algorithm practice addition presented timing data preliminary experiments computation augmented arrangements though yet incorporate several key optimizations code results demonstrate current implementation already scales well enough used study bifitrations millions simplices implementation work expect rivet scale well enough used many settings persistence currently used exploratory data analysis several natural directions pursue beyond continuing improve implementation rivet would like apply rivet exploratory analysis scientific data develop statistical foundations data analysis methodology adapt rivet paradigm setting homology develop tool hierarchical clustering interactive visualization bidendrograms extend rivet methodology generalized persistence settings cosheaves vector spaces cosheaves persistence modules hope rivet prove useful addition existing arsenal tda tools regardless ultimately fares regard however feel broader program developing practical computational tools multidimensional persistence promising direction tda hope work draw 
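A minimal sketch of how a benchmark of this kind can be generated: sample points from a thick annulus plus uniform background noise, let the codensity of a point be the number of points within a fixed radius, and bigrade each Rips edge by (codensity of its endpoints, its length), discarding edges longer than the cap. The radii, counts and the sign/direction convention for the codensity parameter are placeholders, not the exact settings used in the experiments reported above.

import numpy as np

rng = np.random.default_rng(0)

def noisy_annulus(n_signal=450, n_noise=50, r_in=1.0, r_out=1.5, box=2.0):
    theta = rng.uniform(0.0, 2.0 * np.pi, n_signal)
    radius = rng.uniform(r_in, r_out, n_signal)
    signal = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])
    noise = rng.uniform(-box, box, size=(n_noise, 2))
    return np.vstack([signal, noise])

def codensity(points, radius=0.3):
    # number of other points within the fixed radius (large = dense region)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    return (dist <= radius).sum(axis=1) - 1

def edge_bigrades(points, gamma, max_scale=2.0):
    # Bigrade at which the edge {i, j} enters the function-Rips bifiltration:
    # (max of the endpoint codensity parameters, the Euclidean length of the edge).
    # Edges longer than max_scale are discarded, as with the cap described above.
    grades = {}
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            d = float(np.linalg.norm(points[i] - points[j]))
            if d <= max_scale:
                grades[(i, j)] = (max(gamma[i], gamma[j]), d)
    return grades

pts = noisy_annulus()
grades = edge_bigrades(pts, codensity(pts))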
attention possibilities believe room diverse set approaches acknowledgements paper benefited significantly conversations john carlsson pointline duality discrete optimization also thank ulrich bauer magnus botnan dmitriy morozov francesco vaccarino helpful discussions bulk work presented paper carried authors postdoctoral fellows institute mathematics applications funds provided national science foundation work completed mike visiting raul rabadan lab columbia university thanks everyone ima columbia support hospitality appendix details rivet interface expanding section provide detail rivet graphical interface discussed section module input rivet free implicit representation rivet uses choose bounds line selection window persistence diagram window explain let denote greatest lower bound least upper bound respectively order avoid discussion uninteresting edge cases assume choice bounds line selection window take lower left corner upper right corner line selection window default line selection window drawn scale toggle switch rescales normalizes window drawn square screen parameterization lines plotting persistence diagrams next explain given line rivet represents persistence diagram let first assume line selection window unnormalized treat case normalized end section translating indices necessary may assume without loss generality noted section plot persistence diagram need first choose parameterization choose unique isometry finite positive slope unique point intersection union portions coordinate axes line line intrinsic choice bounds line selection window one could instead take lower left upper right corners window greatest lower bound least upper bound respectively set however feel typical tda application extrinsic bounds line selection window proposed provide intuitive choice scale choice bounds persistence diagram window bounds persistence diagram window chosen statically depending choice rivet chooses viewable region persistence diagram representation points outside viewable region persistence diagram may choices contains intervals falls outside viewable region persistence diagram indeed coordinates points persistence diagram become huge finite slope approaches thus persistence diagrams include information top diagram found typical persistence diagram visualizations record points persistence diagram fall outside viewable region main square region persistence diagram two narrow horizontal strips separated dashed horizontal line upper strip labeled inf lower labeled inf higher strip plot point interval lower strip plot point interval right two horizontal strips number separated strip vertical dashed line upper number count intervals lower number count intervals persistence diagrams rescaling choose normalize line selection window rivet also normalizes persistence diagrams correspondingly computes respective affine normalizations normalization rivet chooses parameterizations lines computes bounds persistence diagram window exactly described unnormalized case taking input computing lists levsetj cell mentioned section efficiently compute lists levsetj cell use sweep algorithm section keep notation simple assume identity maps respectively assume start increasing respect colexicographical order case lemma apply sorting algorithm modify assumption hold sorting done log time sweep algorithm maintains linked list frontier pointers elements elements stored tptsmat entries frontier always strictly decreasing algorithm iterates rows grid top bottom list frontier initially empty updated row help 
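A minimal sketch of the parameterization of a line of finite positive slope described above, together with the push map onto the line: the basepoint is taken to be the intersection of the line with the union of the coordinate axes bounding the region x, y >= 0 (this exact convention, and the names below, are assumptions made only for the sketch).

import math

def line_param(m, b):
    # Parameterize the line L: y = m * x + b (with m > 0) by arc length from the
    # basepoint (0, b) if b >= 0, or (-b / m, 0) otherwise.
    base_x = 0.0 if b >= 0 else -b / m
    scale = math.hypot(1.0, m)               # arc length per unit increase of x

    def point_at(t):
        x = base_x + t / scale
        return (x, m * x + b)

    def push_param(a):
        # Parameter of the least point of L lying above a = (a1, a2) coordinatewise:
        # feasible points need x >= a1 and m * x + b >= a2, so take the smallest such x.
        x = max(a[0], (a[1] - b) / m)
        return (x - base_x) * scale

    return point_at, push_param

point_at, push_param = line_param(2.0, -1.0)
print(point_at(push_param((0.5, 1.0))))      # the join of (0.5, 1.0) onto the line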
tptsmat row empty update necessary row otherwise let rightmost entry row let column containing last element frontier also column element removed frontier replaced otherwise append end frontier algorithm inserts grj appropriate list levsetj since grj assumed increasing respect colexicographical order immediate access specifically added list levsetj leftmost element frontier lists levsetj maintained lexicographical order easy check grj desired algorithm stated pseudocode algorithm updating frontier row tptsmat takes constant time total cost updating frontier row must iterate frontier identify lists levsetj insert elements rows length frontier total cost iterations frontier inserting appropriate list levsetj takes constant time total cost insertions thus total number operations algorithm including cost initial sorting put right form log log algorithm algorithm building lists levsetj input grj represented list colexicographical order tptsmat output tptsmat updated levsetj grj list sorted lexicographical order initialize frontier empty linked list tptsmat entry row update frontier row let column containing rightmost element row last element frontier column remove last entry frontier append entry row column end frontier entry frontier add elements grades row lists levsetj last element frontier add grj levsetj else let element frontier add grj levsetj references atallah algorithms theory computation handbook crc press atiyah theorem application sheaves bulletin france bauer kerber reininghaus clear compress computing persistent homology chunks topological methods data analysis visualization iii pages springer bauer kerber reininghaus wagner phat persistent homology algorithms toolbox mathematical software icms volume lecture notes computer science pages springer berlin heidelberg bauer lesnick induced matchings barcodes algebraic stability persistence proceedings annual symposium computational geometry page acm biasotti cerri frosini giorgi new algorithm computing matching distance size functions pattern recognition letters fabri giezeman hert hoffmann kettner pion schirra geometry kernel cgal user reference manual cgal editorial board edition http bubenik silva scott metrics generalized persistence modules foundations computational mathematics pages carlsson topological pattern recognition point cloud data acta numerica may carlsson silva morozov zigzag persistent homology functions proceedings annual symposium computational geometry pages acm carlsson multiparameter hierarchical clustering methods classification tool research pages carlsson singh zomorodian computing multidimensional persistence algorithms computation pages carlsson zomorodian theory multidimensional persistence discrete computational geometry cerri fabio ferri frosini landi betti numbers multidimensional persistent homology stable functions mathematical methods applied sciences cerri fabio jablonski medri comparing shapes approximations matching distance computer vision chacholski scolamiero vaccarino combinatorial resolutions multigraded modules multipersistent homology arxiv preprint chazal glisse guibas oudot proximity persistence modules diagrams proceedings annual symposium computational geometry pages acm chazal guibas oudot stable signatures shapes using persistence proceedings symposium geometry processing pages eurographics association chazal geometric inference probability measures foundations computational mathematics pages chazal silva glisse oudot structure stability persistence modules arxiv preprint chazal silva 
oudot persistence stability geometric complexes geometriae dedicata chen kerber persistent homology computation twist proceedings european workshop computational geometry volume christofides analysis new heuristic travelling salesman problem report graduate school industrial administration cmu edelsbrunner harer stability persistence diagrams discrete computational geometry edelsbrunner morozov vines vineyards updating persistence linear time proceedings annual symposium computational geometry pages acm cormen leiserson rivest stein introduction algorithms mit press decomposition pointwise persistence modules journal algebra applications curry sheaves cosheaves applications dissertation university pennsylvania berg cheong van kreveld overmars computational geometry algorithms applications springer derksen weyman quiver representations notices ams edelsbrunner harer computational topology introduction american mathematical society edelsbrunner letscher zomorodian topological persistence simplification discrete computational geometry eisenbud commutative algebra view toward algebraic geometry volume springer science business media eisenbud geometry syzygies second course algebraic geometry commutative algebra volume springer science business media hoogeveen analysis christofides heuristic paths difficult cycles operations research letters kreuzer robbiano computational commutative algebra volume springer scala stillman strategies computing minimal free resolutions journal symbolic computation landi rank invariant stability via interleavings arxiv preprint lang algebra revised third edition graduate texts mathematics lesnick theory interleaving distance multidimensional persistence modules foundations computational mathematics lesnick wright computing multigraded betti numbers persistent homology modules cubic time preparation otter porter tillmann grindrod harrington roadmap computation persistent homology arxiv preprint schrijver theory linear integer programming john wiley sons toth rourke goodman handbook discrete computational geometry crc press wasserman statistics concise course statistical inference springer verlag webb decomposition graded modules proceedings american mathematical society zomorodian carlsson computing persistent homology discrete computational geometry notation index augmented arrangement page line arrangement page barcode persistence module page fibered barcode persistence module page box quadrant upper right corner lub page duality transforms page cell page continuous extension functor page template map page free implicit representation page flg floor function page grid function page dual graph page path page grw grade function set page ith homology functor page integers page coarseness persistence module page space affine lines slope page space affine lines finite slope page space affine lines finite positive slope page lifte lift map cell page lub least upper bound page multidimensional persistence module page dimension page integers page ord order map cell page set template points page set template points cell page pointwise finite dimensional page free implicit representation page pushl push map page poset category page rank rank invariant page union supports bigraded betti numbers page totally ordered partition page permutation page barcode template cell page ith graded betti number page map positive integers template points cell page poset category page | 0 |
hasan ali erkan jmti vol issue journal macrotrends technology innovation macrojournals automatic knot adjustment using dolphin echolocation algorithm curve approximation hasan ali erkan necmettin erbakan university school applied sciences department management information sciences selcuk university faculty engineering department computer engineering abstract paper new approach solve cubic curve fitting problem presented based algorithm called dolphin echolocation method minimizes proximity error value selected nodes measured using least squares method euclidean distance method new curve generated reverse engineering results proposed method compared genetic algorithm result new method seems successful keywords curve approximation cubic data parameterization dolphin echolocation algorithm knot adjustment introduction curve fitting classical problem computer aided geometric design example facto cad cam related graphic design industries geometric modeling areas parametric curves transformed rational similarly vector font modeling problems fonts often fitted practical applications distance target curve fitted curve must less predetermined tolerance resulting curve called approach euclidean distance method used measure value corresponding distance two curves hasan ali erkan jmti vol issue curve fitting problem problem curve fitting expressing target curve minimum tolerance curves target curve two three dimensional scope paper parameterization target data points convergence minimum error tolerance curves using automatically placed minimum control point constitute curve expressed equation ith control point main function curves main function curve given knot vector degree expressed equation information curves found methods data parameterization curves parametric curves target data points need parameterized curve fitting however calculation optimum data parameterization theoretically quite difficult different ways data parameterization used applications three methods uniform parameterization parameterization centripetal parameterization emerging researches based previous studies study centripetal parameterization method used euclidean distance minimization euclidean distance used calculate error target curve bspline fitted curve euclidean distance calculated equation ith data original dataset ith data fitted curve general hasan ali erkan jmti vol issue approach paper minimize distance express curve minimum control point time thus euclidean distance number control points treated together fitness function dolphin echolocation algorithm dolphin echolocation algorithm presented kaveh ferhoudi optimization algorithm inspired hunting principles bottlenose dolphins sonar waves dolphins explore entire search area specific effect hunt approach prey try focus target limiting number waves send limiting search algorithm implements search reducing distance target search space must sorted beginning search alternatives variable optimized must sorted ascending descending order alternatives one characteristic sorted according important one use technique example variable vector length laj forms columns alternatives matrix addition convergence curve used change convergence factor optimization process variation trend throughout iterations calculated equation probability randomly selected probability first iteration loopi number current iteration power rank curve loopsnumber total number iterations algorithm requires location matrix lnl variable number location count main steps dolphin echolocation discrete optimization 
follows create locations randomly dolphin current iteration calculated using equation fitness calculated every location calculate cumulative fitness according following dolphin rules number locations number variables find position jth column alternatives matrix name cumulative fitness jth variable selected alternative numbering alternatives ordering alternative matrix end end end hasan ali erkan jmti vol issue diameter influence neighbor affected cumulative fitness alternative recommended diameter search space fitness fitness ith location fitness defined best answers get higher value words goal optimization maximize fitness must calculated using reflective property adding alternatives near edges valid laj case distance alternative edge small alternatives appear mirror mirror placed edge small value added sequences order distribute probabilities uniformly search space chosen according way describing fitness best choice lower lowest fitness value achieved find best location loop call best location find alternatives assigned best location variables set zero another saying number variables number alternatives best location variable calculate probability choosing alternative alj endequation according end end assign probability alternatives selected variables best location distribute remaining probability alternatives according form number variables number alternatives best location calculate else next step locations according assigned probability alternate repeat steps maximum iteration number times end end end hasan ali erkan jmti vol issue automatic knot adjustment dolphin echolocation algorithm problem curve fitting fitted curve tried converge target curve minimum tolerance minimum control point case nodes must selected given points error tolerance number control points nearest curve minimum thus array bits expressed selected nodes thus alternatives variable location dolphin echolocation called solution solutions illustrated figure figure sample solution illustration example possible express points way control points calculated selected nodes aim dolphin echolocation maximizing fitness equation used fitness function curve fitting process dolphin echolocation algorithm follows create random solutions startup population calculate current iteration calculate fitness value possible solutions calculate cumulative fitness variables possible solution find best solution according maximum fitness set cumulative fitness solutions variables variables equal variables best solution calculate probabilities alternatives variable solutions set probabilities alternatives equal variables best solution probability current iteration hasan ali erkan jmti vol issue find possible solutions used next iteration probabilities alternatives variable repeat steps number iteration times experimental results experimental curve target curve points approximation results degree curves shown table genetic algorithm iteration rmse euclidean number distance control point dolphin echolocation algorithm fitness rmse euclidean number distance control point fitness table experimental results different number iteration plotted experimental results shown figure figure original curve genetic algorithm dolphin echolocation algorithm hasan ali erkan jmti vol issue epitrochoid curve target curve points curve equation follows cos cos sin sin parameters approximation results degree curve curve calculated table genetic algorithm iteration rmse euclidea distance number control point dolphin echolocation algorithm fitnes rmse euclidea 
distance number control point fitness table experimental results different number iteration hasan ali erkan jmti vol issue plotted experimental results shown figure figure original curve genetic algorithm dolphin echolocation algorithm archimedean spiral target curve points curve equation follows cos sin approximation results degree curve curve calculated shown table genetic algorithm iteration rmse euclidean distance dolphin echolocation algorithm number fitness rmse euclidean number distance control control point point table experimental results different number iteration fitness hasan ali erkan jmti vol issue plotted experimental results shown figure figure original curve genetic algorithm dolphin echolocation algorithm vivaldi curve target curve curve points curve equation follows cos sin sin approximation results degree curve curve calculated shown table genetic algorithm iteration rmse euclidea distance number control point dolphin echolocation algorithm fitness rmse euclidea distance table experimental results different number iteration plotted experimental results shown figure number control point fitness hasan ali erkan jmti vol issue figure original curve genetic algorithm dolphin echolocation algorithm conclusion feature work paper addresses problem curve fitting noisy data points using curves given set noisy data points goal compute parameters approximating polynomial curve best fits set data points sense difficult overdetermined continuous multimodal multivariate nonlinear optimization problem proposed method solves applying dolphin echolocation algorithm experimental results show presented method performs well fitting data points high degree accuracy comparison popular previous approach genetic algorithm problem also carried shows method outperforms previous approaches examples discussed paper future work includes extension method families curves nurbs parametric curves extension results case explicit surfaces also part future work references park lee curve fitting based adaptive curve refinement using dominant points design boor boor boor boor practical guide splines vol new york piegl tiller nurbs book springer science business media park choosing nodes knots closed curve interpolation point data computeraided design vassilev fair interpolation approximation energy minimization points insertion design wang cheng barsky energy interproximation design kaveh farhoudi new optimization method dolphin echolocation advances engineering software | 9 |
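A minimal sketch of the knot-selection scheme described above: candidate knots are encoded as a 0/1 vector, the convergence factor grows from its initial value to 1 over the iterations, and the fitness rewards both a small fitting error and a small number of selected knots (hence control points). The formula for the convergence factor is our reading of the convergence-curve equation referenced above, and the fitness weights are placeholders, not the paper's exact choices.

import random

def predefined_probability(pp1, loop_i, loops, power=1.0):
    # Convergence factor for the current iteration: equals pp1 at the first
    # iteration and reaches 1 at the last one.
    if loops <= 1:
        return 1.0
    return pp1 + (1.0 - pp1) * (loop_i ** power - 1.0) / (loops ** power - 1.0)

def sample_knot_selection(probabilities):
    # One candidate solution: for every candidate knot choose selected / not
    # selected according to its current probability.
    return [1 if random.random() < p else 0 for p in probabilities]

def fitness(selection, error_of, penalty=0.01):
    # Toy fitness: large when both the Euclidean/RMSE fitting error of the curve
    # built on the selected knots and the number of selected knots are small.
    # error_of(selection) is assumed to return that fitting error.
    return 1.0 / (1.0 + error_of(selection) + penalty * sum(selection))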
integer programming constraint matrix fedor fahad saket nov department informatics university bergen norway technische wien vienna austria ramanujan institute mathematical sciences hbni chennai india saket abstract classic integer programming problem objective decide whether given matrix integer solving important step numerous algorithms important obtain understanding precise complexity problem function natural parameters input two significant results line research time algorithms number constraints constant papadimitriou acm corresponding constraint matrix constant cunningham geelen ipco paper prove matching upper lower bounds corresponding constant lower bounds provide evidence algorithm cunningham geelen probably optimal also obtain separate lower bound providing evidence algorithm papadimitriou close optimal introduction classic integer programming problem input integer matrix objective find integer one exists solving problem denoted important step numerous algorithms important obtain understanding precise complexity problem function natural parameters input papadimitriou showed solvable time instances number constraints constant proof consists two steps first step combinatorial showing entries solution also solution second algorithmic step shows solution maximum entry problem solvable time particular matrix happens algorithm runs time max natural question therefore whether algorithm papadimitriou improved significantly general particular case first theorem provides conditional lower bound indicating significant improvements unlikely precise prove following theorem theorem unless exponential time hypothesis eth fails matrix solved time log max even constraint matrix entry feasible solution eth conjecture solved time formulas due theorem simple dynamic programming algorithm maximum entry solution well constraint matrix bounded already close optimal fact constraint matrix lower bound asymptotically almost matches running time papadimitriou algorithm hence conclude obtaining significant improvement algorithm papadimitriou matrices least hard obtaining time algorithm fact observe based setting parameters lower bound rules several interesting running times instance immediately get lower bound continuing quest faster algorithms cunningham geelen suggested new approach solving utilizes branch decomposition matrix motivated fact result papadimitriou interpreted result matrices constant rank parameter upper bounded rank plus one robertson seymour introduced notion branch decompositions corresponding notion graphs generally matroids branch decompositions immense algorithmic significance numerous problems solved polynomial time graphs matroids constant branchwidth matrix denotes matroid whose elements columns whose independent sets precisely linearly independent sets columns postpone formal definitions branch decomposition till next section matrix cunningham geelen showed constant solvable time theorem cunningham geelen matrix given together branch decomposition column matroid width solvable time max upper bounds matrix theorem matrix lower bounds log algorithm eth theorem even matrix algorithm seth theorem even matrix algorithm seth theorem even matrix figure summary lower bound results comparison upper bound results number variables constraints respectively denotes column matroid denotes bound largest entry fact also show assumption unavoidable without assumptions bounded domain variables setting constraint matrix allowed negative values fact even restricted branchwidth column matroid close 
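A minimal sketch of a dynamic program of the kind alluded to above (not necessarily the exact bookkeeping of the cited algorithm): once a bound D on the entries of some feasible solution is known, feasibility of Ax = b over integers 0 <= x_i <= D can be decided by sweeping over the variables and maintaining the set of reachable partial sums, pruned to a box that no partial sum of a genuine solution can leave. The running time is polynomial in the box volume, i.e. pseudo-polynomial in the entries, matching the flavour of the bounds discussed above.

def ilp_feasible(A, b, D):
    # A: list of n columns, each a tuple of m integers; b: tuple of m integers.
    # Any partial sum of a solution with 0 <= x_i <= D has k-th entry bounded in
    # absolute value by D * sum_j |A_j[k]|, so pruning to that box is safe.
    m = len(b)
    box = [D * sum(abs(col[k]) for col in A) for k in range(m)]

    def in_box(v):
        return all(abs(v[k]) <= box[k] for k in range(m))

    reachable = {tuple([0] * m)}
    for col in A:
        nxt = set()
        for s in reachable:
            for x in range(D + 1):            # candidate value of this variable
                v = tuple(s[k] + x * col[k] for k in range(m))
                if in_box(v):
                    nxt.add(v)
        reachable = nxt
    return tuple(b) in reachable

print(ilp_feasible([(1, 0), (1, 1), (0, 2)], (2, 3), 2))   # True: x = (1, 1, 1)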
inspection instances construct reduction shows column matroids resulting constraint matrices fact direct sums circuits implying even bounded parameter closely related notion linear code parameter commonly used coding theory matrix computing column matroid equivalent computing linear code generated roughly speaking column matroid permutation columns matrix obtained applying every dimension subspace obtained taking intersection subspace spanned first columns subspace spanned remaining columns value parameter always least value result upper bounds complexity terms translate upper bounds terms larger parameter rank number constraints lower bounds complexity terms translate lower bounds terms smaller parameter motivated fact study question designing optimal time algorithm column matroid constant first obtain following upper bound theorem matrix given together path decomposition column matroid width solvable time max mentioned earlier constant instances also holds constant instances hence assumption unavoidable well furthermore proof theorem hard fact almost identical proof theorem upper bound becomes really interesting placed context compared tight lower bounds provide next two theorems form main technical part paper theorems provide tight conditional subject strong eth lower bounds matching running time algorithm theorem see figure strong eth seth conjecture solved time formulas constant eth seth first introduced work impagliazzo paturi built upon earlier work impagliazzo paturi zane obtain following lower bounds first result shows relax factor theorem even allow running time arbitrary function depending second result shows similar lower bound terms instead put together results imply matter much one allowed compromise either bound unlikely algorithm theorem improved theorem unless seth fails even constraint matrix solved time function max column matroid theorem unless seth fails even constraint matrix solved time function max column matroid although proofs lower bounds similar structure believe sufficiently many differences proofs warrant stating separately finally since matroid never exceeds lower bounds hold parameter interest chosen column matroid well seth algorithm constraint matrices denotes branchwidth column matroid almost matching upper bound theorem related work currently eth commonly accepted conjecture serves basic tool used establishing asymptotically optimal lower bounds various parameterized exact exponential algorithms consensus seth hypothesis already played crucial role recent spectacular rapid development analyses polynomial parameterized exact exponential algorithms particular seth used establish conditional tight lower bounds number fundamental computational problems including diameter sparse graphs dynamic connectivity problems frechet distance computation string editing distance dynamic programming graphs bounded steiner tree subset sum finding longest common subsequence dynamic time warping distance matching regular expressions overview applications eth seth refer surveys well chapter work extends line research adding fundamental problem classes problems organization paper remaining part paper organized follows main technical part paper devoted proving theorem theorem therefore set requisite preliminary definitions begin section prove theorem first part section contains overview reductions could helpful reader navigating paper prove theorem section theorem section completing results constant theorem section preliminaries assume reader familiar basic definitions linear 
algebra matroid theory graph theory notations use denote set non negative integers real numbers respectively positive integer use denotes sets respectively convenience say two vectors use denote ith coordinate write often use denote whose length clear context matrix denote submatrix obtained restriction rows indexed columns indexed matrix write ith column say multiplier column matroids notion graphs implicitly matroids introduced robertson seymour let matroid universe set family independent sets use denote rank function maxs connectivity function defined matrix use denote case connectivity function following interpretation define span span set columns restricted span subspace spanned columns easy see dimension equal tree cubic internal vertices degree branch decomposition matroid universe set cubic tree mapping maps elements leaves let edge forest consists two connected components thus every edge corresponds partitioning two sets leaves leaves width edge width branch decomposition maximum edge width maximum taken edges finally minimum width taken possible branch decompositions matroid defined follows let remind caterpillar tree obtained path attaching vertices paths leaves matroid minimum width branch decomposition cubic caterpillar let note every mapping elements matroid leaves cubic caterpillar correspond ordering jeong kim oum gave constructive tractable algorithm construct path decomposition width column matroid given matrix eth seth let infimum set constants exists algorithm solving variables clauses time hypothesis eth strong hypothesis seth formally defined follows eth conjectures seth proof theorem section prove unless seth fails matrix solved time function max column matroid subsection give overview reductions subsection give detailed proof theorem overview reductions prove theorems giving reductions parameters reduced instances required obey certain strict conditions example reduction give prove theorem must output instance column matroid constraint matrix constant similarly reduction used prove theorem need construct instance largest entry target vector upper bounded constant stringent requirements parameters make reductions quite challenging however reductions seth take super polynomial even take time number variables instance freedom avail exponential time reductions used crucially proofs theorems give overview reduction used prove theorem let instance variables clauses given fixed constant construct instance satisfying certain properties since every different viewed family instances particular main technical lemma following lemma let instance variables clauses let fixed integer time construct instance following properties satisfiable feasible matrix dimension column matroid largest entry lemma proof theorem follows following observation algorithm solving time use algorithm refute seth particular given instance choose appropriate depending construct instance run careful choice imply faster algorithm refuting seth formally choose integer total running time test whether satisfiable time require construct plus time required solve constructed instance time required test whether satisfiable constant depending choice important note utility reduction described lemma extremely sensitive value numerical parameters involved particular even blows slightly say largest entry blows slightly say calculation give desired refutation seth thus challenging part reduction described lemma making work strict restrictions relevant parameters stated lemma reduction need obtain constraint matrix small 
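A minimal sketch of the quantity behind the pathwidth of a column matroid just defined: for a fixed ordering of the columns and every cut position, the relevant width is the dimension of the intersection of the span of the first i columns with the span of the remaining ones, computable from ranks as dim(U) + dim(W) - dim(U + W). Numerical rank over the rationals is used only for illustration (exact arithmetic would be needed in general), and the usual plus/minus-one conventions in the definition of width are ignored here.

import numpy as np

def cut_dimension(A, i):
    # dim( span(columns 0..i-1)  intersected with  span(columns i..n-1) )
    # = rank(left) + rank(right) - rank(whole matrix).
    left, right = A[:, :i], A[:, i:]
    return (np.linalg.matrix_rank(left)
            + np.linalg.matrix_rank(right)
            - np.linalg.matrix_rank(A))

def width_of_ordering(A):
    # Maximum cut dimension over all cut positions for the given column order;
    # the pathwidth of the column matroid minimizes this over all orderings
    # (here only the ordering in which A is given is evaluated).
    n = A.shape[1]
    return max((cut_dimension(A, i) for i in range(1, n)), default=0)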
important first step towards understanding matrix small looks like first give intuitive description structure matrices let matrix small let column matroid matrix column matroid pictorial representation matrix figure comparison low matrix let denote set columns vectors whose index first columns let denote set columns index strictly greater max dimhspan span hence order obtain bound pathwidth sufficient bound dimhspan span every consider example matrix given figure clearly reduced instance constructed constraint matrix appropriate extension form replaced submatrix order see fig pictorial representation construction used lemma takes input instance variables fixed integer outputs instance satisfies four properties lemma let denote set variables input cnfformula purposes present discussion assume divides partition variable set blocks size let denote set assignments variables corresponding set clearly size upper bounded denote assignments construct matrix view assignments different assignment clause words separate sets column vectors constraint matrix corresponding different pairs clause block partition values set columns based assignments clause based clause assignments total columns corresponding set columns corresponding set columns corresponding together forms bigger block columns denoted columns appears consecutively words clauses partition set columns columns parts occur consecutively thus identify column matrix pair pair refer assignment part pair values covered specific consecutive rows rows divided parts according roles reduction first rows comprise predecessor matching part middle row called evaluation part rows evaluation part comprise successor matching part entries row corresponding evaluation part get values depending whether assignment part pair satisfies matrix target vector constructed way feasible solutions set set columns hence setting coordinate corresponds choosing particular column reduction use selector gadget enforce feasible solution choose exactly one column set columns corresponding pair corresponds choosing column identified thus results choosing assignment variables set note implies choose assignment clause way might choose assignments corresponding different clauses however backward direction proof important choose assignment clause ensure selected assignment variables towards assign values columns way assignments chosen feasible solution particular block across different clauses choosing two columns one set columns corresponding columns feasible solution would imply columns correspond one particular assignment case say two columns consistent enforce consistencies sequential manner block make sure two columns chosen among columns corresponding consistent opposed checking consistency every pair thus sense consistencies propagate propagation consistencies realized rows corresponding predecessor matching part successor matching part rows corresponding predecessor matching part successor matching part rows corresponding successor matching part predecessor matching part predecessor matching part well successor matching part contain designated rows block variables handle consistencies recall denotes set assignments furthermore assignments denoted thus identify assignment integer values assigned manner designated places predecessor matching part well successor matching part enabling argue consistency largest entry upper bounded furthermore idea making consistency sequential manner also allows bound column matroid proof technique theorem similar theorem achieved modifying 
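The bookkeeping behind the column layout just described can be made concrete as follows: the variables are split into blocks of size delta, each block has 2^delta assignments, and every (clause, block) pair owns two columns per assignment, with the columns of a clause appearing consecutively. The sketch below only shows one possible indexing consistent with that description; the exact ordering used in the construction, and the entries of the columns, are not reproduced here.

from itertools import product

def block_assignments(delta):
    # all 2^delta 0/1 assignments of one block of delta variables
    return list(product((0, 1), repeat=delta))

def column_index(j, i, a, copy, num_blocks, delta):
    # Global index of the column for clause j, block i, assignment number a and
    # copy in {0, 1}, assuming columns are grouped consecutively by clause, then
    # by block, then by assignment (an assumed layout, used only for illustration).
    per_pair = 2 * (2 ** delta)               # columns owned by one (clause, block) pair
    per_clause = num_blocks * per_pair
    return j * per_clause + i * per_pair + 2 * a + copy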
matrix constructed reduction described lemma largest entry values represented binary string length remove row say row indexed entries greater replace rows value bit binary representation modification reduces largest entry increases constant approximately finally set entries concludes overview reductions proceed detailed exposition detailed proof theorem towards proof theorem first present proof main technical lemma lemma restate sake completeness lemma let instance variables clauses let fixed integer time construct instance following properties satisfiable feasible matrix dimension column matroid largest entry let instance variable set let fixed constant given statement lemma construct instance follows construction let without loss generality assume divisible otherwise add dummy variables divisible divide blocks let block exactly assignments denote assignments create matrices one clause matrices submatrices constraint matrix clause create matrix block possible assignments variables allocate columns assignment two columns corresponding first columns correspond assignments second columns correspond assignments etc matrices first define indices matrices slightly different structure compared matrices define separately values defined follows assignment identified number defines entries four column numbered four column numbered rows partitioned parts part composed first rows called predecessor matching part part composed row indexed called evaluation part part composed last rows called successor matching part see fig predecessor matching part defined evaluation part defined satisfies otherwise predecessor matching part evaluation part successor matching part evaluation part successor matching part parts parts figure parts successor matching part defined entries defined set zero describing construction provide brief description certain desirable properties possessed designated set columns per pair indexed ensures one columns set chosen feasible solution forced putting corresponding coordinate vector construction add zeros entries row corresponding row outside submatrix guarantees exactly one chosen feasible solution purpose ensure consistency column selected purpose ensures consistency column selected construct matrix way row indexed row indexed equal suppose row indexed ensure choose consistent columns columns sum values coordinate selected columns equal set target vector assignment two designated columns indexed reason creating two columns instead one per following coordinate target vector corresponding row contains row indexed set satisfying assignment one partial assignments assignments restricted different blocks may satisfy clause among pairs columns corresponding satisfying partial assignments feasible solution choose first column pair one partial assignment assignment block satisfies feasible solution choose second column corresponding equations make sure entries corresponding coordinate set chosen columns feasible solution add hence least one selected column would correspond assignment block satisfying clause figure let assignments entries defined according colored red blue respectively matrix left represents obtained deleting yellow colored portion left matrix matrix right represents sometimes helpful focus positions containing elements found two matrices bottom figure matrices matrix created exception remove predecessor matching part see fig matrix created exception remove rows numbered illustration given fig formally entries defined given define entries satisfies otherwise satisfies 
otherwise entries defined set zero matrix vector explain construct constraint matrix vector would serve instance follows simplify notation using instead instead matrices disjoint submatrices cover non zero entries informally submatrices form chain rows corresponding successor matching part rows predecessor matching part pictorial representation found fig formally matrix let every let let put matrix entries belonging submatrices set zero completes construction define vector let words contains indices rows alternating rows successor matching part thus alternating rows predecessor matching part belong refer fig entries defined otherwise completes construction matrix vector together make required instance correctness prove satisfiable integer vector start notations partition set columns parts already defined sets one part per clause set columns associated divide equal parts one per variable set parts words set columns associated tuple set divided parts size two one per tuple two columns associated tuple indexed also put number columns lemma formula satisfiable exists proof suppose satisfiable let satisfying assignment exists union clause satisfied least one assignments fix arbitrary assignment satisfies clause let function fixes assignments clause assignment satisfies clause every define prove let vector defined setting otherwise entry multiplier column say entry corresponds column tuple one entry among two entries corresponding columns associated set second column corresponding set otherwise first column corresponding set entries set zero also note every let notice among columns exactly one column indexed belongs column corresponds one two columns corresponding need following auxiliary claims claim every proof first consider case let follows prove show observe consider case let case finally claim every proof proof claim similar proof claim let show recall let number rows prove need show otherwise consider following exhaustive cases case let notice every implies every claims claims case partition consider based parts let case let notice implies hence claims claims case let construction implies consider two cases based recall function satisfies equation definition using fact satisfies hence equation definition definition using fact satisfies case let definition every let hence lemma column matroid proof recall number columns number rows prove sufficient show dimhspan span idea proving equation based following observation let exists dimension span span thus prove construct corresponding set show cardinality proceed details let column vectors let let need show dimhspan span let exists know partitioned parts fix let let max recall definition sets construction matrix way constructed matrix every every vector also every implies partition parts parts defined follows otherwise otherwise otherwise otherwise claim proof entries covered disjoint hence claim follows claim proof claim trivially follows let let notice every claim every consider vector notice let construction thus every implies claim proof claim holds assume consider let notice hence claim consider let construction completes proof claim claim proof claim trivially holds let consider let notice hence claim consider vector notice let construction hence shown implies claim proof consider case consider claim suppose let construction thus contradicts assumption suppose let construction thus contradicts assumption hence case proved assume consider let notice hence claim also notice hence claim potential let definition hence conclude possible proof 
similar case claim suppose let construction thus contradicts assumption suppose let construction thus contradicts assumption hence case well completes proof claim therefore claims completes proof lemma proof theorem prove theorem assuming fast algorithm use give fast algorithm refuting seth let instance variables clauses choose sufficiently large constant holds use reduction mentioned lemma construct instance solution satisfiable reduction takes time let constraint matrix dimension largest entry vector exceed assuming instance constraint matrix solvable time maximum value entry constants solvable time constant subsumed term whether satisfiable hence total running time testing completes proof theorem proof sketch theorem section prove matrix solved time function unless seth fails max column matroid section gave reduction however reduction values constraint matrix target vector large number variables constant let number clauses section briefly explain get rid large values cost making large still bounded construct matrix described section rows contain values strictly greater values rows indexed set values greater alternate rows colored portion except last rows figure recall largest value number less equal represented binary string length construct new matrix replacing row whose index set rows value write binary representation column corresponding newly added rows replace row rows value bit binary representation let number rows target vector defined completes construction reduced instance correctness proof reduction using arguments similar used correctness lemma lemma column matroid proof sketch proof similar proof lemma define like section fact rows rows obtained process explained construct need show dimhspan span number columns proof proceeds bounding number indices exist vectors arguments similar ones used proof lemma show corresponding set indices subset recall partition lemma partition parts notice set rows covers values strictly greater set obtained respectively process mentioned construct row replaced rows allows bound following terms using fact system inequalities show dimhspan span completes proof lemma proof theorem follows lemma correctness reduction similar arguments proof theorem proof theorem section sketch proof cunningham geelen theorem adapted prove theorem recall path decomposition width obtained time function making use algorithm jeong however know path decomposition constructed time assumption path decomposition given essential roughly speaking difference proof parameterized branchwidth operation merge operation construct new set partial solutions vectors two already computed sets sizes thus construct new set vectors one possible pairs vectors sets takes time roughly parameterization new partial solution set constructed two sets time one set contains vectors second contains vectors allows construct new set time roughly recall define span span key lemma proof theorem following lemma let number vectors prove theorem without loss generality assume columns ordered way every dimhspan span let obtained appending end dimhspan span use dynamic programming check whether following conditions satisfied let set vectors exists solution initially algorithm computes lemma fact ith column vector algorithm computes increasing order outputs yes computed already computed sets notice exist algorithm enumerates vectors satisfying condition vector included satisfy conditions since lemma number vectors satisfying condition hence exponential factor required running time follows provides bound 
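A minimal sketch of the merge operation described above: two already computed sets of partial solution vectors are combined by taking all pairwise sums and keeping only the candidates that survive a feasibility filter, which stands in for the bounded family of vectors from the key lemma. With sets of sizes s1 and s2 this costs on the order of s1 * s2 vector additions, which is the bottleneck in the running-time analysis.

def merge(partials1, partials2, keep):
    # partials1, partials2: sets of integer tuples of the same length;
    # keep(v) decides whether the combined vector v is still a valid candidate.
    out = set()
    for u in partials1:
        for v in partials2:
            w = tuple(a + b for a, b in zip(u, v))
            if keep(w):
                out.add(w)
    return out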
claimed exponential dependence running time algorithm bound polynomial component running time follows exactly arguments proof theorem section prove unless eth fails solved time log max even constraint matrix entries feasible solution proof reduction sat formula variables clauses create equivalent instance integer matrix order largest entry reduction work polynomial time let input sat let set variables set clauses create number vectors length two per variable two per clause make two vectors vxi vxi defined follows set otherwise otherwise vxi vxi put vxi vxi define vxi vxi every clause define two vectors vcj follows define vcj otherwise set vcj every put vcj matrix constructed using vectors columns columns ordered vxn vxn vcm vcm vector defined follows lemma formula satisfiable feasible proof suppose formula satisfiable let satisfying assignment define prove otherwise otherwise completes definition first entries entries last entries defined follows every define number literals set number literals set otherwise number literals set number literals set otherwise proceed prove indeed feasible solution claim proof towards need show every consider following exhaustive cases case fact clause literals along definition implies number entries set also indices set one correspond literal definition fact satisfying assignment let hence vci vci construction case definition vectors vxj vxj definition exactly one set implies construction every therefore case let construction set zero set implies completes proof claim converse direction statement lemma suppose exists need show satisfiable first argue exactly one set set follows fact iii define assignment prove satisfying assignment define claim satisfies clauses consider clause since feasible solution implies let notice construction distinct columns ith column vector construction entries row numbered proved notice implies least one among corresponding entry row implies satisfies completes proof lemma show every feasible solution largest entry notice implies feasible solution construction exists along fact implies every feasible solution hence largest entry feasible solution following lemma completes proof theorem lemma algorithm runs time eth fails log max proof sparsification lemma know sat variables clauses constant solved time time suppose algorithm alg running time log formula variables clauses create instance discussed section polynomial time matrix dimension largest entry rank lemma run alg test whether satisfiable takes time log hence refuting eth conclusion conclude several open questions first lower bounds constraint matrix tight parameterization gap lower upper bounds parameterization closing gap first natural question proof theorem consists two parts first part bounds number potential partial solutions corresponding edge branch decomposition tree second part dynamic programming branch decomposition using fact number potential partial solutions bounded bottleneck algorithm following subproblem given two vector sets partial solutions set size need construct new vector set partial solutions set size vector sum vector vector thus construct new set vectors one possible pairs vectors sets takes time roughly tempting approach towards improving running time particular step could use fast subset convolution matrix multiplication tricks work well join operations dynamic programming algorithms tree branch decompositions graphs see also chapter unfortunately reason suspect tricks may help matrices solving subproblem time would imply solvable time believed 
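The exact vectors of the reduction are not legible above, but its flavour can be illustrated with a generic SAT-to-ILP feasibility encoding: one 0/1 column per literal, an equation x_i + (not x_i) = 1 per variable, and per clause an equation (sum of its literals) - s_j = 1 with a nonnegative slack s_j. This is a standard textbook-style encoding, not the specific construction of the paper, which is additionally engineered to keep the rank and the entries of feasible solutions small.

def sat_to_ilp(num_vars, clauses):
    # Columns: x_1..x_n, notx_1..notx_n, s_1..s_m; rows: one per variable and one
    # per clause.  Clauses are lists of nonzero integers, where literal v means
    # x_|v|, negated when v < 0.  The formula is satisfiable iff A y = b has a
    # nonnegative integer solution y.
    n, m = num_vars, len(clauses)
    cols = 2 * n + m
    A = [[0] * cols for _ in range(n + m)]
    b = [1] * (n + m)
    for i in range(n):
        A[i][i] = 1                  # x_i
        A[i][n + i] = 1              # not x_i   (row i says x_i + not x_i = 1)
    for j, clause in enumerate(clauses):
        for lit in clause:
            v = abs(lit) - 1
            A[n + j][v if lit > 0 else n + v] += 1
        A[n + j][2 * n + j] = -1     # slack: (satisfied literals) - s_j = 1
    return A, b

A, b = sat_to_ilp(2, [[1, 2], [-1, 2]])      # (x1 or x2) and (not x1 or x2)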
unlikely problem asks whether given set integers contains three elements sum zero indeed consider equivalent version named defined follows given sets integers cardinality objective check whether exist solvable time well see theorem however problem equivalent time consuming step algorithm theorem integers input thought vectors observation rule existence algorithm solving constraint matrices time indicates interesting improvement running time would require completely different approach final open question obtain refined lower bound bounded rank recall constraint matrix algorithm papadimitriou contain negative values improving running time algorithm showing running time tight seth still interesting question references abboud backurs williams tight hardness results lcs sequence similarity measures proceedings annual symposium foundations computer science focs ieee computer society abboud williams popular conjectures imply strong lower bounds dynamic problems proceedings annual symposium foundations computer science focs ieee computer society backurs indyk edit distance computed strongly subquadratic time unless seth false proceedings annual acm symposium theory computing stoc acm regular expression patterns hard match proceedings annual symposium foundations computer science focs ieee computer society appear bringmann walking dog takes time frechet distance strongly subquadratic algorithms unless seth fails proceedings annual symposium foundations computer science focs ieee computer society bringmann quadratic conditional lower bounds string problems dynamic time warping proceedings annual symposium foundations computer science focs ieee computer society cook seymour tour merging via informs journal computing cunningham geelen integer programming constraint matrix proceedings international conference integer programming combinatorial optimization ipco vol lecture notes comput springer curticapean marx tight conditional lower bounds counting perfect matchings graphs bounded treewidth cliquewidth genus proceedings annual symposium discrete algorithms soda siam cygan dell lokshtanov marx nederlof okamoto paturi saurabh problems hard proceedings ieee conference computational complexity ccc ieee cygan fomin kowalik lokshtanov marx pilipczuk pilipczuk saurabh parameterized algorithms springer cygan kratsch nederlof fast hamiltonicity checking via bases perfect matchings proceedings annual acm symposium theory computing stoc acm cygan nederlof pilipczuk pilipczuk van rooij wojtaszczyk solving connectivity problems parameterized treewidth single exponential time proceedings annual symposium foundations computer science focs ieee dorn dynamic programming fast matrix multiplication proceedings annual european symposium algorithms esa vol lecture notes comput springer berlin fomin golovach lokshtanov saurabh almost optimal lower bounds problems parameterized siam computing fomin thilikos dominating sets planar graphs exponential siam computing gajentaan overmars class problems computational geometry comput parse trees monadic logic matroids combinatorial theory ser tutte polynomial matroids bounded combinatorics probability computing horn kschischang intractability permuting block code minimize trellis complexity ieee trans information theory impagliazzo paturi complexity computer system sciences impagliazzo paturi zane problems strongly exponential complexity computer system sciences jeong kim oum constructive algorithm matroids proceedings annual symposium discrete algorithms soda siam lokshtanov marx 
saurabh known algorithms graphs bounded treewidth probably optimal proceedings annual symposium discrete algorithms soda siam lokshtanov marx saurabh lower bounds based exponential time hypothesis bulletin eatcs papadimitriou complexity integer programming acm williams possibility faster sat algorithms proceedings annual symposium discrete algorithms soda siam robertson seymour graph minors obstructions combinatorial theory ser roditty williams fast approximation algorithms diameter radius sparse graphs proceedings annual acm symposium theory computing stoc acm van rooij bodlaender rossmanith dynamic programming tree decompositions using generalised fast subset convolution proceedings annual european symposium algorithms esa vol lecture notes comput springer williams hardness easy problems basing hardness popular conjectures strong exponential time hypothesis invited talk proceedings international symposium parameterized exact computation ipec vol leibniz international proceedings informatics lipics dagstuhl germany schloss fuer informatik | 8 |
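Before turning to the next text, a brief illustration of the 3-SUM problem invoked in the open-question discussion preceding the reference list above may be useful. The sketch below is the textbook quadratic-time algorithm for the three-set variant (decide whether a + b + c = 0 with a, b, c drawn from three given sets); the function name and the use of a hash set are illustrative choices and are not taken from the cited works, and the hardness claim in the comment is the conjecture referred to in the text, not a theorem.

```python
def three_sum(A, B, C):
    """Decide whether some a in A, b in B, c in C satisfy a + b + c == 0.

    Textbook O(|A| * |B|) time / O(|C|) space approach: store C in a hash
    set, then test every pair (a, b) against it.  The 3-SUM conjecture
    referred to in the text asserts that no truly subquadratic algorithm
    exists for this problem.
    """
    lookup = set(C)
    return any(-(a + b) in lookup for a in A for b in B)


if __name__ == "__main__":
    print(three_sum([3, -1, 7], [2, 4], [-5, 10]))  # True:  3 + 2 + (-5) == 0
    print(three_sum([1, 2], [1, 2], [10, 20]))      # False
```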
dimension rigidity lattices semisimple lie groups jul cyril lacoste abstract prove lattice group isometries symmetric space type without euclidean factors virtual cohomological dimension equals proper geometric dimension introduction let discrete virtually group exist several notions dimension one virtual cohomological dimension vcd cohomological dimension torsionfree finite index subgroup due result serre depend choice subgroup see another one proper geometric dimension said model stabilizers action finite every finite subgroup fixed point space contractible note two models homotopy equivalent proper geometric dimension smallest possible dimension model two notions related fact always inequality vcd inequality may strict see instance construction leary nucinkis examples however also many examples virtually groups vcd instance degrijse martinezperez prove case large class groups containing finitely generated coxeter groups examples equality found paper prove equality holds groups acting isometries discretely finite covolume symmetric spaces type without euclidean factors theorem let symmetric space type without euclidean factors vcd every lattice isom cyril lacoste recall symmetric space type without euclidean factors form semisimple lie group assumed connected centerfree maximal compact subgroup isom aut aut lie algebra note group semisimple linear algebraic may connected authors prove theorem lattices classical simple lie groups heavily rely results techniques discuss applications theorem first note symmetric space model theorem yields corollary symmetric space type without euclidean factors isom lattice homotopy equivalent proper cocompact complex dimension vcd stress setting theorem considering full group isometries consequence able deduce equality virtual cohomological dimension proper geometric dimension lattices isom also groups abstractly commensurable two groups said abstractly commensurable finite index exists subgroup isomorphic obtain theorem corollary group abstractly commensurable lattice group isometries symmetric space type without euclidean factors vcd remark note general equality proper geometric dimension virtual cohomological dimension behaves badly commensuration instance fact exist virtully groups vcd vcd proves subgroup finite index vcd whereas commensurable vcd fact concrete exemples groups corollary fails among familiar classes groups instance authors prove finitely generated coxeter group vcd authors construct finite extensions certain coxeter groups vcd returning applications theorem obtain corollary lattices isom dimension rigid sense say virtually group dimension rigid vcd every group contains finite one index normal subgroup dimension rigidity lattices semisimple lie groups dimension rigidity strong impact behaviour proper geometric dimension group extensions obtain corollary cor corollary lattice group isometries symmetric space type without euclidean factors short exact sequence sketch strategy proof theorem begin note symmetric spaces riemannian nonriemannian play key role considerations time working ambient lie group fact convenient reformulate theorem follows main theorem let semisimple lie algebra vcd every lattice aut key ingredient proof main theorem hence theorem result meintrup basically asserts proper geometric dimension equals bredon cohomological dimension see theorem precise statement light theorem suffices prove two cohomological notions dimension vcd coincide authors noted prove equality vcd suffices ensure fixed point sets finite order 
elements small dimension see section details still authors checked case lattices contained classical simple lie groups use similar strategy prove main theorem lattices groups automorphisms simple lie algebras recall finite dimensional simple lie algebra either isomorphic one classical types one exceptional ones classical lie algebras complex ones real forms similarly exceptional lie algebras five complex ones twelve real forms cyril lacoste number brackets difference dimension adjoint group twice dimension maximal compact subgroup equals complex lie group illustrate basic steps proof main theorem example suppose aut lattice consider symmetric space psl prove vcd suffice establish dim dim rkr psl every finite order non central see lemma first note composition inner automorphism outer automorphism since every element order follows inner automorphism non trivial use results section general inner automorphism get holds reduced case trivial meaning order automorphism aut induced automorphism adjoint group gad psl still denoted also involution fixed point set riemannian symmetric space associated set fixed points notice quotient gad nonriemannian symmetric space symmetric spaces associated simple groups classified berger case gad psl obtain classification lie algebra either compact isomorphic appear even armed information check every involution leads main theorem argument sketched applied section complex simple lie algebras section real ones since arguments similar since complex case somewhat easier advise reader skip section first reading dealt simple lie algebras treat section semisimple case method simple algebras work first sight proof eventually simpler idea restrict irreducible lattices decomposed product show rational rank irreducible lattice lower real rank factor adjoint group meaning get much improved bound fact lead rapidly main theorem finally note proof main theorem construct concrete model dimension vcd prove dimension rigidity lattices semisimple lie groups existence however worth mentioning cases models known instance symmetric space admits deformation retract dimension vcd called retract see interesting groups acknowledgements author thanks dave witte morris help dieter degrijse interesting discussions juan souto useful advice instructive discussions preliminaries section recall basic facts definitions algebraic groups lie groups lie algebras symmetric spaces lattices arithmetic groups virtual cohomological dimension bredon cohomology algebraic groups lie groups algebraic group subgroup determined collection polynomials defined subfield polynomials chosen coefficients galois criterion see prop says defined stable galois group gal algebraic group ring note set elements entries algebraic group defined groups lie groups finitely many connected components fact zariski connected connected lie group whereas may connected case algebraic group lie group said simple every connected normal subgroup trivial semisimple every connected normal abelian subgroup trivial note semisimple algebraic group defined semisimple lie group connected semisimple complex linear lie group algebraic connected semisimple real linear lie group identity component group real points algebraic group defined recall two lie groups isogenous locally isomorphic meaning exist finite normal subgroups identity components isomorphic semisimple linear lie group isogenous product simple lie groups center semisimple algebraic group finite also case semisimple linear lie groups semisimple lie groups general quotient semisimple 
algebraic group see thm moreover defined cyril lacoste connected algebraic group torus diagonalizable meaning exists every diagonal torus particular abelian isomorphic algebraic group product defined said conjugating element chosen torus algebraic group subgroup torus said maximal strictly contained torus important fact two maximal tori conjugate defined two maximal tori conjugate element denoted rkk rkk dimension maximal torus rank refer basic facts algebraic groups lie groups lie algebras automorphisms recall lie algebra lie group set vector fields subalgebra subspace closed lie bracket ideal subalgebra lie algebra simple abelian ideals semisimple abelian ideals lie group simple resp semisimple lie algebra simple resp semisimple semisimple lie algebra isomorphic finite direct sum simple ones lie third theorem finite dimensional real lie algebra always case exists connected lie group unique covering whose lie algebra means exists unique simply connected lie group associated every connected lie group whose lie algebra quotient subgroup contained center particular gad unique connected centerfree lie group associated group gad called adjoint group adjoint group linear algebraic group whereas universal cover may linear see instance universal cover psl follows classification simple lie algebras correspondance simple lie groups lie algebra said compact adjoint group automorphism lie algebra bijective linear endomorphism preserves lie bracket group automorphisms denoted aut linear algebraic connected general lie group associated differential lie group automorphism automorphism conversely either simply connected connected centerfree automorphism comes automorphism case often identify two automorphisms denote letter inner automorphism derivative conjugation element denote group dimension rigidity lattices semisimple lie groups inn inner automorphisms normal subgroup aut also identity component aut isomorphic adjoint group gad semisimple subgroup inn finite index aut quotient aut finite group outer automorphisms moreover simple seen subgroup aut aut semidirect product inn aut inn see note even complex let aut group real automorphisms complex simple aut contains complex automorphism group autc subgroup index quotient generated complex conjugation see prop recall complex lie algebra real form real lie algebra whose complexification real form group fixed points conjugation meaning involutive real automorphism antilinear refer facts lie algebras automorphisms simple lie groups simple lie algebras outer automorphisms mentioned previous section classification simple lie groups isogeny simple lie algebras correspondance due cartan see simple lie groups every linear simple lie group isogenous either classical group one finitely many exceptional groups denote transpose matrix conjugate transpose consider particular matrices idn idq classical simple lie groups groups following list det det det similarly give list compact ones son sun cyril lacoste spn compact exceptional lie groups ones complex ones complexifications previous compact groups real forms refer definitions complete descriptions simply connected versions exceptional lie groups note paper always consider centerless versions notations usual simple lie algebra associated simple lie group denoted gothic caracters instance lie algebra note adjoint group psl classification simple lie algebras runs parallel simple lie groups following table summarizes structure outer automorphisms groups simple lie algebras see section denote symmetric group dihedral 
group dimension rigidity lattices semisimple lie groups others complex lie algebras odd even odd odd even odd even others real lie algebras table outer automorphisms groups simple lie algebras note isomorphisms corresponding ones real forms simple isomorphic symmetric spaces let lie group symmetric space space form involutive automorphism fixed points set said irreducible decomposed product algebraic point view irreducibility implies lie algebra maximal subalgebra lie algebra equivalently irreducibility implies identity component maximal connected lie subgroup identity component another point view symmetric spaces based lie algebras symmetric space lie algebra involutive automorphism induces involutive automorphism whose fixed point set lie algebra thus always associate symmetric space linear space called local symmetric space lie subalgebra called isotropy algebra cyril lacoste generally say isotropy algebra fixed point set involutive automorphism conversely lie algebra simply connected connected centerless lie group whose lie algebra isotropy algebra local symmetric space lifts symmetric space aut aut classification symmetric spaces correspondance local symmetric spaces done berger note simple complex aut involution either case also complex means conjugation real form note also real involution extended involution complexification isotropy algebra complexification give list isotropy algebras local symmetric spaces associated real forms table isotropy algebras real forms table organized follows first line give complex isotropy algebras fixed complex involution column consists real forms complex algebra first entry local symmetric spaces associated form instance form instance ones associated real form form instance following tables summarize classification simple lie algebras organized similar way dimension rigidity lattices semisimple lie groups table isotropy algebras real forms table isotropy algebras real forms table isotropy algebras real form table isotropy algebras real forms table isotropy algebras real forms table isotropy algebras real forms cyril lacoste table isotropy algebras real forms note symmetric spaces given tables irreducible instance results precise refer list irreducible symmetric spaces ones refer facts symmetric spaces local symmetric spaces riemannian symmetric spaces stress symmetric spaces associated isotropy subalgebras tables discuss features riemannian symmetric spaces form compact symmetric spaces riemannian spaces nonpositive curvature called symmetric spaces type form isom maximal compact subgroup euclidean factors semisimple linear centerless recall lie group maximal compact subgroups conjugated semisimple generally reductive maximal compact subgroup symmetric space called riemannian symmetric space associated follows identify smooth manifold set maximal compact subgroups remark isogenous lie groups isometric associated riemannian symmetric spaces particular semisimple linear lie group associated riemannian symmetric space associated identity component thus assume connected centerless case image maximal compact subgroup automorphism maximal compact subgroup action aut aut isometries finally group isometries symmetric space type without euclidean factors aut lie algebra important part work compute dimensions fixed point sets dimension rigidity lattices semisimple lie groups isom aut assuming connected centerless fixed point set riemannian symmetric space associated recall denote letter automorphism denote fixed point set inner automorphism case finite order 
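The outer automorphism groups tabulated above were flattened by the extraction, so the standard fact behind the table is recorded here for the complex simple Lie algebras only (the real-form entries of the original table are not reconstructed): the outer automorphism group is the symmetry group of the Dynkin diagram, namely
\[
\operatorname{Out}(\mathfrak a_n)\cong\mathbb Z/2\ (n\ge 2),\qquad
\operatorname{Out}(\mathfrak d_4)\cong S_3,\qquad
\operatorname{Out}(\mathfrak d_n)\cong\mathbb Z/2\ (n\ge 5),\qquad
\operatorname{Out}(\mathfrak e_6)\cong\mathbb Z/2,
\]
with trivial outer automorphism group in the remaining cases (a_1, b_n, c_n, e_7, e_8, f_4, g_2). When a complex algebra is regarded as a real Lie algebra, complex conjugation contributes an extra factor of order two, as noted in the surrounding text.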
conjugated maximal compact subgroup fixed point set centralizer gad maximal compact subgroup centralizer identify write dim dim dim refer facts riemannian symmetric spaces lattices arithmetic groups discrete subgroup lie group said lattice quotient finite haar measure said uniform cocompact quotient compact otherwise borel density theorem see cor says group real points connected semisimple algebraic group defined lattice projects densely maximal compact factor instance connected semisimple algebraic group defined group lattice thus zariskidense group paradigm arithmetic group defined let semisimple lie group identity component lattice lattice said arithmetic connected algebraic group defined compact normal subgroups lie group isomorphism commensurable images recall two subgroups commensurable intersection finite index subgroups say lattice irreducible dense every closed normal subgroup margulis arithmeticity theorem see thm tells way irreducible lattices arithmetic theorem margulis arithmeticity theorem let group real points semisimple algebraic group defined irreducible lattice isogenous compact group arithmetic cyril lacoste observe real rank arithmeticity theorem applies every irreducible lattice group real rank least definition arithmeticity simplified cases connected centerfree compact factors compact subgroup definition must trivial moreover nonuniform irreducible compact subgroup needed either see cor assumptions also assume algebraic group centerfree case commensurator hypotheses non irreducible almost product irreducible lattices fact see prop direct decomposition commensurable irreducible lattice rational rank arithmetic group denoted rkq definition algebraic group definition arithmeticity rkq rkr note rkq cocompact see thm refer facts lattices arithmetic groups virtual cohomological dimension proper geometric dimension recall virtual cohomological dimension virtually discrete subgroup cohomological dimension subgroup finite index vcd max certain cocompact model compute virtual cohomological dimension vcd max hnc hnc denotes compactly supported cohomology see cor proper geometric dimension smallest possible dimension model group real points semisimple algebraic group maximal compact subgroup associated riemannian symmetric space uniform lattice model dimension vcd vcd mostly interested lattices also rule case adjoint group gad real rank fact following see cor proposition let algebraic group defined lattice rkr vcd dimension rigidity lattices semisimple lie groups case higher real rank recall margulis arithmeticity theorem arithmetic long irreducible compact however borel serre constructed bordification called bordification cocompact model see using bordification borel serre proved following theorem links virtual cohomological dimension rational rank arithmetic lattice theorem let semisimple lie group maximal compact subgroup arithmetic lattice vcd dim rkq particular vcd dim rkr moving note often article consider groups isogeny philosophy behind normal finite subgroups change dimensions indeed lemma let infinite discrete group finite normal subgroup vcd vcd proof first equality model follows easily model dimension lower reciprocally model also model inequality second equality suffices recall vcd subgroup finite index case subgroup finite index isomorphic refer facts virtual cohomological dimension geometric dimension bredon cohomology bredon cohomological dimension algebraic counterpart proper geometric dimension recall defined properties let discrete group family subgroups orbit 
category category whose objects left coset spaces morphisms maps contravariant functor cyril lacoste category category denoted objects natural transformations morphisms one show abelian category construct projective resolutions bredon cohomology coefficients denoted definition cohomology associated cochain complexes homof projective resolution functor maps objects morphisms identity map model augmented cellular chain complexes fixed points sets form projective even free resolution thus hnf homof bredon cohomological dimension proper actions denoted defined sup hnf said invariant viewed algebraic counterpart indeed meintrup proved following theorem theorem discrete group explain strategy prove vcd beginning material definitions recall group real points semisimple algebraic group lattice model note also finite subgroup dim dim denote family finite subgroups containing properly kernel xsing subspace consisting points whose stabilizer stricly larger kernel xsing also every fixed point set form finite order general computing easy task however admits cocompact model version formula bredon cohomological dimension fact get max hnc xsing dimension rigidity lattices semisimple lie groups fixed point set xsing subcomk plex consisting cells fixed finite subgroup strictly contains using caracterisations vcd one show see prop proposition let group real points semisimple algebraic group real rank least two lattice maximal compact subgroup associated riemannian symmetric space dim vcd every xsing surjective hvcd homomorphism hvcd vcd note authors assume connected hypothesis needed bordification still model connected see dim dim immediately following lemma corollary previous proposition see cor lemma notations dim vcd finite order non central vcd lemma key argument prove main theorem however case cases need following result see cor lemma notations suppose dim vcd every finite order element dim vcd distinct finite set finite order elements dim vcd cocompact lattice exists rational flat intersects exactly one point disjoint vcd refer facts bredon cohomology complex simple lie algebras section prove main theorem complex simple lie algebras proposition let complex simple lie algebra aut group automorphisms associated riemannian symmetric space assume rkr dim dim rkr cyril lacoste every finite order non central particular vcd every lattice recall adjoint group gad identity component aut agrees group inner automorphisms gad inn note gad centerfree dimension real rank associated riemannian symmetric spaces agree quotient aut group outer automorphism realized subgroup aut group aut product inn see recall also gad fixed point set inner automorphism use note aut involution induced involution gad still denoted group fixed points lie algebra fixed point set associated riemannian symmetric space particular dim compact proof proposition relies following lemmas lemma let semisimple lie algebra every element order let group automorphisms let associated riemannian symmetric space dim dim rkr dim dim rkr gad non trivial finite order involutions also dim dim rkr every finite order non central proof every element aut form gad aut know order hypothesis inner automorphism inclusion central gad dim dim dim rkr note central actually identity gad centerfree means words involution dim dim rkr assumption proved claim check first part use following dimension rigidity lattices semisimple lie groups lemma let group complex points semisimple connected algebraic group maximal compact subgroup suppose exists group isogenous subgroup 
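Restated with the symbols that the extraction removed, the strategy described above rests on two displayed facts. First, the Borel-Serre computation: for an arithmetic lattice Γ in a semisimple Lie group G with maximal compact subgroup K and symmetric space X = G/K,
\[
\operatorname{vcd}(\Gamma)\;=\;\dim X-\operatorname{rk}_{\mathbb Q}(\Gamma)\;\ge\;\dim X-\operatorname{rk}_{\mathbb R}(G).
\]
Second, the criterion used throughout the case analysis: if
\[
\dim X^{\varphi}\;\le\;\operatorname{vcd}(\Gamma)\qquad\text{for every finite-order, non-central }\varphi,
\]
then gd(Γ) = vcd(Γ). The precise hypotheses (higher real rank, and the cocompactness assumptions in the borderline cases) are as in the lemmas of this section and cannot all be read off the flattened text.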
irreducible symmetric space rkk rkh dim dim rkr satisfying dim dim rkr finite order non central dim dim rkr every finite order non central proof maximal compact subgroups conjugated conjugate since connected contained maximal torus since maximal tori conjugated conjugate one since subgroup rank maximal torus also maximal assume replacing conjugate element taking account group complex points reductive algebraic group complexification maximal compact subgroup dimension twice result get dim dim dim dim seen section similarly dim dim dim particular claim follows show dim dim rkr isogenous assume simplicity finite normal subgroup denote class finite dim dim particular central central suppose moment non central write dim dim dim assumption dim dim rkr finally dim dim rkr remains treat case central since symmetric space irreducible cyril lacoste follows identity component maximal connected lie group dim dim dim dim dim rkr assumption course proof proposition subgroups classical groups forms need following bounds dimension centralizers groups see section let finite order non central let finite order non central dim cso csu simplicity sometimes consider subgroup denote symmetric space convenience reader summarize following table informations need prove proposition exceptional lie algebras gad dim dim rkh rkk rkr gad table exceptional complex simple centerless lie groups gad maximal compact subgroups classical subgroups dimensions ranks ready launch proof proposition proof proposition second claim follows lemma vcd dim rkr every lattice theorem suffices prove first claim recall every complex simple lie algebra isomorphic either one classical algebras conditions ensure simplicity one exceptional ones prove proposition consider cases individually classical complex simple lie algebras let classical complex simple lie algebra aut lattice brief inspection table section obtain dimension rigidity lattices semisimple lie groups unless every outer automorphism order assume treating case later begin note get parts dim dim rkr gad non trivial finite order words first part condition lemma holds check second part make use classification local symmetric spaces instance assumption rank involution lie algebra isomorphic either lie algebra whose adjoint group compact one follows appear even associated riemannian symmetric space obtained taking quotient adjoint group maximal compact subgroup example case pso lie algebra dim maximal dim dim cases dim dim rkr get lemma first claim proposition holds cases similar leave details reader treat case lie algebra group complex outer automorphisms isomorphic symmetric group contains order element called triality see section interpretation triality terms octonions group real outer automorphisms isomorphic second factor corresponds complex conjugation consequently order outer automorphisms order compositions complex conjugation aut order composition inner automorphism order inclusions consider instead treat cases order order apply method classical simple lie algebras using classification local symmetric spaces remains treat case triality automorphism inverse case complex automorphism proceeding like proof lemma inner automorphism result follows non trivial belongs set complex automorphisms order result gray wolf see thm says equivalence relation conjugation inner automorphism cyril lacoste contains besides classes inner automorphisms four classes two others represented order automorphisms lie algebra fixed point set triality exceptional lie algebra fixed point set isomorphic lie 
algebra cases dim dim rkr proposition holds lie algebra consider simple exceptional lie algebra outer automorphism group order adjoint group gad connected algebraic group real rank complex dimension compact group maximal compact subgroup group contains subgroup isomorphic fixed involution extends conjugation giving split real form see section explicit description irreducible symmetric space dim rkr gad dim moreover finite order non central inequality dim rkr gad dim lemma first part lemma holds check second part list local symmetric spaces associated involution aut classification berger non compact cases isomorphic associated riemannian symmetric spaces psl psl cases dim dim rkr lemma proposition holds aut lie algebra proceed like previously simple lie algebra gad maximal compact subgroup rkr gad dim know exists subgroup isogenous rkh rkk irreducible symmetric space addition dim rkr gad dim dimension rigidity lattices semisimple lie groups finite order non central dim rkr gad dim inequality lemma first part lemma holds classification local symmetric spaces ones study isomorphic cases dim dim rkr lemma proposition holds aut lie algebra algebra outer automorphism group product two groups order gad maximal compact subgroup rkr gad dim know exists isogenous rkh rkk irreducible symmetric space following dim rkr gad dim finite order non central inequality dim rkr gad dim lemma first part lemma holds classification local symmetric spaces ones study isomorphic cases dim dim rkr lemma proposition holds aut lie algebra consider simple lie algebra order outer automorphism group adjoint group gad whose compact maximal subgroup rkr gad dim know exists isogenous rkh rkk irreducible symmetric space inequality dim rkr gad dim finite order non central inequality dim rkr gad dim lemma first part lemma holds cyril lacoste classification local symmetric spaces ones study isomorphic cases dim dim rkr lemma proposition holds aut lie algebra last exceptional lie algebra outer automorphism group order adjoint group gad maximal compact subgroup rkr gad dim know exists isogenous rkh rkk irreducible symmetric space also inequality dim rkr gad dim finite order non central inequality dim rkr gad dim lemma first part lemma holds classification local symmetric spaces ones study isomorphic cases dim dim rkr lemma proposition holds aut concluded proof real simple lie algebras section extend previous proposition real simple lie algebras real forms complex ones studied previous section ideas proof similar complex case although face additional difficulties maybe reader skip section first reading proposition let real simple lie algebra aut group automorphisms associated riemannian symmetric space vcd every lattice moreover dim dim rkr every finite order non central dimension rigidity lattices semisimple lie groups use lemma case exceptional real simple lie algebras use lemma establish inequalities form dim dim rkr adjoint group gad difficulty dimension gad anymore twice maximal compact subgroup extent bypass problem using following lemma lemma let connected lie group group real points semisimple algebraic group defined maximal compact subgroup suppose exists subgroup irreducible symmetric space whose compact maximal subgroup rank let associated riemannian symmetric spaces dim dim rkr dim dim rkr every finite order non central also dim dim rkr every finite order non central proof proof lemma conjugate maximal torus central irreducible identity component maximal connected lie subgroup follows thus riemannian symmetric spaces 
result follows assumption dim dim rkr suppose central dim dim dim dim result follows dim dim rkr assumption cases interest group classical group use following inequalities majorate dim see sections let finite order non central associated symmetric space dim cyril lacoste let finite order non central associated symmetric space dim let finite order non central associated symmetric space dim tables list exceptional real simple lie groups subgroups use informations need know proof proposition note simplicity compact maximal subgroups given isogeny gad table real exceptional simple centerless lie groups gad certain classical subgroups gad respective maximal compact subgroups gad dim dim rkk rkk rkr gad table notations table dimensions riemannian symmetric spaces associated gad together ranks gad dimension rigidity lattices semisimple lie groups ready prove proposition proof proposition recall first claim holds adjoint group real rank proposition isomorphic second claim also true aut finite order non central strict submanifold dim dim suppose rkr inspection table outer automorphisms section see every outer automorphism order except even proof proposition analysis classical real simple lie algebras start dealing classical lie algebras note rule use lemma want establish dim dim rkr dim dim rkr every adjoint group gad finite order non central every involution aut first condition holds computations sections using classification local symmetric spaces check second condition complex case instance either compact isomorphic one following last case appearing even lie algebra dim maximal dim dim rkr hence lemma lemma proposition holds aut cases similar lie algebras remaining classical cases even isomorphic every outer automorphism order case order outer automorphisms isomorphic already noted argument lattices involved cases face cyril lacoste problem exists aut dim dim rkr next goal characterize automorphisms happens lemma aut dim dim rkr equality odd conjugated psl pso even conjugated outer automorphism corresponding conjugation abusing notations still denote fixed point set automorphism corresponding conjugation conjugated even inner automorphism proof begin case use strategy proof lemma automorphism composition inner automorphism outer automorphism order order composition inner automorphism order inclusion replace suffice consider outer automorphisms order order inner automorphism gad trivial computations sections dim dim dim rkr equality last inequality odd conjugated first inequality equality proved claim trivial trivial involution use classification local symmetric spaces instance associated isotropy algebra last two cases appearing even theses cases dim dim rkr equality corresponds automorphism conjugated inner automorphism odd outer automorphism conjugation even remains consider case case group outer automorphism isomorphic symmetric group dimension rigidity lattices semisimple lie groups elements order outer automorphism order apply method using classification local symmetric spaces see dim dim rkr equality outer automorphism corresponding conjugation matrix conjugated order inner treat case trivial order complexification order complex automorphism case already treated previous section know fixed point set isomorphic compact real form isomorphic compact cases dim dim rkr proved claim let assume conclude vcd using lemma first condition said lemma holds lemma check second condition take maximal distinct want establish dim vcd first remark maximality contained one form conjugated let say dim vcd 
strict submanifold result holds conjugated refer computations proofs lemma lemma proof third point lemma lemma note authors consider inner automorphisms case odd argument also works without modifications kind even must enlighted argument gave fails second condition lemma hold anymore case conclusion lemma apply dim dim rkr either conjugation pso conjugation psl two conjugated conjugation pso corresponds psl outer automorphism whose fixed point set isomorphic psp however proof lemma concerning lattices adapted aut aut fact cyril lacoste adapted aut lattice psl conjugated lattice commensurable psl see classification arithmetic groups classical groups section result proposition holds real classical simple lie algebras lie algebra consider simple exceptional lie algebra outer automorphism group order adjoint group gad group real points algebraic group real rank group contains maximal compact subgroup isogenous use lemma check first condition lemma group gad contains subgroup isogenous whose maximal compact subgroup see irreducible symmetric space furthermore dim dim rkr gad riemannian symmetric spaces associated respectively gad moreover finite order non central get inequality dim dim rkr gad lemma applies shows first condition lemma holds check second condition list local symmetric spaces associated involution aut classification berger non compact cases isomorphic cases dim dim rkr lemma proposition holds aut lie algebra consider simple exceptional lie algebra outer automorphism group order adjoint group gad group real points algebraic group real rank group contains maximal compact subgroup isogenous use lemma check first condition lemma dimension rigidity lattices semisimple lie groups group gad contains subgroup isogenous whose maximal compact subgroup see irreducible symmetric space furthermore dim dim rkr gad riemannian symmetric spaces associated respectively gad moreover finite order non central get inequality dim dim rkr gad lemma applies shows first condition lemma holds check second condition list local symmetric spaces associated involution aut classification berger non compact cases isomorphic cases dim dim rkr lemma proposition holds aut lie algebra consider simple exceptional lie algebra outer automorphism group order adjoint group gad group real points algebraic group real rank group contains maximal compact subgroup isogenous use lemma check first condition lemma group gad contains subgroup isogenous whose maximal compact subgroup see irreducible symmetric space furthermore dim dim rkr gad riemannian symmetric spaces associated respectively gad moreover finite order non central get inequality dim dim rkr gad lemma applies shows first condition lemma holds check second condition list local symmetric spaces associated involution aut classification cyril lacoste berger non compact cases isomorphic cases dim dim rkr lemma proposition holds aut lie algebra consider simple exceptional lie algebra outer automorphism group order adjoint group gad group real points algebraic group real rank group contains maximal compact subgroup isogenous use lemma check first condition lemma group gad contains subgroup isogenous whose maximal compact subgroup see irreducible symmetric space furthermore dim dim rkr gad riemannian symmetric spaces associated respectively gad moreover finite order non central get inequality dim dim rkr gad lemma applies shows first condition lemma holds check second condition list local symmetric spaces associated involution aut classification berger non compact cases 
isomorphic cases dim dim rkr lemma proposition holds aut lie algebra consider simple exceptional lie algebra outer automorphism group order adjoint group gad group real points algebraic group real rank group contains maximal compact subgroup isogenous use lemma check first condition lemma group gad contains subgroup isogenous whose maximal compact subgroup dimension rigidity lattices semisimple lie groups see irreducible symmetric space furthermore dim dim rkr gad riemannian symmetric spaces associated respectively gad moreover finite order non central get results dim dim rkr gad lemma applies shows first condition lemma holds check second condition list local symmetric spaces associated involution aut classification berger non compact cases isomorphic cases dim dim rkr lemma proposition holds aut lie algebra consider simple exceptional lie algebra outer automorphism group trivial aut equal adjoint group gad group real points algebraic group real rank thus check first condition lemma use lemma group gad contains maximal compact subgroup isogenous also contains subgroup isogenous whose maximal compact subgroup see irreducible symmetric space furthermore dim dim rkr gad riemannian symmetric spaces associated respectively gad moreover finite order non central get inequality dim dim rkr gad lemma lemma proposition holds aut lie algebra consider simple exceptional lie algebra outer automorphism group order adjoint group cyril lacoste gad group real points algebraic group real rank group contains maximal compact subgroup isogenous use lemma check first condition lemma group gad contains subgroup isogenous whose maximal compact subgroup see irreducible symmetric space furthermore dim dim rkr gad riemannian symmetric spaces associated respectively gad moreover finite order non central get inequality dim dim rkr gad lemma applies shows first condition lemma holds check second condition list local symmetric spaces associated involution aut classification berger non compact cases isomorphic cases dim dim rkr lemma proposition holds aut lie algebra consider simple exceptional lie algebra outer automorphism group trivial aut equal adjoint group gad group real points algebraic group real rank thus check first condition lemma use lemma group gad contains maximal compact subgroup isogenous also contains subgroup isogenous whose maximal compact subgroup see irreducible symmetric space furthermore dim dim rkr gad riemannian symmetric spaces associated respectively gad moreover finite order non central get inequality dim dim rkr gad dimension rigidity lattices semisimple lie groups lemma lemma proposition holds aut lie algebra consider simple exceptional lie algebra outer automorphism group trivial aut equal adjoint group gad group real points algebraic group real rank thus check first condition lemma use lemma group gad contains maximal compact subgroup isogenous also contains subgroup isogenous whose maximal compact subgroup see irreducible symmetric space furthermore dim dim rkr gad riemannian symmetric spaces associated respectively gad moreover finite order non central get inequality dim dim rkr gad lemma lemma proposition holds aut lie algebra consider simple exceptional lie algebra outer automorphism group trivial group aut equals adjoint group gad group real points algebraic group real rank thus check conditions lemma group contains maximal compact subgroup isomorphic also contains subgroup isogenous whose maximal compact subgroup see irreducible symmetric space furthermore dim dim rkr riemannian 
symmetric spaces associated respectively gad moreover finite order non central dim dim rkr cyril lacoste equality case last inequality happens conjugated matrix form cos sin sin cos assume form first block prove directly dim dim rkr first cso dimension study know element matrix corresponds recall group automorphisms non associative algebra split octonions dimension equiped quadratic form signature see section decompose direct sum vect quaternion algebra subgroup special orthogonal group preserves standard form signature fixes maximal compact subgroup corresponds stabilizer meaning elements automatically orthogonal isomorphic via isomorphism sends restriction consequently matrix consider corresponds matrix restriction element element entirely determined matrix indeed example deduce similarly find cos sin sin cos knowing completely described matrix corresponds remark cso dim dim dim cso finally dim dim rkr dimension rigidity lattices semisimple lie groups thus dim dim rkr every finite order non central lemma proposition holds lie algebra consider simple exceptional lie algebra outer automorphism group trivial group aut equals adjoint group gad group real points algebraic group real rank thus check conditions lemma group contains maximal compact subgroup isogenous also contains subgroup isogenous whose maximal compact subgroup see irreducible symmetric space furthermore dim dim rkr riemannian symmetric spaces associated respectively gad moreover finite order non central computations section dim dim rkr equality case last inequality happens conjugated matrix assuming form conjugation involutive automorphism gad quotient symmetric space know classification isogenous either cases inequality dim dim rkr holds fact isogenous thus dim dim rkr every finite order non central lemma proposition holds concludes proof cyril lacoste semisimple lie algebras prove section main theorem main theorem let semisimple lie algebra aut vcd every lattice recall semisimple isomorphic sum simple lie algebras adjoint group gad isomorphic product simple lie groups gad adjoint groups also assume gad compact factors indeed symmetric spaces change replace gad quotient compact factors automorphism composition permutation isomorphic factors diagonal automorphism form aut explain strategy used previous sections work point inequality dim dim rkr aut needed apply lemma hold even simplest cases fact gad dim dim rgr bypass problem improving lower bound vcd dim rkr used remember theorem vcd dim rkq long arithmetic want majorate rkq restrict study irreducible lattices recall context lattice irreducible dense every closed normal subgroup gad prove following result probably known experts proposition let semisimple lie algebra adjoint group rkq min rkr every irreducible arithmetic lattice aut proposition follow following theorem proved dimension rigidity lattices semisimple lie groups theorem let product noncompact connected simple lie groups following statements equivalent contains irreducible lattice isomorphic lie group qsimple algebraic group isotypic complexifications lie algebras isomorphic addition case contains cocompact non cocompact irreducible lattices recall algebraic group defined said qsimple contain connected normal subgroups defined prove proposition proof proposition remark rkq rkq moreover irreducible lattice gad irreducible lattice gad assume gad remember gad assumed none compact result trivial assume rkr gad arithmetic theorem exists lie group isomorphism gad theorem rkq rkq algebraic group isomorphic product 
isomorphic define centralizer product note canonical projection let maximal torus goal prove restriction finite kernel one hand ker normal subgroup defined zariski closure ker defined galois rationality criterion however non trivial normal subgroup ker finite may connected ker finite ker subgroup torus identity component torus seen group rational points finite ker finite image torus dimension see cor may projection defined projection defined rgq dim rgr rgr conclude proof main theorem cyril lacoste proof main theorem simple result follows propositions assume also assume adjoint group gad form gad simple adjoint group begin case irreducible rkq min rkr rkr arithmetic theorem remember gad also irreducible arithmetic lattice gad assume gad semisimple theorem gad trivial center assume centerfree case gad want use lemma let finite order non central form permutation isomorphic factors aut assume trivial identify aut resp aut corresponding automorphism gad resp key point remark automorphism trivial fact gad identify inner automorphism gad lies also recall seen proof proposition projections injective trivial gad leads trivial gad zariskidense gad trivial finally non trivial automorphism proposition also rkq min rkr note symmetric space associated associated propositions dim dim dim rkr dim rkr dim rkq assumed theorem dim vcd lemma gives result trivial fixed point set even smaller indeed assume simplicity isomorphic dimension rigidity lattices semisimple lie groups aut form aut aut fixed point set symmetric space associated fact elements fixed form fixed point dim dim dim rkr dim rkq vcd dim dim dim rkr argument works higher number summands decomposing permutation disjoint cycles finally reducible exists decomposition projections lattices see proof prop follows induction contained product irreducible lattices factors treat case irreducible lattices finite index also finite index vcd vcd note symmetric spaces associated theorem vcd vcd dim dim rkq rkq vcd vcd finally models model irreducible vcd vcd vcd inequality always true concludes proof main theorem end proof corollaries proof corollary case real rank treated proposition higher real rank know main theorem exists model dimension vcd also know bordification cocompact model using construction proof corollary one cocompact model dimension vcd models homotopy equivalent symmetric space also model conclude homotopy equivalent cocompact model dimension vcd cyril lacoste proof corollary prove aut finite index vcd common subgroup end prove essentially also lattice aut lattice aut assume first note also assume normal finite index subgroup acts conjugation mostow rigidity theorem see example thm automorphisms extended automorphisms gad morphism aut gad kernel morphism intersect since centerfree lattice thus gad finite index finite isomorphic lattice aut gad result follows main theorem lemma note mostow rigidity theorem apply group psl whose associated symmetric space hyperbolic plane case lattice either virtually free group virtually surface group first case group also virtually free exists model tree see vcd second case acts convergence group also fuchsian group see isomorphic cocompact lattice psl finally vcd references aramayona degrijse souto geometric dimension lattices classical simple lie groups aramayona proper geometric dimension mapping class group algebraic geometric toplogy ash classifying spaces arithmetic subgroups general linear groups duke math berger les espaces non compacts annales scientifiques ens borel introduction aux groupes hermann 
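The rank bound proved in this section is worth displaying once with its symbols in place: for g semisimple with adjoint group G_ad isomorphic to a product G_1 × ... × G_r of simple groups (compact factors having been removed, as arranged above) and Γ ≤ Aut(g) an irreducible arithmetic lattice, the proposition states
\[
\operatorname{rk}_{\mathbb Q}(\Gamma)\;\le\;\min_{1\le i\le r}\operatorname{rk}_{\mathbb R}(G_i).
\]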
borel linear algebraic groups springer borel serre corners arithmetic groups brady leary nucinkis algebraic geometric dimensions group torsion london math soc brown cohomology groups graduate texts mathematics springer degrijse dimension invariants groups admitting cocompact model proper actions journal reine und angewandte mathematik crelle journal degrijse petrosyan geometric dimension groups family virtually cyclic subgroups topol degrijse souto dimension invariants outer automorphism groups dimension rigidity lattices semisimple lie groups djokovic real form complex semisimple lie algebras aequationes math gabai convergence groups fuchsian groups annals mathematics gray wolf homogeneous spaces defined lie groups automorphisms diff geom component group automorphism group simple lie algebra splitting corresponding short exact sequence journal lie theory classification structure theory lie algebras smooth section logos verlag berlin gmbh helgason differential geometry lie groups symmetric spaces american mathematical society integral novikov conjectures arithmetic groups containing torsion elements communications analysis geometry volume number johnson existence irreducible discrete subgroups isotypic lie groups classical type karass pietrowski solitar finite infinite cyclic extensions free group austral math soc knapp lie groups beyond introduction progress mathematics leary nucinkis groups type invent math leary petrosyan dimensions groups cocompact classifying spaces proper actions survey classifying spaces families subgroups infinite groups geometric combinatorial dynamical aspects springer meintrup universal space group actions compact isotropy proc conference geometry topology aarhus margulis discrete subgroups semisimple lie groups euler classes bredon cohomology groups restricted families finite torsion math witte morris introduction arithmetic groups arxiv onishchik vinberg lie groups algebraic groups springerverlag pettet spine spine enseignement mathematique pettet souto minimality retract geometry topology vogtmann automorphisms free groups outer space geometriae dedicata yokota exceptional lie groups irmar rennes address | 4 |
oct fractal sequences hilbert functions giuseppe favacchio abstract introduce fractal expansions sequences integers associated number used characterize generalize introducing numerical functions called fractal functions classify hilbert functions bigraded algebras using fractal functions introduction commutative algebra fields pure mathematics often happens easy numerical conditions describe deeper algebraic results significant example let standard graded polynomial ring let homogeneous ideal quotient ring called standard graded hilbert function defined dimk dimk dimk famous theorem due macaulay pointed stanley characterizes numerical functions hilbert functions standard graded functions homogeneous ideal introduce fundamental result need preparatory material let integers uniquely write expression called expansion integer expansion set hhii use convention example since expansion definition sequence integers called hii hii said maximal growth degree degree mathematics subject classification key words phrases hilbert function multigraded albegra numerical function version october giuseppe favacchio ready enunciate macaulay theorem characterizes hilbert function standard graded bounding growth degree next proof theorem details also discussed chapter represent sequence integers theorem macaulay let sequence integers following equivalent hilbert function standard graded therefore interesting find extension theorem case multigraded hilbert functions arise many contexts properties related hilbert function multigraded algebras currently studied see instance several examples generalization macaulay theorem rings open problem first answer given author hilbert functions bigraded algebra classified goal work generalize macaulay theorem bigraded algebras order reach purpose first section introduce list finite sequences called fractal expansion define coherent truncation vectors show objects strictly related indeed section show also characterize hilbert function standard graded furthermore show sequences used compute betti numbers lex ideal section extend results classify hilbert function bigraded algebras theorem computer program cocoa indispensable computations expansion fractal sequence section describe new approach classify hilbert functions standard graded algebras introduce sequence tuplas called coherent fractal growth study properties main result section theorem prove sequences behavior roughly speaking numerical sequence called fractal delete first occurrence number remains identical original property thus implies repeat process indefinitely contains infinitely many copy something like fractal behavior see formal definition properties instance one show sequence fractal indeed removing first occurrence number get sequence starting one introduce notation given positive integer denote tupla length consisting positive integers less equal written increasing order given finite infinite sequence positive integers construct new sequence named expansion denoted set symbol denotes associative operation concatenation two vectors construction recursively applied denote set positive integer also denote instance fractal sequences hilbert functions lemma let sequence positive integers proof statement true assume definition inductive hypothesis corollary let positive integer proof statement follows lemma since remark sequence fractal sequence given sequence denote entry finite denotes number entries sum use convention values infinite sequences positive integers instance throughout paper use convention finite 
sequence notation implies remark note finite sequence positive integers definition easily implies equality given positive integer define fractal expansion set element finite sequence positive integers following lemma compute number entries lemma let positive integer proof definition lemma therefore moreover remark next lemma introduces way decompose number sum binomial coefficients slight different macaulay decomposition use convention whenever lemma let positive integer written uniquely form proof order prove existence choose maximal kdd inductive hypothesis kdd kii since kdd moreover since follows giuseppe favacchio hence uniqueness follows induction trivial assume let kdd decomposition maximal integer kdd otherwise get kdd remark decomposition lemma different macaulay decomposition since always required moreover thus binomial coefficient could zero instance first binomial coefficients sum equal zero definition refer equation decomposition denote call numbers coefficients increasing sequence positive integers indeed construction moreover since next result explains name decomposition show entry need convention empty sequence theorem proof thus assume let kdd decomposition lemma since fractal decomposition kdd inductive hypothesis given lex iff following lemma crucial intent prove coefficients good behavior respect lex order lemma lex iff proof assertion trivial let let two integers fractal decomposition lex index hence easily otherwise vice versa let claim indeed get add contradicting done otherwise statement follows induction fractal sequences hilbert functions given two sequences say truncation next definition introduces main tool paper coherent fractal growths suitable truncations elements fractal expansion definition say coherent fractal growth truncation instance coherent fractal growth indeed one check elements truncation expansion previous one hand instance coherent fractal growth indeed truncation remark note coherent fractal growth consists first elements moreover length elements coherent fractal growth bounded indeed remark next part section prove bound remark equivalent binomial expansion order coherent fractal growth need following lemma uses equality lemma let coefficients coefficients ahdi ahdi proof decomposition definition get macaulay decomposition removing binomials equal since implies ahdi since done consider case write following decomposition thus representation macaulay decomposition remove binomials equal ahdi proof follows finite number steps iterating argument giuseppe favacchio following theorem main result section show length elements coherent fractal growth theorem let list truncations following equivalent coherent fractal growth proof order prove need show set take decomposition statement follows lemma assume truncation lemma since get denoted truncation therefore reiterating argument equation remark last sum lemma equal ahdi vice versa prove prove sequence truncation follows using argument indeed hypothesis bound remark know let check instance write sequence truncations length respectively get coherent fractal growth indeed definition sequence truncation previous one hand also check indeed coherent fractal growth need truncation length therefore maximal growth allowed fractal sequences hilbert functions fractal expansion homological invariants section introduced novel approach describe section show algebraic meaning coherent fractal growth directly relate sequences lex segment ideals homological invariants particular formula naturally applied case therefore 
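Two of the displays in this part of the paper lost their symbols in extraction, so they are restated here. First, the classical Macaulay machinery that the fractal decomposition above is compared with: for integers n, d ≥ 1 there is a unique d-th binomial (Macaulay) expansion
\[
n=\binom{n_d}{d}+\binom{n_{d-1}}{d-1}+\dots+\binom{n_j}{j},
\qquad n_d>n_{d-1}>\dots>n_j\ge j\ge 1,
\]
and one sets
\[
n^{\langle d\rangle}=\binom{n_d+1}{d+1}+\binom{n_{d-1}+1}{d}+\dots+\binom{n_j+1}{j+1},
\qquad 0^{\langle d\rangle}:=0 ;
\]
Macaulay's theorem, quoted in the introduction above, says that a sequence (h_0, h_1, h_2, ...) of non-negative integers is the Hilbert function of a standard graded algebra if and only if $h_0=1$ and $h_{d+1}\le h_d^{\langle d\rangle}$ for all $d\ge 1$. Second, a small computational sketch of the expansion operator of this section may make the self-similar behaviour concrete: writing [a] for the tuple (1, 2, ..., a) and E(c) for the concatenation [c_1] ⊕ [c_2] ⊕ ..., iterates of E can be computed as below. The function names and the starting sequence are illustrative, and the paper's exact indexing convention for the fractal expansion of an integer is not recoverable from the flattened text.

```python
from itertools import chain


def bracket(a):
    """[a] = (1, 2, ..., a), the increasing tuple of positive integers <= a."""
    return list(range(1, a + 1))


def expand(seq):
    """E(seq) = [seq_1] concatenated with [seq_2] concatenated with ..."""
    return list(chain.from_iterable(bracket(a) for a in seq))


def iterate_expansion(seq, d):
    """Apply the expansion operator E to seq d times."""
    for _ in range(d):
        seq = expand(seq)
    return seq


if __name__ == "__main__":
    s = bracket(3)                      # [1, 2, 3]
    print(iterate_expansion(s, 1))      # [1, 1, 2, 1, 2, 3]
    print(iterate_expansion(s, 2))      # [1, 1, 1, 2, 1, 1, 2, 1, 2, 3]
```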
fractal expansion used proposition compute betti numbers lex algebra let positive integers let coefficient see lemma definition associate monomial degree variables way xcd vice versa monomial xcd degree variables identifies remark immediate consequence lemma lex iff respect lex order induced therefore respect order greatest monomial degree set variables let standard graded polynomial ring set spanned monomials remark lex set monomials degree respect degree lexicographic order given coherent fractal growth set space spanned monomial theorem theorem following result holds proposition lex segment ideal given minimal free resolution lex segment ideal betti numbers computed formula see see also equation section theorem formula let lex segment ideal monomial minimal generator let denotes largest index divides let mkj number monomials mkj result written terms coherent fractal growth giuseppe favacchio proposition given coherent fractal growth wkj wkj number occurrence proof immediate consequence theorem theorem hilbert function bigraded algebra fractal functions let infinite field let polynomial ring indeterminates grading defined deg deg denotes set homogeneous elements degree moreover generated space monomials xinn ideal called bigraded ideal generated homogeneous elements respect grading bigraded algebra quotient bigraded ideal hilbert function bigraded algebra defined dimk dimk dimk set bihomogeneous elements degree work degree lexicographical order induced ordering recall definition bilex ideal introduced studied refer preliminaries results bilex ideals definition definition set monomials called bilex every monomial following conditions satisfied monomial ideal called bilex ideal generated space bilex set monomials every bilex ideals play crucial role study hilbert function bigraded algebras theorem theorem let bigraded ideal exists bilex ideal solved question characterize hilbert functions bigraded algebras introducing ferrers functions section generalize functions introducing fractal functions see definition prove theorem classify hilbert functions bigraded algebras need preparatory material denote set matrices size rows columns entries set given matrix mij denote named weight next definition introduces objects need section definition ferrers matrix size matrix mij fractal sequences hilbert functions mij set family ferrers matrices size next definition introduce expansions matrix definition let matrix size let vectors non negative integers denote element constructed repeating times row denote element constructed repeating times column remark expansions ferrers matrix set also ferres matrices take instance given define new matrix mij nij min mij nij say iff mij nij ready introduce fractal functions definition let numerical function say fractal function exists matrix mij mij matrices satisfy condition mij mij remark let numerical function satisfying condition definition one element matrix entries therefore fractal function remark definition fractal functions agrees definition indeed enough write partition matrix mij mhk mhk iff otherwise mhk case expansions given elements following denote set variables degree respectively next lemma useful purpose immediate consequence lemma giuseppe favacchio lemma xahdi shorten notation set order relate fractal functions hilbert functions bigraded algebras need introduce correspondence ferrers matrices monomials let mab denote set monomials proposition let mab mab bilex set monomials bidegree proof use lemma remark let element since mab get mub similar way 
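The classical Betti-number formula that this section relates to coherent fractal growths is the Eliahou-Kervaire formula for lexsegment (more generally, stable) monomial ideals, cited in this paper's references. With G(L) the minimal monomial generating set of the lexsegment ideal L and max(u) the largest index of a variable dividing a monomial u, it reads
\[
\beta_{i,j}(L)\;=\;\sum_{\substack{u\in G(L)\\ \deg u=j-i}}\binom{\max(u)-1}{i};
\]
the indexing convention (β_{i,j} versus β_{i,i+j}) should be matched against the original statement, which cannot be read off the flattened text.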
follows let bilex set monomials bidegree denote matrix mab mab iff otherwise mab proposition let bilex set monomials bidegree proof follows using lemma remark indeed say mab entry implies thus mub analogously see mav proposition proposition together imply following result corollary one one correspondence bilex sets monomials degree elements ready prove main result paper theorem let numerical function following equivalent fractal function bilex ideal proof let fractal function let space spanned elements mij claim ideal prove claim enough show see lemma xahii definition theorem entry ahii matrix xahii furthermore similar way follows let bilex ideal set mij iij claim mij satisfy condition definition theorem enough show mij entry mij also ahii entry ahii set mhj claim immediate consequence fact lex ideal fractal sequences hilbert functions following question motivated argument section question bigraded betti numbers bilex ideal computed matrices references aramova crona negri bigeneric initial ideals diagonal subalgebras bigraded hilbert functions journal pure applied algebra jul bruns herzog rings cambridge university press jun cocoateam cocoa system computations commutative algebra available http eliahou kervaire minimal resolutions monomial ideals algebra favacchio hilbert function bigraded algebras journal commutative algebra press guardo van tuyl arithmetically sets points springerbriefs mathematics springer herzog hibi monomial ideals springer kimberling fractal sequences interspersions ars combinatoria macaulay properties enumeration theory modular systems proceedings london mathematical society jan peeva stillman open problems syzygies hilbert functions journal commutative algebra stanley hilbert functions graded algebras advances mathematics apr dipartimento matematica informatica viale doria catania italy address favacchio url | 0 |
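The row and column expansions of Ferrers matrices defined above are easy to experiment with numerically. The sketch below assumes the standard conventions (a Ferrers matrix is a 0/1 matrix whose entries are non-increasing along rows and columns; the expansion repeats row i a_i times and column j b_j times); function and variable names are mine. It also checks the remark that an expansion of a Ferrers matrix is again a Ferrers matrix.

```python
import numpy as np

def is_ferrers(M):
    """0/1 matrix whose entries are non-increasing along rows and columns."""
    M = np.asarray(M)
    if not np.isin(M, [0, 1]).all():
        return False
    return bool((np.diff(M, axis=1) <= 0).all() and (np.diff(M, axis=0) <= 0).all())

def expand(M, row_reps, col_reps):
    """Repeat row i row_reps[i] times and column j col_reps[j] times."""
    M = np.repeat(np.asarray(M), row_reps, axis=0)
    return np.repeat(M, col_reps, axis=1)

M = np.array([[1, 1, 1],
              [1, 1, 0],
              [1, 0, 0]])
E = expand(M, [2, 1, 1], [1, 3, 1])
# the remark above: an expansion of a Ferrers matrix is again a Ferrers matrix
print(is_ferrers(M), is_ferrers(E), int(E.sum()))   # True True, weight = number of ones
```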
noname manuscript inserted editor duncan pavliotis apr using perturbed underdamped langevin dynamics efficiently sample probability distributions received date accepted date abstract paper introduce analyse langevin samplers consist perturbations standard underdamped langevin dynamics perturbed dynamics invariant measure unperturbed dynamics show appropriate choices perturbations lead samplers improved properties least terms reducing asymptotic variance present detailed analysis new langevin sampler gaussian target distributions theoretical results supported numerical experiments target measures introduction motivation sampling probability measures spaces problem appears frequently applications computational statistical mechanics bayesian statistics particular faced problem computing expectations respect probability measure wish evaluate integrals form typical many applications particularly molecular dynamics bayesian inference density convenience denoted symbol known normalization constant furthermore dimension underlying space quite often large enough render deterministic quadrature schemes computationally infeasible standard approach approximating integrals markov chain monte carlo mcmc techniques markov process constructed ergodic respect probability measure defining average duncan school mathematical physical sciences university sussex falmer brighton united kingdom imperial college london department mathematics south kensington campus london england pavliotis imperial college london department mathematics south kensington campus london england ergodic theorem guarantees almost sure convergence average infinitely many markov purposes paper diffusion processes constructed way ergodic respect target distribution natural question choose ergodic diffusion process naturally choice dictated requirement computational cost approximately calculating minimized standard example given overdamped langevin dynamics defined unique strong solution following stochastic differential equation sde dxt log potential associated smooth positive density appropriate assumptions measure process ergodic fact reversible respect target distribution another example underdamped langevin dynamics given defined extended space phase space following pair coupled sdes dqt dpt dwt mass friction tensors respectively assumed symmetric positive definite matrices ergodic respect measure density respect lebesgue measure given exp normalization constant note marginal respect thus functions almost surely notice also dynamics restricted longer markovian thus interpreted giving instantaneous memory system facilitating efficient exploration state space higher order markovian models based finite dimensional markovian approximation generalized langevin equation also used lot freedom choosing dynamics see discussion section desirable choose diffusion process way provide good estimation performance estimator quantified various manners ultimate goal course choose dynamics well numerical discretization way computational cost estimator minimized given tolerance minimization computational cost consists three steps bias correction variance reduction choice appropriate discretization scheme latter step see section sec appropriate conditions potential shown converge equilibrium exponentially fast relative entropy one performance objective would choose process rate convergence maximised conditions potential guarantee exponential convergence equilibrium relative entropy found powerful technique proving exponentially fast convergence 
equilibrium used paper villani theory hypocoercivity case target measure gaussian overdamped underdamped dynamics become generalized processes processes entire spectrum generator equivalently operator computed analytically particular explicit formula gap obtained detailed analysis convergence equilibrium relative entropy stochastic differential equations linear drift generalized processes carried addition speeding convergence equilibrium reducing bias estimator one naturally also interested reducing asymptotic variance appropriate conditions target measure observable estimator satisfies central limit theorem clt asymptotic variance estimator asymptotic variance characterises quickly fluctuations around contract consequently another natural objective choose process small possible well known asymptotic variance expressed terms solution appropriate poisson equation generator dynamics techniques theory partial differential equations used order study problem minimizing asymptotic variance approach taken see also also used paper measures performance also considered example performance estimator quantified terms rate functional ensemble measure see also study nonasymptotic behaviour mcmc techniques including case overdamped langevin dynamics similar analyses carried various modifications particular interest riemannian manifold mcmc see discussion section nonreversible langevin samplers particular example general framework introduced mention preconditioned overdamped langevin dynamics presented dxt dwt paper behaviour well asymptotic variance corresponding estimator studied applied equilibrium sampling molecular dynamics variant standard underdamped langevin dynamics thought form preconditioning used practitioners molecular dynamics nonreversible overdamped langevin dynamics dxt dwt vector field satisfies ergodic reversible respect target measure choices vector field asymptotic behaviour process considered gaussian diffusions rate convergence covariance equilibrium quantified terms choice work extended case target densities consequently nonlinear sdes form problem constructing optimal nonreversible perturbation terms spectral gap gaussian target densities studied see also optimal nonreversible perturbations respect miniziming asymptotic variance studied works shown theory without taking account computational cost discretization dynamics nonreversible langevin sampler always outperforms reversible one terms converging faster target distribution well terms lower asymptotic variance emphasized two optimality criteria maximizing spectral gap minimizing asymptotic variance lead different choices nonreversible drift goal paper extend analysis presented introducing following modification standard underdamped langevin dynamics dqt dpt dwt constant strictly positive definite matrices scalar constants constant matrices demonstrated section process defined ergodic respect gibbs measure defined objective investigate use dynamics computing ergodic averages form end study long time behaviour using hypocoercivity techniques prove process converges exponentially fast equilibrium perturbed underdamped langevin process introduces number parameters addition mass friction tensors must tuned ensure process efficient sampler gaussian target densities derive estimates spectral gap asymptotic variance valid certain parameter regimes moreover certain classes observables able identify choices parameters lead optimal performance terms asymptotic variance results valid gaussian target densities advocate particular parameter 
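To make the perturbed dynamics concrete before the analysis, here is a minimal simulation sketch for a two-dimensional Gaussian target, assuming unit mass, scalar friction and a single antisymmetric matrix J driving both perturbation terms; this is a simplified reading of the modified dynamics, not the paper's exact parameterization, and a plain Euler–Maruyama step is used only for brevity (the splitting scheme discussed later in the paper is preferable in practice).

```python
import numpy as np

def grad_V(q, P):
    """Gradient of the Gaussian potential V(q) = 0.5 * q^T P q."""
    return P @ q

def perturbed_langevin(P, J, mu, nu, gamma, h, n_steps, rng):
    """Euler-Maruyama discretisation of
         dq = (p - mu * J * grad V(q)) dt
         dp = (-grad V(q) - nu * J * p - gamma * p) dt + sqrt(2 * gamma) dW,
    whose continuous-time flow leaves exp(-V(q) - |p|^2 / 2) invariant when J = -J^T."""
    d = P.shape[0]
    q, p = np.zeros(d), np.zeros(d)
    traj = np.empty((n_steps, d))
    for n in range(n_steps):
        g = grad_V(q, P)
        q = q + h * (p - mu * J @ g)
        p = p + h * (-g - nu * J @ p - gamma * p) \
              + np.sqrt(2.0 * gamma * h) * rng.standard_normal(d)
        traj[n] = q
    return traj

rng = np.random.default_rng(0)
P = np.eye(2)                                # precision of the Gaussian target
J = np.array([[0.0, 1.0], [-1.0, 0.0]])      # antisymmetric perturbation matrix
traj = perturbed_langevin(P, J, mu=1.0, nu=1.0, gamma=1.0,
                          h=1e-2, n_steps=200_000, rng=rng)
print("time average of q_1^2:", np.mean(traj[:, 0] ** 2))   # close to 1 up to O(h) bias
```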
choices also complex target densities demonstrate efficacy perform number numerical experiments complex multimodal distributions particular use langevin sampler order study problem diffusion bridge sampling rest paper organized follows section present background material langevin dynamics construct general classes langevin samplers introduce criteria assessing performance samplers section study qualitative properties perturbed underdamped langevin dynamics including exponentially fast convergence equilibrium overdamped limit section study detail performance langevin sampler case gaussian target distributions section introduce numerical scheme simulating perturbed dynamics present numerical experiments implementation proposed samplers problem diffusion bridge sampling section reserved conclusions suggestions work finally appendices contain proofs main results presented paper several technical results construction general langevin samplers background preliminaries section consider estimators form diffusion process given solution following sde dxt dwt drift coefficient diffusion coefficient smooth components standard brownian motion associated infinitesimal generator formally given denotes hessian function denotes frobenius inner product general nonnegative definite could possibly degenerate particular infinitesimal generator need uniformly elliptic ensure corresponding semigroup exhibits sufficient smoothing behaviour shall require process hypoelliptic sense condition holds irreducibility process immediate consequence existence strictly positive invariant distribution see suppose nonexplosive follows hypoellipticity assumption process possesses smooth transition density defined theorem associated strongly continuous markov semigroup defined suppose invariant respect target distribution bounded continuous functions extended positivity preserving contraction semigroup strongly continuous moreover infinitesimal generator corresponding given extension also denoted due hypoellipticity probability measure smooth positive density respect lebesgue measure slightly abusing notation denote density also let hilbert space integrable functions equipped inner product norm also make use sobolev space weak derivatives equipped norm general characterisation ergodic diffusions natural question conditions coefficients required ensure invariant respect distribution following result provides necessary sufficient condition diffusion process invariant respect given target distribution theorem consider diffusion process defined unique solution sde drift diffusion coefficient invariant respect log continuously differentiable vector field satisfying additionally exists matrix function case infinitesimal generator written proof result found similar versions characterisation found see also remark holds hypoelliptic follows immediately ergodic unique invariant distribution generally consider diffusions extended phase space dzt dwt standard brownian motion markov process generator consider dynamics ergodic respect various choices dynamics invariant indeed ergodic respect target distribution choosing immediately recover overdamped langevin dynamics choosing holds gives rise nonreversible overdamped equation defined satisfies conditions theorem ergodic respect particular choosing constant matrix obtain dxt dwt studied previous works given target density consider augmented target density given choosing positive definite symmetric matrices conditions theorem satisfied target density resulting dynamics determined underdamped 
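The characterization theorem quoted in the preceding passage survives only in fragments here. One standard way to state such a condition, in the spirit of the "complete recipe" of Ma, Chen and Fox cited in the references, is the following; the symbols H = −log π, D and Q are generic placeholders rather than the paper's notation.

```latex
% Sufficient form of the drift for  dz_t = b(z_t) dt + sqrt(2 D(z_t)) dW_t
% to leave  pi \propto e^{-H}  invariant: D symmetric nonnegative, Q antisymmetric.
\[
  b(z) \;=\; -\bigl(D(z) + Q(z)\bigr)\,\nabla H(z) \;+\; \Gamma(z),
  \qquad
  \Gamma_i(z) \;=\; \sum_{j} \partial_{z_j}\bigl(D_{ij}(z) + Q_{ij}(z)\bigr).
\]
```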
langevin equation straightforward verify generator hypoelliptic sec thus ergodic generally consider augmented target density choose scalar constants constant matrices choice recover perturbed langevin dynamics straightforward check satisfies invariance condition thus theorem guarantees invariant respect similar fashion one introduce augmented target density clearly define dum resulting process given dqt dpt dum dwtm wtm independent brownian motions process ergodic unique invariant distribution appropriate conditions converges exponentially fast equilibrium relative entropy equation markovian representation generalised langevin equation form dqt dpt stationary gaussian process autocorrelation function let exp positive density choosing obtain dynamics dxt dyt immediately ergodic respect comparison criteria fixed observable natural measure accuracy estimator mean square error mse defined denotes expectation conditioned process starting instructive introduce decomposition var measures bias estimator measures variance fluctuations around mean speed convergence equilibrium process control bias term variance make claim precise suppose semigroup associated decays exponentially fast exist constants kpt remark holds estimate equivalent spectral gap allowing constant essential purposes though order treat nonreversible degenerate diffusion processes theory hypocoercivity outlined following lemma characterises decay bias terms proof found appendix lemma let unique solution denotes derivative respect suppose process ergodic respect markov semigroup satisfies study behaviour variance involves deriving central limit theorem additive functional discussed reduce problem proving poisson equation complications approach arise fact generator need symmetric uniformly elliptic following result summarises conditions poisson equation also provides formula asymptotic variance proof found appendix lemma let unique solution smooth drift diffusion coefficients corresponding infinitesimal generator hypoelliptic syppose ergodic respect moreover decays exponentially fast exists unique mean zero solution poisson equation asymptotic variance defined moreover holds clearly observables differ constant asymptotic variance sequel hence restrict attention observables satisfying simplifying expressions corresponding subspace denoted exponential decay estimate satisfied lemma shows invertible express asymptoptic variance let also remark proof lemma follows inverse given note constants appearing exponential decay estimate also control speed convergence zero indeed straightforward show satisfied solution satisfies lemmas would suggest choosing coefficients optimize constants would effective means improving performance estimator especially since improvement performance would uniform entire class observables possible indeed case however observed maximising speed convergence equilibrium delicate task leading order term typically sufficient focus specifically asymptotic variance study parameters sde chosen minimise study undertaken processes form perturbation underdamped langevin dynamics primary objective work compare performances perturbed underdamped langevin dynamics unperturbed dynamics according criteria outlined section find suitable choices matrices improve performance sampler begin investigations establishing ergodicity exponentially fast return equilibrium studying overdamped limit latter turns nonreversible therefore principle superior usual overdamped limit calculation provides motivation study proposed dynamics bulk work 
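Since the asymptotic variance is the comparison criterion used throughout, it is worth recalling how it is estimated from a single trajectory in practice. The sketch below is the standard batch-means estimator for the sampled chain; it is generic rather than specific to this paper, and the continuous-time asymptotic variance is recovered approximately by multiplying by the integration step size.

```python
import numpy as np

def batch_means_avar(samples, n_batches=50):
    """Batch-means estimate of the asymptotic variance in the CLT for the sampled chain:
       sqrt(N) * (sample mean - pi(f))  ->  N(0, sigma^2)."""
    samples = np.asarray(samples, dtype=float)
    m = len(samples) // n_batches                    # batch length
    batches = samples[: m * n_batches].reshape(n_batches, m)
    return m * batches.mean(axis=1).var(ddof=1)      # m * variance of the batch means

# usage: feed it f(X_t) evaluated along a trajectory of the sampler, e.g.
# avar = batch_means_avar(traj[:, 0] ** 2)
```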
focus particular case target measure gaussian potential given symmetric positive definite precision matrix covariance matrix given case advocate following conditions choice parameters choices show large perturbation limit exists finite provide explicit expression see theorem expression derive algorithm finding optimal choices case quadratic observables see algorithm friction coefficient small certain mild nondegeneracy conditions prove adding small perturbation always decrease asymptotic variance observables form see theorem fact conjecture statement true arbitrary observables able prove dynamics used conjunction conditions proves especially effective observable antisymmetric invariant substitution significant antisymmetric part particular proposition show certain conditions spectrum antisymmetric observable holds numerical experiments analysis show departing significantly fact possibly decreases performance sampler stark contrast possible increase asymptotic variance perturbation reason seems practical use sampler reasonable estimate global covariance target distribution available case bayesian inverse problems diffusion bridge sampling target measure given respect gaussian prior demonstrate effectiveness approach applications taking prior gaussian covariance remark rem another modification suggested albeit simplifications dqt dpt dwt denoting antisymmetric matrix however change variables equations transform dqt since observable depends merely auxiliary estimator well associated convergence characteristics asymptotic variance speed convergence equilibrium invariant transformation therefore reduces underdamped langevin dynamics represent independent approach sampling suitable choices discussed section properties perturbed underdamped langevin dynamics section study properties perturbed underdamped dynamics first note generator given lham ltherm lpert decomposed perturbation lpert unperturbed operator split hamiltonian part lham thermostat part ltherm see lemma infinitesimal generator hypoelliptic proof see appendix immediate corollary result theorem perturbed underdamped langevin process ergodic unique invariant distribution given explained section exponential decay estimate crucial approach particular guarantees poisson equation therefore make following assumption potential required prove exponential decay assumption assume hessian bounded target measure satisfies poincare inequality exists constant holds sufficient conditions potential inequality holds criterion presented theorem assumption exist constants semigroup generated satisfies exponential decay proof see appendix remark proof uses machinery hypocoercivity developed however seems likely using framework assumption boundedness hessian substantially weakened overdamped limit section develop connection perturbed underdamped langevin dynamics nonreversible overdamped langevin dynamics analysis similar one presented section brief convenience section perform analysis torus assume consider following scaling dwt valid small momentum regime equivalently modifications obtained subsituting limit dynamics describes limit large friction rescaled time turns dynamics converges limiting sde dqt dwt following proposition makes statement precise proposition denote solution deterministic initial conditions qinit pinit solution initial condition qinit converges lim sup remark refined analysis possible get information rate convergence see limiting sde nonreversible due term also matrix general neither symmetric antisymmetric result together fact 
nonreversible perturbations overdamped langevin dynamics form improved performance properties motivates investigation dynamics remark limit described section respects invariant distribution sense limiting dynamics ergodic respect measure see check using notation instead refers generator associated operator indeed term vanishes antisymmetry therefore remains show matrix antisymmetric clearly first term symmetric furthermore turns equal symmetric part second term indeed invariant limiting dynamics sampling gaussian distribution section study detail performance langevin sampler gaussian target densities first considering case unit covariance particular study optimal choice parameters sampler exponential decay rate asymptotic variance extend results gaussian target densities arbitrary covariance matrices unit covariance small perturbations study dynamics given first consider simple case task sampling gaussian measure unit covariance assume perturbed way albeit posssibly different strengths using simplifications reduces linear system dqt dpt dynamics type write dxt denoting standard wiener process generator given consider quadratic observables form sym however worth recalling asymptotic variance depend also stress assumed independent extra degrees freedom merely auxiliary aim study associated asymptotic variance see equation particular dependence parameters dependence encoded function assuming fixed observable perturbation matrix section focus small perturbations behaviour function neighbourhood origin main theoretical tool poisson equation see proofs appendix anticipating forthcoming analysis let already state main result showing neighbourhood origin function favourable properties along diagonal note perturbation strengths first second line coincide theorem consider dynamics dqt dpt observable form least one conditions ker satisfied asymptotic unperturbed sampler local maximum independently long purely quadratic observables let start case following holds proposition function satisfies hess proof see appendix jkjk jkjk jkjk jkjk jkjk proposition shows unperturbed dynamics represents critical point independently choice general though hess positive negative eigenvalues particular implies unfortunate choice perturbations actually increase asymptotic variance dynamics contrast situation perturbed overdamped langevin dynamics nonreversible perturbation leads improvement asymptotic variance detailed furthermore nondiagonality hess hints fact interplay perturbations rather relative strengths crucial performance sampler consequently effect perturbations satisfactorily studied independently example assuming follows nonzero therefore case small perturbation increase asymptotic variance uniformly choices however turns possible construct improved sampler combining perturbations suitable way indeed function seen good properties along set compute hess jkjk jkjk jkjk jkjk jkjk last inequality follows jkjk inequalities proven appendix lemma last inequality strict consequently choosing perturbations magnitude assuring commute always leads smaller asymptotic variance independently choice state result following corrolary corollary consider dynamics dqt dpt quadratic observable asymptotic variance unperturbed sampler local maximum independently remark see section precisely example asymptotic variance constant function perturbation effect example let set corresponds small perturbation case get jkjk changes sign depending first term negative second positive whether perturbation improves performance sampler terms 
asymptotic variance therefore depends specifics observable perturbation case linear observables consider case following result proposition function satisfies hess proof see appendix let assume ker hence theorem shows small perturbation alone always results improvement asymptotic variance however combine perturbations effect depends sign negative different signs also sign big enough following section require end requirement satisfied summarizing results section observables form choosing equal perturbations sufficiently strong damping always leads improvement asymptotic variance conditions ker finally content theorem let illustrate results section plotting asymptotic variance function perturbation strength see figure making choices asymptotic variance computed according using appendix graphs confirm results summarized corollary concerning asymptotic variance neighbourhood unperturbed dynamics additionally give impression global behaviour larger values figures show asymptotic variance associated quadratic observable accordance corollary asymptotic variance local maximum zero perturbation case see figure increasing perturbation strength graph shows decays monotonically reaches limit limiting behaviour explored quadratic observable asymptotic variance asymptotic variance quadratic observable equal perturbations perturbation strength perturbation strength approximately equal perturbations linear observable quadratic observable asymptotic variance asymptotic variance perturbation strength perturbation strength equal perturbations sufficiently large friction opposing perturbations linear observable asymptotic variance perturbation strength equal perturbations small friction fig asymptotic variance linear quadratic observables depending relative perturbation friction strengths analytically section condition approximately satisfied figure numerical examples still exhibits decaying asymptotic variance neighbourhood critical point case however asymptotic variance diverges growing values perturbation perturbations opposed example possible certain observables unperturbed dynamics represents global minimum case observed figure figures observable considered damping sufficiently strong unperturbed dynamics local maximum asymptotic variance figure furthermore asymptotic variance approaches zero theoretical explanation see section graph figure shows assumption small dropped corollary even case though example shows decay asymptotic variance large values exponential decay rate let denote optimal exponential decay rate sup exists holds note positive theorem also define spectral bound generator inf proven semigroup considered section differentiable see proposition case see corollary known exponential decay rate spectral bound coincide whereas general holds section therefore analyse spectral properties generator particular leads intuition choosing equal perturbations crucial performance sampler see also proven spectrum given note depends drift matrix case spectrum computed explicitly lemma assume spectrum given proof compute use identity det det det det det det case arrows indicate movement spectrum perturbation strength increases case dynamics perturbed pdt arrows indicate movement eigenvalues increases fig effects perturbation spectra understood denote identity matrix appropriate dimension quantity zero together claim follows using formula figure show sketch spectrum case equal perturbations convenient choices course eigenvalue associated invariant measure since denotes operator arrows indicate movement 
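For these linear (Gaussian, unit-covariance) dynamics the quantities behind the figures above can be recomputed directly from the drift matrix: the exponential rate is controlled by the spectral abscissa of the drift, and for quadratic observables the Poisson equation reduces to a Sylvester equation (this is the quadratic-ansatz computation carried out in the appendix). The sketch below does both numerically; the parameter values, the antisymmetric matrix J and the observable q_1^2 are illustrative choices of mine.

```python
import numpy as np
from scipy.linalg import solve_sylvester, solve_continuous_lyapunov

def drift_matrix(d, J, mu, nu, gamma):
    """Drift of the perturbed dynamics for V(q) = |q|^2 / 2, unit mass, friction gamma:
       d(q, p) = B (q, p) dt + noise,  with  B = [[-mu*J, I], [-I, -nu*J - gamma*I]]."""
    I = np.eye(d)
    return np.block([[-mu * J, I],
                     [-I, -nu * J - gamma * I]])

def avar_quadratic(B, Q, D):
    """Asymptotic variance of f(x) = x^T D x under dX = B X dt + sqrt(2 Q) dW:
       solve B^T C + C B = -D (quadratic ansatz for the Poisson equation), then
       sigma^2 = 4 tr(C Sigma D Sigma) with  B Sigma + Sigma B^T = -2 Q."""
    C = solve_sylvester(B.T, B, -D)
    Sigma = solve_continuous_lyapunov(B, -2.0 * Q)
    return 4.0 * np.trace(C @ Sigma @ D @ Sigma)

d, gamma = 2, 1.0
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
Q = np.zeros((2 * d, 2 * d)); Q[d:, d:] = gamma * np.eye(d)   # noise acts on p only
D = np.zeros((2 * d, 2 * d)); D[0, 0] = 1.0                   # observable q_1^2

for mu in [0.0, 1.0, 5.0, 20.0]:
    B = drift_matrix(d, J, mu=mu, nu=mu, gamma=gamma)
    bound = max(np.linalg.eigvals(B).real)                    # spectral bound (negative)
    print(f"mu = nu = {mu:5.1f}:  spectral bound {bound:+.3f},"
          f"  asymptotic variance {avar_quadratic(B, Q, D):.3f}")
```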
eigenvalues perturbation increases accordance lemma clearly spectral bound affected perturbation note eigenvalues real axis stay invariant perturbation subspace associated turn crucial characterisation limiting asymptotic variance illustrate suboptimal properties perturbed dynamics perturbations equal plot spectrum drift matrix case dynamics perturbed term pdt see figure note full spectrum inferred spectrum consists degenerate eigenvalue increasing figure shows degenerate eigenvalue splits four eigenvalues two get closer imaginary axis increases leading smaller spectral bound therefore decrease speed convergence equilibrium figures give intuitive explanation perturbation strengths crucial unit covariance large perturbations previous subsection observed particular perturbation dqt dpt dwt perturbed langevin dynamics demonstrated improvement performance neighbourhood observable linear quadratic recall dynamics ergodic respect standard gaussian measure marginal respect following shall consider observables depend moreover assume without loss generality observable write assume canonical embedding infinitesimal generator given introduced notation lpert sequel adjoint operator denoted rest section make repeated use hermite polynomials invoking notation define spaces span induced scalar product gim space real hilbert space finite dimension dim following result theorem holds operators form quadratic drift diffusion matrices generator ergodic stochastic process see definition precise conditions ensure ergodicity generator sde given equations respectively following result provides orthogonal decomposition invariant subspaces operator theorem section following holds space decomposition mutually orthogonal subspaces invariant well semigroup spectrum following decomposition remark note ergodicity dynamics ker consists constant functions ker therefore decomposition ker first main result section expression asymptotic variance terms unperturbed operator perturbation proposition let particular associated asymptotic variance given remark proof preceding proposition show invertible prove proposition make use generator reversed perturbation momentum flip operator clearly properties auxiliary operators gathered following lemma lemma following holds generator symmetric respect perturbation skewadjoint operators commute perturbation satisfies commute following relation holds operators leave hermite spaces invariant remark claim lemma crucial approach rests heavily fact match proof lemma prove consider following decomposition lham ltherm partial integration straightforward see lham ltherm hltherm lham ltherm antisymmetric symmetric respectively furthermore immediately see lham ltherm ltherm ltherm note result holds general setting section infinitesimal generator claim follows noting flow vector field associated respect therefore generator strongly continuous unitary semigroup hence skewadjoint stone theorem prove use decomposition lham ltherm obtain lham ltherm first term gives second term gives since commutes terms clearly zero due antisymmetry symmetry hessian claim follows short calculation similar proof prove note fact commute follows property follows properties indeed required prove first notice form therefore leave spaces invariant theorem follows immediately also leaves spaces invariant fact leaves spaces invariant follows directly inspection proceed proof proposition proof proposition since potential quadratic assumption clearly holds thus lemma ensures invertible analogously particular asymptotic variance 
written due respresentation theorem inverses leave hermite spaces invariant prove claim proposition assumption includes case following calculations assume fixed combining statement lemma noting see restricted therefore following calculations justified third line used assumption fourth line properties equation since commute according lemma write restrictions using also since commute thus arrive formula since follows operator bounded therefore extend formula whole continuity using fact applying proposition analyse behaviour limit large perturbation strength end introduce orthogonal decomposition ker ker understood unbounded operator acting obtained smallest closed extension acting particular ker closed linear subspace let denote projection onto ker write stress dependence asymptotic variance perturbation strength following result shows large perturbations limiting asymptotic variance always smaller asymptotic variance unperturbed case furthermore limit given asymptotic variance projected observable unperturbed dynamics theorem let lim remark note fact limit exists finite nontrivial particular figures demonstrate often case condition satisfied remark projection onto ker understood terms figure indeed eigenvalues real axis highlighted diamonds affected perturbations let denote projection onto span eigenspaces eigenvalues limiting asymptotic variance given asymptotic variance associated unperturbed dynamics projection denote projection onto proof theorem note leave hermite spaces invariant restrictions spaces commute see lemma furthermore hermite spaces operators discrete lspectrum nonnegative adjoint exists orthogonal decomposition eigenspaces operator decomposition finer sense every subspace moreover eigenvalue associated subspace consequently formula written hfi let assume without loss generality ker particular clearly lim note ker ker due ker remains show see write note since consider observables depend ker since commutes follows leaves invariant therefore latter spaces orthogonal follows result follows theorem follows limit asymptotic variance decreased perturbation ker fact result also holds true observables ker affected perturbation lemma let ker proof ker follows immediately ker claim follows expression example recall case observables form sym section ker ker preceding lemma follows showing assumption theorem exclude nontrivial cases following result shows dynamics particularly effective antisymmetric observables least limit large perturbations proposition let satisfy assume ker furthermore assume eigenvalues rationally independent proof proposition claim would immediately follow ker according theorem seem easy prove directly instead make use hermite polynomials recall proof proposition invertible inverse leaves hermite spaces invariant consequently asymptotic variance observable written denotes orthogonal projection onto clear symmetric even antisymmetric odd therefore antisymmetric follows odd view spectrum written appropriate real constants depend odd indeed assume contrary expression zero follows rational independence clear sup denotes ball radius centered origin consequently spectral radius hence converge zero result follows remark idea preceding proof explained using figure remark since real eigenvalues correspond hermite polynomials even order antisymmetric observables orthogonal associated subspaces rational independence condition eigenvalues prevents cancellations would lead eigenvalues real axis following corollary gives version converse proposition provides intuition 
mechanics variance reduction achieved perturbation corollary let assume denotes ball centered radius proof according theorem implies write recall proof proposition leave hermite spaces invariant therefore ker particular implies turn shows ker using ker follows exists sequence taking subsequence necessary assume convergence pointwise everywhere sequence pointwise bounded function since antisymmetric gauss theorem yields denotes outward normal sphere quantity zero due orthogonality result follows lebesgue dominated convergence theorem optimal choices quadratic observables assume given rsym note constant term chosen objective choose way becomes small possible stress dependence choice introduce notation also denote orthogonal projection onto ker lemma zero variance limit linear observables assume lim proof according proposition show projection onto ker let thus prove ker second identity uses fact indeed since fredholm alternative exists define leading result follows lemma zero variance limit purely quadratic observables let consider decomposition traceless part trdk trdk corresponding decomposition observable following holds exists antisymmetric matrix algorithmic way see algorithm compute appropriate terms effected perturbation proof prove first claim according theorem sufficient show ker let consider function sym holds task finding antisymmetric matrix lim therefore accomplished constructing antisymmetric matrix exists symmetric matrix property given traceless matrix exists orthogonal matrix zero entries diagonal obtained algorithmic manner see example chapter section problem reader convenience summarised algorithm appendix assume thus matrix found choose real numbers set diag observe since symmetric antisymmetric short calculation shows obtain therefore thus define constructed way indeed satisfies second claim note ker since antisymmetry result follows lemma would like stress perturbation constructed previous lemma far unique due freedom choice proof however asymptotically optimal corollary setting lemma following holds min lim proof claim follows immediately since ker arbitrary antisymmetric shown therefore contribution trace part asymptotic variance reduced choice according lemma proof lemma constructive obtain following algorithm determining optimal perturbations quadratic observables algorithm given sym determine optimal antisymmetric perturbation follows set trdk find zero entries diagonal see appendix choose set otherwise set remark authors consider task finding optimal perturbations nonreversible overdamped langevin dynamics given gaussian case optimization problem turns equivalent one considered section indeed equation rephrased ker therefore algorithm generalization algorithm described section used without modifications find optimal perturbations overdamped langevin dynamics gaussians arbitrary covariance preconditioning section extend results preceding sections case target measure symmetric given gaussian arbitrary covariance rsym positive definite dynamics takes form dqt sqt dpt dwt key observation choices together transformation lead dynamics dqet pet qet dpet pet pet form obey condition note course antisymmetric clearly dynamics ergodic respect gaussian measure unit covariance following denoted connection asymptotic variances associated follows observable write qes therefore asymptotic variances satisfy denotes asymptotic variance process qet results previous sections generalise subject condition choices made formulate results general setting corollaries corollary consider 
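The algorithm quoted above for the optimal antisymmetric perturbation hinges on a linear-algebra step: rotate the traceless part D_0 of the observable matrix so that its diagonal vanishes (the appendix recalls the rotation procedure), then read off an antisymmetric J from the rotated entries using a diagonal matrix with distinct entries. The sketch below is one way to realize that step; the choice of distinct numbers and the normalization of J are mine, and the only property verified is the commutator identity D_0 = YJ − JY on which the construction rests.

```python
import numpy as np

def zero_diagonal_rotation(D0, tol=1e-12):
    """Return orthogonal U such that U.T @ D0 @ U has (numerically) zero diagonal.
    Assumes D0 symmetric with trace zero; Givens-type rotations as in the appendix."""
    d = D0.shape[0]
    B, U = D0.astype(float).copy(), np.eye(d)
    for i in range(d - 1):
        if abs(B[i, i]) < tol:
            continue
        # a later diagonal entry of opposite sign exists, since the trailing trace is 0
        j = next(k for k in range(i + 1, d) if B[i, i] * B[k, k] < 0)
        a, b, c = B[i, i], B[i, j], B[j, j]
        t = (b + np.sqrt(b * b - a * c)) / c        # tan(theta) zeroing the (i, i) entry
        th = np.arctan(t)
        G = np.eye(d)
        G[i, i] = G[j, j] = np.cos(th)
        G[i, j], G[j, i] = np.sin(th), -np.sin(th)
        B, U = G.T @ B @ G, U @ G
    return U, B

def antisymmetric_from_observable(D):
    """Build antisymmetric J and symmetric Y with D0 = Y @ J - J @ Y,
    where D0 is the traceless part of the symmetric observable matrix D."""
    d = D.shape[0]
    D0 = D - np.trace(D) / d * np.eye(d)
    U, B = zero_diagonal_rotation(D0)
    lam = np.arange(1.0, d + 1.0)                   # any distinct real numbers work
    denom = lam[:, None] - lam[None, :]
    np.fill_diagonal(denom, 1.0)                    # dummy value; diagonal reset below
    J_rot = B / denom
    np.fill_diagonal(J_rot, 0.0)
    return U @ J_rot @ U.T, U @ np.diag(lam) @ U.T, D0

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)); D = A + A.T
J, Y, D0 = antisymmetric_from_observable(D)
print(np.allclose(J, -J.T), np.allclose(Y @ J - J @ Y, D0))   # True True
```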
dynamics dqt dpt dwt assume let observable form ker sym least one conditions satisfied asymptotic variance local maximum unperturbed sampler proof note form last equality defined theorem claim follows least one ker satisfied first equivalent kjs sjk conditions easily seen equivalent since nondegenerate second condition equivalent equivalent nondegeneracy corollary assume setting previous corollary denote orthogonal projection onto ker holds lim proof theorem implies lim denotes transformed system transformed observable projection onto ker according sufficient show however follows directly fact linear transformation maps ker bijectively onto ker let also reformulate algorithm case gaussian arbitrary covariance algorithm given sym assuming nondegenerate determine optimal perturbations follows set zero entries diagonal see appendix find choose set set put finally obtain following optimality result lemma corollary corollary let assume lim min optimal choices obtained using algorithm remark since section analysed case proportional able drop restriction optimality result analysis completely arbitrary perturbations subject future work remark choices introduced make perturbations considered article lead samplers perform well terms reducing asymptotic variance however adjusting mass friction matrices according target covariance way popular way preconditioning dynamics see instance particular molecular dynamics present argument preconditioning indeed beneficial terms convergence rate dynamics let first assume diagonal diag diag diag chosen diagonally well decouples sdes following form dqt dpt dwt let write processes dxt dwt section rate exponential decay equal min short calculation shows eigenvalues given therefore rate exponential decay maximal case given naturally reasonable choose way exponential rate leading restriction choosing small result fast convergence equilibrium also make dynamics quite stiff requiring small timestep discretisation scheme choice therefore need strike balance two competing effects constraint implies coordinate transformation preceding argument also applies diagonal basis course always chosen way numerical experiments show possible increase rate convergence equilibrium even choosing nondiagonally respect although small margin clearer understanding topic investigation numerical experiments diffusion bridge sampling numerical scheme section introduce splitting scheme simulating perturbed underdamped langevin dynamics given equation unpertubed case side decomposed parts according dwt refers part dynamics whereas stand momentum position updates respectively one particular splitting scheme proven efficient baoab scheme see references therein string letters refers order different parts integrated namely exp note many different discretisation schemes aboba oabao etc viable analytical numerical evidence shown particularly good properties compute ergodic averages respect observables motivated introduce following perturbed scheme introducing additional integration steps parts exp refers fourth order integration ode time remark linear therefore included opart without much computational overhead clearly discretisation schemes possible well instance one could use symplectic integrator ode noting hamiltonian type however since hamiltonian separable general symplectic integrator would implcit moreover could merged since commutes paper content scheme numerical experiments remark aformentioned schemes lead error approximation since invariant measure preserved exactly numerical scheme 
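A minimal implementation of the splitting described above, for unit mass and scalar friction: the usual B-A-O-A-B sub-steps are kept, the linear momentum perturbation exp(−νhJ) is folded into the O step, and the position-perturbation ODE dq/dt = −μ J ∇V(q) is integrated with a classical Runge–Kutta step. The ordering of the extra sub-steps and the parameter names are a simplification of the scheme sketched in the text, not a verbatim transcription, and no Metropolis correction is included.

```python
import numpy as np
from scipy.linalg import expm

def rk4(q, h, f):
    """One classical Runge-Kutta step for dq/dt = f(q)."""
    k1 = f(q); k2 = f(q + 0.5 * h * k1); k3 = f(q + 0.5 * h * k2); k4 = f(q + h * k3)
    return q + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def perturbed_baoab(grad_V, q, p, J, mu, nu, gamma, h, n_steps, rng):
    q, p = np.array(q, dtype=float), np.array(p, dtype=float)
    d = len(q)
    c1 = np.exp(-gamma * h)
    c2 = np.sqrt(1.0 - c1 ** 2)
    O_pert = expm(-nu * h * J)               # linear momentum perturbation, merged into the O step
    traj = np.empty((n_steps, d))
    for n in range(n_steps):
        p -= 0.5 * h * grad_V(q)                               # B
        q += 0.5 * h * p                                       # A
        q = rk4(q, 0.5 * h, lambda x: -mu * J @ grad_V(x))     # position perturbation
        p = O_pert @ (c1 * p) + c2 * rng.standard_normal(d)    # O: friction + rotation
        q = rk4(q, 0.5 * h, lambda x: -mu * J @ grad_V(x))     # position perturbation
        q += 0.5 * h * p                                       # A
        p -= 0.5 * h * grad_V(q)                               # B
        traj[n] = q
    return traj

rng = np.random.default_rng(2)
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
traj = perturbed_baoab(lambda x: x, np.zeros(2), np.zeros(2), J,
                       mu=1.0, nu=1.0, gamma=1.0, h=0.05, n_steps=100_000, rng=rng)
print(np.mean(traj[:, 0] ** 2))    # ergodic average of q_1^2 for a standard Gaussian target
```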
practice baoabscheme therefore accompanied metropolis step leading unbiased estimate albeit inflated variance case every rejection momentum variable flipped order keep correct invariant measure note perturbed scheme metropolized similar way flipping matrices every rejection using appropriate integrator dynamics given implementations idea subject ongoing work diffusion bridge sampling numerically test analytical results apply dynamics sample measure path space associated diffusion bridge specifically consider sde dxs dws potential obeying adequate growth smoothness conditions see section precise statements law solution sde conditioned events probability measure poses challenging important sampling problem especially multimodal setting used test case sampling probability measures high dimensions see example detailed introduction including applications see rigorous theoretical treatment papers case shown law conditioned process given gaussian measure mean zero precision operator sobolev space equipped appropriate boundary conditions general case understood perturbation thereof measure absolutely continuous respect derivative exp make choice possible without loss generality explained remark leading dirichlet boundary conditions precision operator furthermore choose discretise ensuing according equidistant way stespize functions grid determined values recalling dirichlet boundary conditions discretise functional gradient given discretised version given following discretised target measure form following consider case potential given set test algorithm adjust parameters according recommended choice gaussian case take precision operator gaussian target consider linear observable quadratic observable first experiment adjust perturbation via also observable according algorithm dynamics integrated using splitting scheme introduced section stepsize time interval furthermore choose initial conditions introduce time take estimator compute variance estimator realisations compare results different choices friction coefficient perturbation strength numerical experiments show perturbed dynamics generally outperform unperturbed dynamics independently choice linear quadratic observables one notable exception behaviour linear observable small friction see figure asymptotic variance initially increases small perturbation strengths however contradict analytical results since small perturbation results section generally require sufficiently big example theorem remark condition necessary theoretical results section advisable choice practice least experiment since figures clearly indicate optimal friction around interestingly problem choosing suitable value friction coefficient coefficient becomes mitigated introduction perturbation performance unperturbed sampler depends quite sensitively asymptotic variance perturbed dynamics lot stable respect variations regime growing values experiments confirm results section asymptotic variance approaches limit smaller asymptotic variance unperturbed dynamics final remark report finding performance sampler linear observable qualitatively independent coice long adjusted according result alignment propostion predicts good properties sampler antisymmetric observables contrast judicious choice critical quadratic observables particular applying algorithm significantly improves performance perturbed sampler comparison choosing arbitrarily standard deviation standard deviation linear observable linear observable friction perturbation strength fig standard deviation linear 
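For completeness, here is one way to assemble the discretised bridge target used in the experiment above: the Dirichlet second-difference precision matrix of the Brownian-bridge reference measure plus the discretised path functional. The double-well potential and the Girsanov-type functional Ψ = ½(|V′|² − V″) below are standard choices for this test problem and stand in for the paper's exact expressions, which are not recoverable from the text.

```python
import numpy as np

def make_bridge_target(d, T):
    """Discretised diffusion-bridge target on d interior grid points of [0, T] with zero
    (Dirichlet) boundary values, for the double-well potential V(x) = (x^2 - 1)^2.
    Returns the prior precision matrix and the gradient of the negative log-density."""
    ds = T / (d + 1)
    # precision of the Brownian-bridge reference measure: second-difference matrix / ds
    P = (2.0 * np.eye(d) - np.eye(d, k=1) - np.eye(d, k=-1)) / ds

    Vp = lambda x: 4.0 * x * (x ** 2 - 1.0)      # V'
    Vpp = lambda x: 12.0 * x ** 2 - 4.0          # V''
    Vppp = lambda x: 24.0 * x                    # V'''

    def grad_neg_log(x):
        # gradient of  0.5 x^T P x + ds * sum_i Psi(x_i),  with  Psi = 0.5 * (V'^2 - V'')
        return P @ x + ds * (Vp(x) * Vpp(x) - 0.5 * Vppp(x))

    return P, grad_neg_log

P, grad_Phi = make_bridge_target(d=100, T=10.0)
print(grad_Phi(np.zeros(100)).shape)   # this gradient is what the sampler above would be fed
```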
observable function friction perturbation strength quadratic observable standard deviation standard deviation quadratic observable perturbation strength friction fig standard deviation quadratic observable function friction perturbation strength outlook future work new family langevin samplers introduced paper new sde samplers consist perturbations underdamped langevin dynamics known ergodic respect canonical measure auxiliary drift terms equations position momentum added way perturbed family dynamics ergodic respect canonical distribution new langevin samplers studied detail gaussian target distributions shown using tools spectral theory differential operators appropriate choice perturbations equations position momentum improve performance langvin sampler least terms reducing asymptotic variance performance perturbed langevin sampler target densities tested numerically problem diffusion bridge sampling work presented paper improved extended several directions first rigorous analysis new family langevin samplers target densities needed analytical tools developed used starting point furthermore study actual computational cost minimization appropriate choice numerical scheme perturbations position momentum would interest practitioners addition analysis proposed samplers facilitated using tools symplectic differential geometry finally combining new langevin samplers existing variance reduction techniques zero variance mcmc manifold mcmc lead sampling schemes interest practitioners particular molecular dynamics simulations topics currently investigation acknowledgments supported epsrc grant supported epsrc roth departmental scholarship partially supported epsrc grants part work reported paper done visiting institut henri trimester program stochastic dynamics equilibrium hospitality institute organizers program greatly acknowledged estimates bias variance proof lemma suppose satisfies let initial distribution slightly abusing notation denote law given denotes since assumed bounded immediately obtain required proof lemma given fixed moreover cauchy sequence converging since closed follows moreover kpt since assume smooth coefficients smooth hypoelliptic implies thus apply formula obtain dwt one check conditions theorem hold particular following central limit theorem follows dwt theorem generator form follows first suppose stationary process follows generally suppose therefore holds case similarly proofs section proof lemma first note written sum squares form denotes standard euclidean basis unique positive definite square root matrix relevant commutators turn full rank follows span span since span span span follows span assumptions theorem hold overdamped limit following technical lemma required proof proposition lemma assume conditions proposition every exists sup proof using variation constants write second line dws compute sup sup sup dws sup sup dws sup sup dws clearly first term right hand side bounded second term observe sup sup since therefore bounded basic matrix exponential estimate suitable see bounded sup term bounded well third term bounded inequality similar argument one used second term applies cross terms bounded previous ones using inequality elementary fact sup sup sup result follows proof proposition equations written integral form first line multiplied matrix combining equations yields applying lemma gives desired result since equation differs integral version term vanishes limit hypocoercivity objective section prove perturbed dynamics converges equilibrium exponentially fast 
associated semigroup satisfies estimate using theory hypocoercivity outlined see also exposition section provide brief review theory hypocoercivity let real separable hilbert space consider two unbounded operators domains respectively antisymmetric let dense vectorspace operations authorised theory hypocoercivity concerned equations form associated semigroup generated let also introduce notation ker choices turns flat generator given therefore equation equation associated dynamics many situations practical interest operator coercive certain directions state space therefore exponential return equilibrium follow general case instance noise acts therefore relaxation concluded priori however intuitively speaking noise gets transported equations hamiltonian part dynamics theory hypocoercivity makes precise conditions interactions encoded iterated commutators exponential return equilibrium proved state main abstract theorem need following definitions definition coercivity let unbounded operator domain kernel assume exists another hilbert space continuously densely embedded operator said definition operator said relatively bounded respect operators intersection domains contained exists constant ktn holds proceed main result theory theorem theorem assume exists possibly unbounded operators relatively bounded respect relatively bounded respect relatively bounded respect positive constants furthermore assume exists kpt subspace associated norm kck ker remark property called hypocoercivity conditions theorem hold also get regularization result semigroup see theorem theorem assume setting notation theorem exists constant following holds kck khk proof theorem pove claim verifying conditions theorem recall quick calculation shows indeed ltherm make choice calculate commutator let set holds note furthermore compute choose recall assumption theorem choices assumptions theorem fulfilled indeed assumption holds trivially since relevant commutators zero assumption follows fact clearly bounded relative verify assumption let start case necessary show bounded relatively obvious since appearing controlled pderivatives appearing similar argument shows bounded relatively assumption bounded note crucial preceding arguments assume matrices full rank assumption trivially satisfied since equal identity remains show straightforward see kernel consists constant functions therefore ker hence amounts functional inequality equivalent since transformation bijective since coercivity boils inequality inequality assumption concludes proof hypocoercive decay estimate clearly therefore follows exist abstract equivalent sobolev norm constants kpt true automatically since stands array ker consists constant functions let lift estimate exist constant khkh kck therefore theorem implies holds possibly different constant let assume kpt kpt last inequality follows applying gathering constants results kpt note although assumed estimate also holds although possibly different constant since kpt bounded asymptotic variance linear quadratic observables gaussian case begin deriving formula asymptotic variance observables form following calculations sym note constant term chosen much along lines section since hessian bounded target measure gaussian assumption satisfied exponential decay semigroup follows theorem according lemma asymptotic variance given solution poisson equation recall generator later convenience defined sequel solve analytically first introduce notation slight abuse notation given uniqueness constant solution poisson equation 
linearity quadratic polynomial write rsym notice chosen symmetrical since depend antisymmetric part plugging ansatz yields trp trp cii denotes trace momentum component comparing different powers leads conditions cat trp note satisfied eventually existence uniqueness solution calculations asymptotic variance given proof proposition according asymptotic variance satisfies matrix solves cat given use notation abbreviations let first determine solution equation leads following system equations note equations equivalent taking transpose plugging yields adding together leads solving obtain taking setting yields notice computations similar derivation simple substitution equation solved employ similar strategy determine taking equation setting inserting leads equation solved note since clearly kjk way follows proving taking second setting yields employing notation noticing using calculate make ansatz leading equations equivalent taking transpose plugging combing gives jkj jkjk jkjk gives first part proceed way determine analogously get jkj solving resulting linear matrix system similar results jkj leading jkjk compute cross term take mixed derivative set arrive using see jkj jkj ensuing linear matrix system yields solution jkj leading jkjk completes proof proof proof proposition function satisfies recall following formula blockwise inversion matrices using schur complement provided invertible using obtain taking derivatives setting using fact leads desired result lemma following holds let jkjk furthermore equality holds proof show note function unique global maximum result follows note symmetric nonnegative definite write denoting real eigenvalues follows equality expand jkjk implies advertised claim orthogonal transformation tracefree symmetric matrices matrix zeros diagonal given symmetric matrix sym seek find orthogonal matrix zeros diagonal crucial step algorithms addressed various places literature see instance chapter section convenience reader following summarize algorithm similar one since symmetric exists orthogonal matrix diag algorithm proceeds iteratively orthogonally transforming matrix one first diagonal entry vanishing first two diagonal entries vanishing etc steps left matrix zeros diagonal starting assume otherwise proceed since exists opposing signs apply rotation transform first diagonal entry zero specifically let sin sin cos arctan procedure applied second diagonal entry leading matrix iterating process obtain udt zeros diagonal required orthogonal transformation references arnold erb sharp entropy decay hypocoercive equations linear drift alrachid mones ortner remarks preconditioning molecular dynamics arxiv preprint bass diffusions elliptic operators springer science business media bennett mass tensor molecular dynamics journal computational physics bakry gentil ledoux analysis geometry markov diffusion operators volume springer science business media bhatia matrix analysis volume graduate texts mathematics new york beskos pinski stuart hybrid monte carlo hilbert spaces stochastic process beskos roberts stuart voss mcmc methods diffusion bridges stoch beskos stuart mcmc methods sampling function space iciam international congress industrial applied mathematics pages eur math ceriotti bussi parrinello langevin equation colored noise constanttemperature molecular dynamics simulations physical review letters cattiaux guillin central limit theorems additive functionals ergodic markov diffusions processes alea duncan lelievre pavliotis variance reduction using nonreversible 
langevin samplers journal statistical physics dolbeault mouhot schmeiser hypocoercivity linear kinetic equations conserving mass trans amer math ethier kurtz markov processes wiley series probability mathematical statistics probability mathematical statistics john wiley sons new york characterization convergence engel nagel semigroups linear evolution equations volume graduate texts mathematics new york contributions brendle campiti hahn metafune nickel pallara perazzoli rhandi romanelli schnaubelt girolami calderhead riemann manifold langevin hamiltonian monte carlo methods stat soc ser stat discussion reply authors gelman carlin stern dunson vehtari rubin bayesian data analysis texts statistical science series crc press boca raton third edition hwang sheu accelerating gaussian diffusions ann appl hwang sheu accelerating diffusions ann appl roger horn charles johnson matrix analysis cambridge university press cambridge second edition hwang normand variance reduction diffusions stochastic process hairer stuart voss analysis spdes arising path sampling nonlinear case ann appl hairer stuart voss sampling conditioned diffusions trends stochastic analysis volume london math soc lecture note pages cambridge univ press cambridge hairer stuart voss wiberg analysis spdes arising path sampling gaussian case commun math joulin ollivier curvature concentration error estimates markov chain monte carlo ann kazakia orthogonal transformation trace free symmetric matrix one zero diagonal elements internat engrg kliemann recurrence invariant measures degenerate diffusions annals probability pages komorowski landim olla fluctuations markov processes volume grundlehren der mathematischen wissenschaften fundamental principles mathematical sciences springer heidelberg time symmetry martingale approximation liu monte carlo strategies scientific computing springer science business media leimkuhler matthews molecular dynamics volume interdisciplinary applied mathematics springer cham deterministic stochastic numerical methods nier pavliotis optimal linear drift convergence equilibrium diffusion stat rousset stoltz free energy computations imperial college press london mathematical perspective stoltz partial differential equations stochastic methods molecular dynamics acta chen fox complete recipe stochastic gradient mcmc advances neural information processing systems pages metafune pallara priola spectrum operators spaces respect invariant measures funct markowich villani trend equilibrium equation interplay physics functional analysis mat contemp matthews weare leimkuhler ensemble preconditioning markov chain monte carlo simulation ottobre pavliotis asymptotic analysis generalized langevin equation nonlinearity ottobre pavliotis exponential return equilibrium hypoelliptic quadratic systems funct ottobre pavliotis remarks degenerate hypoelliptic operators math anal ottobre pillai pinski andrew stuart function space hmc algorithm second order langevin diffusion limit bernoulli pavliotis stochastic processes applications diffusion processes langevin equations volume springer pavliotis stuart white noise limits inertial particles random field multiscale model electronic pavliotis stuart analysis white noise limits stochastic systems two fast relaxation times multiscale model electronic spiliopoulos irreversible langevin samplers variance reduction large deviations approach nonlinearity spiliopoulos variance reduction irreversible langevin samplers diffusion graphs electron commun robert casella monte carlo 
statistical methods springer science business media villani hypocoercivity number american mathematical hwang chu attaining optimal gaussian diffusion acceleration stat | 10 |
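The row above closes with an iterative scheme for transforming a trace-free symmetric matrix into one with zeros on the diagonal by successive plane rotations, with the rotation angle chosen through an arctan condition. The sketch below implements that idea; it works directly on the matrix rather than diagonalising first, and the tolerance, the sanity check and all names are ours rather than the paper's.

```python
import numpy as np

def zero_diagonal_transform(C, tol=1e-12):
    """Return an orthogonal U such that U @ C @ U.T has zero diagonal.

    C must be symmetric with trace zero; the trace-free condition is what
    guarantees that a diagonal entry of the opposite sign is always
    available.  Sketch of the iterative rotation scheme described above.
    """
    A = np.array(C, dtype=float)
    n = A.shape[0]
    U = np.eye(n)
    for k in range(n - 1):
        a = A[k, k]
        if abs(a) < tol:
            continue
        # the trailing diagonal sums to -a, so its most "opposite" entry
        # has the sign opposite to a
        tail = np.diag(A)[k + 1:]
        j = k + 1 + int(np.argmin(np.sign(a) * tail))
        b, c = A[k, j], A[j, j]
        # pick theta so that a*cos^2 + 2b*sin*cos + c*sin^2 vanishes;
        # with t = tan(theta) this reads c*t^2 + 2b*t + a = 0
        t = (-b + np.sqrt(b * b - a * c)) / c
        theta = np.arctan(t)
        G = np.eye(n)
        G[k, k] = G[j, j] = np.cos(theta)
        G[k, j], G[j, k] = np.sin(theta), -np.sin(theta)
        A = G @ A @ G.T
        U = G @ U
    return U

# sanity check on a random trace-free symmetric matrix
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
M = (M + M.T) / 2.0
M -= (np.trace(M) / 5.0) * np.eye(5)
U = zero_diagonal_transform(M)
assert np.allclose(np.diag(U @ M @ U.T), 0.0, atol=1e-9)
```

Each rotation only touches rows and columns k and j, so diagonal entries already set to zero are preserved, and the preserved zero trace forces the last diagonal entry to vanish automatically after the loop.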
divergences measures amadou diadie gane samb aug lerstad gaston berger lsa pierre marie curie france introduction paper deal divergence measures estimation using wavelet classical probability density functions let class two probability measures divergence measure application divergence measure necessarily symmetrical neither metric better explain concern let intoduce celebrated divergence measures based probability density functions let suppose respect measure usually lebesgues measure measure family renyi divergence measures indexed known name log family tsallis divergence measures indexed also known name finally divergence measure log dkl latter measure may interpreted limit case renyi family tsallis one letting well near tsallis family may seen derived fisrt order expansion based first order expansion logarithm function neigborhood unity although focusing aforementioned divergence measures attract attention reader exist quite number let cite example ones denamed alisilvey distance jeffrey divergence see chernoff divergence etc according dozen different divergence measures one find literature divergences measures consistency bands coming back divergence measures interest want highlight important applications indeed divergence proven useful applications let cite may similarity measure image registration multimedia classification see also applicable loss function evaluating optimizing performance density estimation methods see estimation divergence samples drawn unknown distributions gauges distance distributions divergence estimates used clustering particular deciding whether samples come distribution comparing estimate threshold divergence estimates also used determine sample sizes required achieve given performance levels hypothesis testing divergence gauges differently two random variables distributed provides useful measure discrepancy distributions frame information theory key role divergence well known growing interest applying divergence various fields science engineering purpose estimation classification etc divergence also plays central role frame large deviations results including asymptotic rate decrease error probability binary hypothesis testing problems reader may find applications descriptions following papers may see two kinds problems encounter dealing objects first divergence measures may finite whole support distributions two remarks apply many divergence measures problems avoided boundedness assumption singh krishnamurthy case respect measure authors suppose exist two finite numbers quantities example finite expressions sures also finite follow authors adopting assumption throughout paper divergence measures tests divergence measures may applied two statistical problems among others first may used problem like let sample unkown probability distribution want test hypothesis equal known fixed probability example jager proposed uniform probability distribution divergences measures consistency bands theoritically want test null hypothesis versus use general test statistic test statistic form answer question estimating divergence measure estimator based sequences empirical probabilities establishing asymptotic theory necessary conclude divergence measures comparison tool problem comparison tool two distributions may two samples wonder whether come probability measure also may two different cases first two independent samples respectively random variable empirical divergence natural estimator depends statistical test data may aslo paired measurements case case testing 
equality margins based empirical probabilities couple related work krisnamurthy singh poczos studied mainly independent case two distributions comparaison used divergence measures based probability density functions concentrated reyni singh poczos proposed divergence estimators achieve parametric convergence rate depends smoothness densities holder class smothness showed min singh poczos krishnamurthy proposed divergence estimators achieve parametric convergence rate weaker conditions given divergences measures consistency bands krishnamurthy proposed three estimators divergence measures plugging linear lin quadratic one showed lin quadratic estimator poczos jeff considered two samples necessarily size used neighbour knn based density estimators showed reyni estimator est asymptotically unbiaised lim consistent norm lim conditions densities liu worked densities holder classes whereas work applies densities bessov class case asymptotic distributions estimators currently unknown view case rely available data using sample size may lead reduction apply method one take minimum two sizes loose information suggest come back general case study asymptotics based samples fitting approach may cite hamza used modern techniques mason consistency bounds kernel estimators authors hamza current version work address existence problem divergence measures seize opportunity papers correct also fitting case using measures symmetry deal estimation decided cases better divergences measures consistency bands paired case aware works yet approach important addressed paper devoted general study estimation measures three level fitting independent comparison paired comparison use empirical estimations density functions parzen estimator wavelet ones main novelty resides wavelet approach using parzen statistics main tool modern techniques mason consistency bounds kernel wavelet approch mainly back nickl paper since tools using level developpement results parzen scheme use distributions pertaining wavelet frame set univariate distributions give precise account wavelet theory applications statistical estimation using hardle paper organized follows section describe use density estimations parzen wavelets well statements main hypothesis wavelets broader account given appendix section deal fitting questions section devoted independent distribution comparison finally section deal margins distribution comparison sections establish strong efficiency central limit theorems standards assumptions densities scale function wavelet kernel formalized sequel establish following properties define linear wavelet density estimators establish consistency density estimators establish asymptotic consistency showing theorem prove estimator asymptotically normal theorem derive also prove lastly prove organization paper plan results going establish general results consistency asymptotic normality next results particular divergences measures follow corollaries divergences measures consistency bands general conditions let functional two densities functions satisfying assumption form function class adopt following notations respect partial derivatives require following general conditions following integrals finite measurable sequences functions uniformly converging zero max sup remark results may result dominated convergence theorem monotone convergence theorem limit theorems may either express conditions results hold true general function choose state final results next check particular cases reside real interests general results concern 
estimations one sample see theorem two samples problems see theorem case use linear wavelet estimators denoted defined mainly use results nickl conditions define kfn kgn stands divergences measures consistency bands wavelet setting wavelet setting involves two functions orthonormal basis associated kernel function wavelets defined mesurable function define assuming following assumption bounded compact support either father wavelet weak derivatives order vanishing moments assumption bounded vanishes assumption resolution level assumption one log log log log sup conditions allow use results definition given two independent samples size respectively random variable absolute continuous law straighforward wavelets estimators defined kjn kjn sequel suppose densities belong besov space see khks sup sup sup function wavelet coefficients spaces spaces contain classical spaces given definitions describe use wavelet approach divergences measures consistency bands remarquable theorem densities belong satisfies satisfy log log almost surely converge zero rate order establish asymptotic normality divergences estimators need recall facts kernels wavelets theorem provides asymptotic normality necessary setting asymptotic normality divergence measure provided finitness kjn theorem assumption kjn kjn symbol denotes convergence law denotes expection measurable function proof theorem postpooned subsection main results sequel functional two densities functions satisfying assumption defined function class define functions constants suppose finites divergences measures consistency bands one side estimation suppose either sample unknown known want study limit behavior sample unknown known want study limit behavior theorem assumption consistency lim sup lim sup asymptotic normality kjn kjn kjn kjn two sides estimation suppose two samples respectively unknown want study limit behavior theorem assumption lim sup proofs given section right going apply results particular divergence measures estimations check conditions divergences measures consistency bands particular cases results divergences measures follow corollaries since particular cases ensure general conditions begin giving main assumption densities assumption exists compact containing supports densities throughout subsection use assumption integrales constantes integrables use dominate convergence theorem based remark meaning assumption conditions satisfied following divergence measures functions updated cases way since depend bessov functions randoms variables case hellinger integral order start hellinger integral order defined one let corollary one sample estimation consistency lim sup lim sup asymptotic normality kjn kjn kjn kjn divergences measures consistency bands whith corollary two side estimation consistency lim sup asymptotic normality following handling hellinger integral order conditions satisfied assumption case tsallis divergence measure corollary one side estimation consistency lim sup lim sup asymptotic normality kjn kjn whith corollary two sides estimation conditions theorem consistency lim sup asymptotic normality divergences measures consistency bands case reyni divergence measure log corollary one side estimation consistency asymptotic normality corollary two sides estimation consistency asymptotic normality proofs corollaries postponed case divergence measure dkl case log one log log thus log log assumption conditions satisfied measurables sequences functions uniformly converging zero divergences measures consistency bands corollary one 
side estimation consistency lim sup lim sup dkl dkl asymptotic normality dkl dkl dkl dkl kjn kjn kjn kjn log corollary two sides estimation consistency lim sup dkl asymptotic normality dkl dkl case divergence measure proceed different route one also divergences measures consistency bands let deduce let give theorem one side estimation consistency lim sup lim sup asymptotic normality kjn kjn kjn kjn theorem two sides estimation consistency lim sup normality applications statistics tests divergence measures may applied two statistical problems among others first may used problem like let sample unkown probability density function want test hypothesis equal known fixed probability density function want test versus unctions besov space fixed test pointwise null hypothesis versus divergences measures consistency bands using particular divergences measure like divergences proposed test statistics form particular cases consider log limit distribution null hypothesi testing null hypothesis propose tests statistics using tsallis renyi kulback divergence measures suppose null hypothesis holds known follows previous work kjn kjn kjn kjn renya divergence measure kjn kjn whith dkl dkl log kjn kjn confidence bands want obtain proofs rest section proceeds follows establish proof theorem devoted proof theorem subsection present proof theorem subsection devoted proofs corollaries divergences measures consistency bands proof theorem proof suppose assumptions satisfied start showing first sum empirical process based sample applied function kjn random variable definition kjn kjn write kjn kjn therefore kjn kjn kjn kjn kjn one kjn kjn kjn boundedness support compactness give since vanishes bounded finally kjn usual gives kjn kjn kjn theorem proved show step use fact theorem one kjn kkjn divergences measures consistency bands therefore note moment condition theorem quoted equivallent assumption see page justify use context finally conclude defined proof theorem proof following development going use systematically mean value theorem bivariate dimensional real functions depending always satisfying ease notation introduce two following notations used sequel let max recall start one side asymptotic estimation one application function one exists application function write hence divergences measures consistency bands therefore lim sup yield prove let prove swapping roles one obtains one obtains give prove focus asymptotic normality one sample estimation going back theorem kjn kjn provided thus proved show one nan let show chebyshev inequality one nan divergences measures consistency bands theorem gine one log log use fact thus finally nan since log log log log finally using one yields ends proof going back one theorem one kjn kjn since provided similarly nbn nbn previously give finally shows holds completes proof theorem divergences measures consistency bands proof theorem proof proceed techniques led prove begin breaking two terms already handled application function one exists second application function get therefore thus give lim sup proves desired result remains prove divergences measures consistency bands going back one theorem one since provided one previously one ncn ncn ncn conditions one finally shows holds completes proof theorem proofs corollaries proof corollary one log log using taylor expansion log follows almost surely log log log proves desired result proof similar previous proof prove recall divergences measures consistency bands taylor expansion log follows almost surely log log log therefore 
proved similarly finally ends proof corollary proof corollary proof start consistency previous work one gets log log hence proves let find asymptotic normality one gets hence obtain log log therefore references topsoe inequalities information divergence related measures discrimination ieee trans inf theory vol evren applications jeffreys divergences multinomial populations cichocki amari families flexible robust measures similarities entropy moreno vasconcelos divergence based kernel svm classification multimedia applications laboratories cambridge tech jager jon wellner goodness fit tests via annals statistics divergences measures consistency bands hall loss density estimation ann vol bhattacharya efficient estimation shift parameter grouped data annals mathematical statistics vol berlinet devroye gyorfi asymptotic normality error density estimation statistics vol liu shum boosting proc ieee computer society conference computer vision pattern recognition vol june kullback leibler information sufficiency ann math stat fukunaga hayes reduced parzen classifier ieee trans pattern anal mach intell cardoso infomax maximum likelihood blind source separation ieee signal process lett cardoso blind signal separation statistical principles proc ieee ojala pietik ainen harwood comparative study texture measures classification based featured distributions pattern recogn hastie tibshirani classification pairwise coupling ann stat buccigrossi simoncelli image compression via joint statistical characterization wavelet domain ieee trans image process moreno vasconcelos divergence based kernel svm classification multimedia applications adv neural inform process syst mackay information theory inference learning algorithms cambridge university press cambridge cover thomas elements information theory wiley darbellay vajda estimation information adaptive partitioning observation space ieee trans inf theory vol may common independent component analysis new concept signal vol tsitsiklis decentralized detection advances statistical signal processing new york jai akshay krishnamurthy kirthevasan kandasamy barnaba poczos larry wasserman nonparametric estimation divergence selcuk university natural applied science liu lafferty wasserman exponential concentration inequality shashank singh barnabas poczos generalized exponential concentration inequality divergence estimation carnegie mellon university forbes pittsburgh usa poczos jeff estimation liu lafferty wasserman exponential concentration inequality mutual information estimation neural information processing systems nips hamza ngom deme mendy estimators divergence measures strong uniform consistency xuanlong martin michael estimating divergence functionals likelihood ratio convex risk minimization ieee transastions information theory nguyen wainwright jordan estimating divergence functionals likelihood ratio convex risk minimization ieee trans inform theory vol einmahl mason empirical process approach uniform consistency function estimators theoret einmahl mason uniform bandwidth consistency function estimators ann mason uniform bandwidth estimation integral functionals density function scand cai kulkarni verdu universal estimation entropye divergence via block sorting proc ieee int symp information theory lausanne switzerland divergences measures consistency bands cai kulkarni verdu universal divergence estimation sources ieee trans inf theory submitted publication ziv merhav measure relative entropy individual sequences application universal classification ieee 
trans vol jul wolfgang hardie gerard kerkyacharian dominique picard alexander tsybakov wavelets approximation statistical applications hero michel estimation nyi information divergence via pruned minimal spanning trees ieee workshop higher order statistics caesaria israel jun moon hero iii ensemble estimation multivariate ieee international symposium information theory berlinet gyorfi nes asymptotic normality relative entropy multivariate density estimation publications institut statistique paris vol bickel rosenblatt global measures deviations density function estimates annals statistics kevin multivariate estimation confidence nickl uniform limit theorem wavelet density estimators annals probability vol doi | 10 |
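The row above studies plug-in estimators of Kullback-Leibler, Renyi and Tsallis divergences obtained by substituting nonparametric density estimates (wavelet estimators in the paper) into the divergence formulas D_KL = int f log(f/g), D_alpha = (alpha-1)^{-1} log int f^alpha g^{1-alpha} and T_alpha = (alpha-1)^{-1} (int f^alpha g^{1-alpha} - 1). The snippet below is a minimal illustration of that plug-in principle for two independent samples; Gaussian kernel estimates from SciPy stand in for the paper's wavelet estimators, a uniform grid replaces the compact-support assumption, and the parameter names are ours.

```python
import numpy as np
from scipy.stats import gaussian_kde

def plug_in_divergences(x, y, alpha=0.5, n_grid=2048, eps=1e-12):
    """Plug-in estimates of the Kullback-Leibler, Renyi and Tsallis
    divergences between the laws of two independent 1-D samples x and y.

    Sketch only: kernel density estimates play the role of the wavelet
    estimators f_n, g_n, and the integrals are computed by a Riemann sum
    on a uniform grid covering both samples.
    """
    lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
    grid = np.linspace(lo, hi, n_grid)
    dx = grid[1] - grid[0]
    f = np.maximum(gaussian_kde(x)(grid), eps)   # density estimate for x
    g = np.maximum(gaussian_kde(y)(grid), eps)   # density estimate for y
    kl = np.sum(f * np.log(f / g)) * dx
    hellinger = np.sum(f ** alpha * g ** (1.0 - alpha)) * dx  # order-alpha integral
    return {"KL": kl,
            "Renyi": np.log(hellinger) / (alpha - 1.0),
            "Tsallis": (hellinger - 1.0) / (alpha - 1.0)}

rng = np.random.default_rng(1)
print(plug_in_divergences(rng.normal(0.0, 1.0, 2000), rng.normal(0.5, 1.0, 2000)))
```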
jan deep learning reconstruction dual energy baggage scanner yoseob han jingu kang jong chul kaist daejeon korea email hanyoseob gemss medical seongnam korea email kaist daejeon korea email homeland transportation security applications explosive detection system eds widely used limitations recognizing shape hidden objects among various types computed tomography systems address issue paper interested stationary using fixed sources detectors however due limited number projection views analytic reconstruction algorithms produce severe streaking artifacts inspired recent success deep learning approach sparse view reconstruction propose novel image sinogram domain deep learning architecture reconstruction sparse view measurement algorithm tested real data prototype dual energy stationary eds baggage scanner developed gemss medical systems korea confirms superior reconstruction performance existing approaches index explosive detection system eds sparseview convolutional neural network cnn ntroduction homeland aviation security applications increasing demand eds system carryon baggage screening produce accurate object structure segmentation threat detection often possible system captures projection views one two angular directions currently two types eds systems stationary eds largely medical baggage screening carried continuously often difficult continuously screen bags possible mechanical overloading gantry system hand stationary eds system uses fixed sources detectors making system suitable routine baggage inspection example fig shows source detector geometry prototype stationary system developed gemss medical systems korea shown fig nine pairs source dual energy detector opposite direction distributed angular interval seamless screening without stopping convey belt pair source detectors arranged along shown fig different projection view data collected baggages moves continuously conveyor belt fan beam projection data obtained rebinning measurement data type stationary system suitable eds fig source positions prototype view dual energy eds direction direction respectively applications require rotating gantry projection views difficult use conventional filtered backprojection fbp algorithm due severe streaking artifacts therefore advanced reconstruction algorithms fast reconstruction time required eds iterative reconstruction mbir total variation penalty extensively investigated inspired recent success deep learning approach sparse view limited angle outperform classical mbir approach paper aims developing deep learning approach sparse view eds however neural network training using retrospective angular subsampling existing works possible prototype system since data real world sparse view eds therefore propose novel deep learning approach composed image domain sinogram domain learning compensate imperfect label data heory problem formulation recall forward model sparse view eds system represented denotes projection operator volume image domain sinogram data denoting detector projection angle direction fig sinogram interpolation flow proposed method final reconstruction obtained applying fbp interpolated sinogram data conveyor belt travel respectively see fig coordinate systems denotes view sampling operator measured angle set refers measured sinogram data projection view data use notation denotes specific view main technical issue sparse view reconstruction solution specifically exists null spacce leads infinite number feasible solutions avoid solution constrained form penalized mbir formulated 
klf subject refers linear operator denotes norm case penalty corresponds derivative uniqueness guaranteed denotes null space operator instead designing linear operator common null space zero design frame dual shrinkage operator specifically sparse view reconstruction algorithm finds unknown satisfy data fidelity frame constraints word directly removes null space component constraint use training neural network image domain cnn satisfies denotes images available training data defining since right inverse unique due existence null space thus show feasible solution since training data therefore neural network training problem satisfy equivalently represented min image regularization also active field research image denoising inpainting etc one important contributions deep convolutional framelet theory correspond encoder decoder structure convolutional neural network cnn respectively shrinkage operator emerges controlling number filter channels nonlinearities specifically convolutional neural network designed derivation image projection domain cnns denotes training data set composed image sparse view projection since representative right inverse sparse view projection inverse radon transform zero padding missing view implemented using standard fbp algorithm fact main theoretical ground success image domain cnn data available moreover rebinning makes problem separable slices use fbp slice shown fig however main technical difficulties eds system image one could use physical phantoms atomic number form set images data set may different real bags need new method account lack neural network training thus overcome lack groundtruth data approximate label images generated using mbir penalty using mbir reconstruction label data image domain network trained learn mapping image mbir reconstruction domain one downside approach network training optimal since label data image thus generated sinogram data denoised volume may biased thus impose additional frame constraint sinogram data addition measured angle sinogram domain cnn denotes sinogram data measured leads following network training min rqi specifically shown fig sinogram data generated domain applying forward projection operator along views stacking image domain network output multiple slices form reconstruction volume domain next sinogram domain network trained learn mapping synthetic sinogram data real projection data domain since real projection data available views sinogram network training performed using synthetic real projection data measured projection views optimization problems solved sequentially simultaneously paper adopt sequential optimization approach simplicity neural networks trained inference done simply obtaining volume images view projection data fbp algorithm fed obtain denoised volume data applying projection operator generate projection view data domain fed obtain denoised sinogram data angle final reconstruction obtained applying fbp algorithms one could use using additional denosing algorithmic flow illustrated fig iii ethods real eds data acquisition collected eds data using prototype stationary view dual energy system developed gemss medical systems korea shown fig distance source detector dsd distance source fig cnn architecture image singoram domain networks object dso respectively number detector pitch region interest roi pixel size detectors collect low high energy respectively collect sets projection data prototype eds baggage scanner among sets dataset set realistic bags set used training phase validation performed 
two one set used test network architecture training fig illustrates modified structure image domain sinogram domain networks account image sinogram data input network two channel image sinogram data proposed network consists convolution layer batch normalization rectified linear unit relu contracting path connection concatenation detail parameters illustrated shown fig proposed networks trained stochastic gradient descent sgd regularization parameter learning rate set reduced step step epoch number epoch batch size patch size image projection data respectively network implemented using matconvnet toolbox matlab environment mathworks natick central processing unit cpu graphic processing unit gpu specification cpu ghz gtx gpu respectively xperimental esults evaluate performance proposed method perform image reconstruction real eds prototype system fig illustrates image reconstruction results bag using various methods fbp mbir penalty image domain cnn proposed method fbp reconstruction results suffered severe streaking artifacts difficult see threats tomographic reconstruction rendering mbir image domain cnn slight better reconstruction quality detailed structures fully recovered several objects detected indicated red arrow fig moreover rendering results fig fig domain sinogram data measurement fbp mbir image cnn proposed method number written images nmse value yellow red arrows indicate grenade knife respectively high quality images using real data prototype eds system demonstrated proposed method outperforms existing algorithms delivering high quality three reconstruction threat detection acknowledgment fig reconstruction results various methods correctly identify shape grenade knife well frame bag possible using methods image domain perform quantitative evaluation using normalized mean squares error nmse sinogram domain specifically obtaining final reconstruction perform forward projection generate sinogram data measured projection view calculated normalized mean square errors table showed proposed method provides accurate sinogram data compared methods moreover projection data fig showed projection data proposed method much closer measurement data table nmse value comparison various methods energy level fbp image cnn kvp kvp onclusion paper proposed novel deep learning reconstruction algorithm prototype dual energy eds baggage scanner even though number projection view sufficient high equality reconstruction method learns relationships tomographic slices domain well projections domain image sinogram data successively refined obtain work supported korea agency infrastructure technology advancement grant number eferences sagar mandava david coccarelli joel greenberg michael gehm amit ashok ali bilgin image reconstruction baggage scanning anomaly detection imaging adix international society optics photonics vol sherman kisner eri haneda charles bouman sondre skatter mikhail kourinny simon bedford limited view angle iterative reconstruction computational imaging vol yoseop han jaejoon yoo jong chul deep residual learning compressed sensing reconstruction via persistent homology analysis arxiv preprint yoseob han jong chul framing via deep convolutional framelets application arxiv preprint kyong hwan jin michael mccann emmanuel froustey michael unser deep convolutional neural network inverse problems imaging ieee transactions image processing vol jawook jong chul wavelet domain residual learning reconstruction arxiv preprint cai raymond chan zuowei shen image inpainting algorithm applied 
computational harmonic analysis vol jong chul seob han eunjoo cha deep convolutional framelets general deep learning framework inverse problems arxiv preprint olaf ronneberger philipp fischer thomas brox convolutional networks biomedical image segmentation international conference medical image computing intervention springer alex krizhevsky ilya sutskever geoffrey hinton imagenet classification deep convolutional neural networks advances neural information processing systems andrea vedaldi karel lenc matconvnet convolutional neural networks matlab proceedings acm international conference multimedia acm | 2 |
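The row above describes a two-network reconstruction pipeline for sparse-view dual-energy CT: filtered back-projection of the measured views, an image-domain CNN that removes streaking, forward projection of the denoised slice onto a dense set of views, a sinogram-domain CNN that refines the synthetic projections, and a final filtered back-projection. The sketch below traces that inference chain for a single 2-D slice; the two networks are passed in as opaque callables (training is not shown), scikit-image's radon/iradon stand in for the scanner's projection and FBP operators, and the re-imposition of the measured views is our own simplification of the data-consistency step.

```python
import numpy as np
from skimage.transform import radon, iradon

def two_stage_reconstruction(sparse_sinogram, sparse_angles, dense_angles,
                             image_net, sinogram_net):
    """Sketch of the image-domain / sinogram-domain inference pipeline
    for one 2-D slice.  `image_net` and `sinogram_net` are the two trained
    CNNs, given here simply as callables mapping arrays to arrays."""
    # 1. FBP of the sparse-view data (streaky initial reconstruction)
    x0 = iradon(sparse_sinogram, theta=sparse_angles, filter_name="ramp")
    # 2. image-domain CNN removes streaking artefacts
    x1 = image_net(x0)
    # 3. forward-project the denoised slice onto a dense set of views
    synth_sino = radon(x1, theta=dense_angles)
    # 4. sinogram-domain CNN refines the synthetic projections; the measured
    #    views are re-imposed on their own angles (data consistency)
    refined = sinogram_net(synth_sino)
    measured_cols = np.isin(dense_angles, sparse_angles)
    refined[:, measured_cols] = sparse_sinogram
    # 5. final FBP from the refined, dense sinogram
    return iradon(refined, theta=dense_angles, filter_name="ramp")
```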
jun adaptive nonparametric drift estimation diffusion processes using expansions frank van der meulen moritz jan van june abstract consider problem nonparametric estimation drift continuously observed diffusion periodic drift motivated computational considerations van der meulen defined prior drift randomly truncated randomly scaled series expansion gaussian coefficients study behaviour posterior obtained prior frequentist asymptotic point view true data generating drift smooth proved posterior adaptive posterior contraction rates optimal log factor contraction rates derived well introduction assume continuous time observations diffusion process defined weak solution stochastic differential equation sde dwt brownian motion drift assumed measurable function real line square integrable assumed periodicity implies alternatively view process diffusion circle model used dynamic modelling angles see instance pokern hindriks interested nonparametric adaptive estimation drift problem recently studied multiple authors spokoiny proposed locally linear smoother bandwidth choice rate adaptive respect optimal delft mekelweg delft netherlands address leiden university niels bohrweg leiden netherlands address math vries institute mathematics science park amsterdam netherlands address log factors interestingly result require ergodicity dalalyan kutoyants dalalyan consider ergodic diffusions construct estimators asymptotically minimax adaptive sobolev smoothness drift results extended multidimensional case strauch paper focus bayesian nonparametric estimation paradigm become increasingly popular past two decades overview advances bayesian nonparametric estimation diffusion processes given van zanten bayesian approach requires specification prior ideally prior drift chosen drawing posterior computationally efficient time ensuring resulting inference good theoretical properties quantified contraction rate rate shrink balls around true parameter value maintaining posterior mass formally semimetric space drift functions contraction rate sequence positive numbers posterior mass balls converges probability law drift general discussion contraction rates see instance ghosal ghosal van der vaart diffusions problem deriving optimal posterior convergence rates studied recently additional assumption drift integrates zero papaspiliopoulos mean zero gaussian process prior proposed together algorithm sample posterior precision operator inverse covariance operator proposed gaussian process given laplacian identity operator first consistency result shown pokern van waaij van zanten shown rate result improved upon slightly general class priors drift specifically paper authors consider prior defined cos sin standard fourier series basis functions sequence independent standard normally distributed random variables positive shown fixed assumed smooth optimal posterior rate contraction obtained note result nonadaptive regularity prior must match regularity obtaining optimal posterior contraction rates full range possible regularities drift two options investigated endowing either hyperprior second option results desired adaptivity possible regularities prior additional prior good asymptotic properties computational point view infinite series expansion inconvenient clearly implementation expansion needs truncated random truncation series expansion well known method defining priors bayesian nonparametrics see instance shen ghosal exactly idea exploited van der meulen prior defined law random function figure elements basis 
functions constitute basis see fig functions feature prominently construction brownian motion see instance bhattacharya waymire paragraph prior coefficients equipped gaussian distribution truncation level scaling factor equipped independent priors truncation absence scaling increases apparent smoothness prior illustrated deterministic truncation example van der vaart van zanten whereas scaling number decreases apparent smoothness scaling number increases apparent smoothness limited extent see example knapik simplest type prior obtained taking coefficients independent however also consider prior obtained first expanding periodic process basis followed random scaling truncation explain specific stationarity properties prior make natural choice draws posterior computed using reversible jump markov chain monte carlo mcmc algorithm van der meulen types priors fast computation facilitated leveraging inherent sparsity properties stemming compact support functions discussion van der meulen argued inclusion scaling random truncation prior beneficial however claim supported simulations results paper support claim theoretically proving adaptive contraction rates posterior distribution case prior used start general result van der meulen brownian semimartingale models adapt setting take account drift assumed information accumulates different way compared general ergodic diffusions subsequently verify resulting prior mass remaining mass entropy conditions appearing adapted result fied prior defined equation application results shows true drift function smooth appropriate choice variances well priors posterior drift contracts rate log around true drift log factor rate see instance kutoyants theorem moreover adaptive prior depend case true drift greater equal method guarantees contraction rates equal essentially corresponding application results shows obtain contraction rate paper organised follows next section give precise definition prior section general contraction result class diffusion processes considered derived main result posterior contraction presented section many results paper concern general properties prior application confined drift estimation diffusion processes illustrate show section results easily adapted nonparametric regression nonparametric density estimation proofs gathered section appendix contains couple technical results prior construction model posterior let space square integrable functions lemma sde unique weak solution proof section let denote law process generated replaced denotes law drift zero absolutely continuous respect density exp given prior path posterior given borel set assertions verified part proof theorem motivating choice prior interested randomly truncated scaled series priors simultaneously enable fast algorithm obtaining draws posterior enjoy good contraction rates explain mean first item consider first prior finite series prior let denote basis functions mean zero gaussian random vector precision matrix assume prior given conjugacy follows van der meulen lemma matrix referred grammian expressions follows computationally advantageous exploit compactly supported basis functions whenever nonoverlapping supports depending choice basis functions grammian specific sparsity structure set index pairs independently sparsity structure inherited long sparsity structure prior precision matrix matches next section make specific choice basis functions prior precision matrix definition prior define hat function basis functions given let figure plotted together define prior 
gaussian coefficients truncation level scaling factor equipped hyper priors extend periodically want consider function real line identify double index single index write let say belongs level thus belong level convenient notational purposes levels basis functions per level orthogonal essentially disjoint support define let cov define restriction denote assume multivariate normally distributed mean zero covariance matrix prior following hierarchy use denote joint distribution consider two choices priors sequence first choice consists taking independent gaussian random variables coefficients independent standard deviation random draws prior scaled piecewise linear interpolations dyadic grid brownian bridge plus random function choice motivated fact case var independent construct second type prior follows define cyclically stationary centred process periodic gaussian process covariance kernel cov process cyclically stationary covariance depends unique gaussian markovian prior continuous periodic paths property makes cyclically stationary prior appealing choice respects symmetries problem realisation continuous extended periodic function represented infinite series expansion basis finally scaling truncating obtain second choice prior drift function visualisations covariance kernels cov first prior brownian bridge type second prior periodic process prior parameter shown fig sparsity structure induced choice conditional posterior gaussian precision matrix grammian corresponding using basis functions including level coefficients independent trivial see precision matrix destroy sparsity structure defined convenient numerical computations next lemma details situation periodic processes lemma let defined equation figure heat maps cov case left brownian bridge plus random function right periodic process parameter chosen var sparsity structure precision matrix infinite stochastic vector appearing series representation equals sparsity structure defined entries covariance matrix random gaussian coefficients satisfy following bounds coth otherwise proof given section first part lemma also prior destroy sparsity structure second part asserts entries zero smaller order diagonal entries quantifying covariance matrix coefficients schauder expansion close diagonal matrix posterior contraction diffusion processes main result van der meulen gives sufficient conditions deriving posterior contraction rates brownian semimartingale models following theorem adaptation refinement theorem lemma van der meulen diffusions defined circle assume observations let prior henceforth may depend choose measurable subsets sieves define balls number set semimetric denoted defined minimal number radius needed cover set logarithm covering number referred entropy following theorem characterises rate posterior contraction diffusions circle terms properties prior theorem suppose sequence positive numbers bounded away zero assume constant every measurable set every constant big enough log every big enough equations referred entropy condition small ball condition remaining mass condition theorem respectively proof theorem section theorems posterior contraction rates main result section theorem characterises frequentist rate contraction posterior probability around fixed parameter unknown smoothness using truncated series prior section make following assumption true drift function assumption true drift expanded basis exists sup note use slightly different symbol norm denote remark assumption equivalent assuming smooth follows definition 
basis functions therefore follows equations combination equation nickl section equivalent smoothness coincide proposition nickl prior defined eqs make following assumptions assumption covariance matrix satisfies one following conditions fixed exists independent particular second assumption fulfilled prior defined assumption prior truncation level satisfies positive constants exp exp prior scaling assume existence constants exp prior defined logy poisson distributed equation satisfied whole range distributions including popular family inverse gamma distributions since inverse gamma prior decays polynomially lemma condition shen ghosal satisfied hence posterior contraction results applied prior obtain following result prior theorem assume satisfies assumption suppose prior satisfies assumptions let sequence positive numbers converges zero constant measurable set every positive constant sufficiently large log log log log log log following theorem obtained applying bounds theorem taking log theorem assume satisfies assumption suppose prior satisfies assumptions log means true parameter rate obtained timal possibly log factor particular space every small positive therefore converges rate essentially different function used defined compact interval basis elements defined forcing theorem derived results applications still holds provided fixed smoothness assumptions changed accordingly finite number basis elements added redefined long easy see results imply posterior convergences rates weaker rate stronger apply ideas knapik salomond obtain rates stronger theorem assume true drift satisfies assumption suppose prior satisfies assumptions let log rates similar rates obtained density estimation nickl however proof less involved note consistency applications nonparametric regression density estimation general results also apply models following results obtained satisfying assumption prior satisfying assumptions nonparametric regression model direct application properties prior shown previous section obtain following result nonparametric regression problem assume independent gaussian observation errors apply ghosal van der vaart example theorem obtain every log similar way theorem every log density estimation let consider independent observations unknown density relative lebesgue measure let denote space densities relative lebesgue measure natural distance densities hellinger distance defined define prior keeb endowed prior theorem periodic version assume log sense assumption applying ghosal theorem van der vaart van zanten lemma theorem obtain big enough constant log proofs proof lemma since conditions karatzas shreve theorem hold sde unique weak solution explosion time assume without loss generality define random times inf periodicity drift markov property random variables independent identically distributed note inf hence follows almost surely latter holds true since positive probability clear continuity diffusion paths proof lemma proof first part proof introduce notation write supp supp set indices become lattice partial order denote supremum identify similarly denote time points corresponding maxima without loss generality assume interiors supports disjoint case max supp min supp values found midpoint displacement technique coefficients given gaussian process vector gaussian say infinite precision matrix exists set conditional independent define set determine process times conditionally independent given markov property nonperiodic process result follows since lemma let evs proof without loss 
generality assume result follows scaling sides proof second part denote support respectively let let var var coth cov sinh note covariance matrix eigenvalues tanh coth strictly positive definite midpoint displacement evs assume without loss generality define halfwidth smaller interval consider three cases entries diagonal interiors supports support contained support case elementary computations assumption last display bounded hence case necessarily twofold application lemma sinh sinh using convexity sinh obtain bound note convex derive using bound fact coth easily seen plot case using obtain using calculation lemma noting obtain write simple computation shows derivative nonnegative hence increasing note maximising gives therefore follows terms derive following bounds write decreasing log convex positive log case bound value endpoints using obtain using bound exp obtain proof theorem general result deriving contraction rates brownian models proved van der meulen theorem follows upon verifying assumptions result diffusion circle assumptions easily seen boil every measures equivalent posterior defined equation well defined define random hellinger semimetric constants lim start verifying third condition recall local time process defined random process satisfies every measurable function integrals defined since working functions define periodic local time note continuous probability one hence support compact probability one since positive support follows sum definition finitely many nonzero terms therefore well defined function provided involved integrals exists follows schauer van zanten theorem converges positive deterministic function depending bounded away zero infinity since hellinger distance written follows third assumption satisfied conditions follow arguing precisely lemmas van waaij van zanten respectively key observation convergence result also holds nonzero assumed paper stated result follows theorem van der meulen taking paper proof theorem assumption proof proceeds verifying conditions theorem assumption true drift represented define truncated version small ball probability choose integer notational convenience write instead remainder proof lemma therefore implies let denotes probability density inf taken assumption sufficiently small second part assumption exp choice first part assumption exists positive constant exp exp log sufficiently small lower bounding middle term equation write implies max max gives bound choice standard normally distributed hence log log log log inequality follows lemma third term bounded hence log log log derive bounds first three terms right sufficiently small inequality implies logc log log bounding first term rhs sufficiently small log log log log log log log log positive constant bounding second term rhs sufficiently small logc logc final inequality immediate case else suffices verify exponent nonnegative assumption bounding third term rhs sufficiently small case case exponent positive assumption hence small enough log log get log inf log log conclude right hand side bounded exp log positive constant sufficiently small entropy remaining mass conditions denote linear space spanned define proposition log log proof follow van der meulen choose define let minimal respect let minimal respect hence exists maxk take arbitrary let bounded max max appropriate choice coefficients case obtain implies log log log asserted bound follows upon choosing proposition exists constant positive constant log log proof exists positive lemma set included set lemma set max 
hence set included set hence using lemma latter bounded result follows upon applying proposition finish proof entropy remaining mass conditions choose smallest integer constant set entropy bound follows directly proposition remaining mass condition using assumption obtain exp exp log note constant made arbitrarily big choosing big enough proof theorem assumption start lemma lemma assume exists independent let submatrix diagonal matrix proof following summation first inequality trivially hand first inequality used second part second inequality follows upon including diagonal bounded final inequality follows result follows combining derived inequalities continue proof theorem write block matrix defined accordingly lemma coth coth define tanh matrix easy see positive definite positive definite follows cholesky decomposition positive definite diag positive definite note sinh coth therefore consider lemma bound choosing definition small enough assumption therefore lemma positive definite diagonal matrix diagonal entries follows implies small ball probabilities mass outside sieve behave similar assumption independent normally distributed zero mean variance case corresponds assumption posterior contraction already established stated contraction rate assumption follows anderson lemma lemma proof theorem convergence stronger norms linear embedding operator injective continuous operator inverse easily seen densely defined closed unbounded linear operator following knapik salomond define modulus continuity sup theorem knapik salomond adapted case theorem knapik salomond let prior bnc measurable sets assume positive sequence note sieves define section property lemmas modulus continuity satisfies assume result follows lemmas used proofs lemma suppose expansion norm defined proof follows max lemma proof follows lemma let log proof note thus elementary bound hence gives log lemma anderson lemma define partial order space setting positive definite independently symmetric convex sets proof see anderson lemma let sup proof note inductively hence lemma let section sup proof let nonzero note constant hence may assume furthermore since norm also assume nonnegative let global maximum clearly since linear interpolation points may also assume form consider two cases case case hence cases thus uniformly nonzero lemma let positive numbers proof suppose lemma true positive hence terms negative particular means first term second term gives two inequalities hold simultaneously reached contradiction lemma let section sup proof let proof lemma may assume nonnegative hence sup sup sup note hence repeatedly applying lemma note linear interpolation points study affine functions positive maximum attained either without lose generality attained using scaling later stadium proof assume moment hence note consider let hence note constant function kchkp khkp let khkp hence one kcg maximum attained hence kcg result follows using acknowledgement work partly supported netherlands organisation scientific research nwo research programme foundations nonparametric bayes procedures erc advanced grant bayesian statistics infinite dimensions references anderson integral symmetric unimodal function symmetric convex set probability inequalities proc amer math bhattacharya waymire basic course probability theory universitext springer new york dalalyan sharp adaptive estimation drift function ergodic diffusions ann dalalyan kutoyants asymptotically efficient trend coefficient estimation ergodic diffusion math methods ghosal ghosh van der 
vaart convergence rates posterior distributions ann ghosal van der vaart convergence rates posterior distributions noniid observations ann nickl rates contraction posterior distributions ann nickl mathematical foundations statistical models cambridge series statistical probabilistic mathematics cambridge university press hindriks empirical dynamics neuronal rhythms phd thesis vrije universiteit amsterdam karatzas shreve brownian motion stochastic calculus volume graduate texts mathematics new york second edition knapik salomond general approach posterior contraction nonparametric inverse problems bernoulli knapik van der vaart van zanten bayesian inverse problems gaussian priors ann kutoyants statistical inference ergodic diffusion processes springer new york papaspiliopoulos pokern roberts stuart nonparametric estimation diffusions differential equations approach biometrika pokern fitting stochastic differential equations molecular dynamics data phd thesis university warwick pokern stuart van zanten posterior consistency via precision operators bayesian nonparametric drift estimation sdes stochastic processes applications schauer van zanten uniform central limit theorems additive functionals diffusions circle preparation shen ghosal adaptive bayesian procedures using random series priors scandinavian journal statistics spokoiny adaptive drift estimation nonparametric diffusion model ann strauch sharp adaptive drift estimation ergodic diffusions multivariate case stochastic process van der meulen schauer van zanten reversible jump mcmc nonparametric drift estimation diffusion processes comput statist data van der meulen van der vaart van zanten convergence rates posterior distributions brownian semimartingale models bernoulli van der vaart van zanten rates contraction posterior distributions based gaussian process priors ann van waaij van zanten gaussian process methods diffusions optimal rates adaptation electron van zanten nonparametric bayesian methods diffusion models mathematical biosciences | 10 |
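The row above builds its prior for the periodic drift from a randomly truncated, randomly scaled series in the Faber-Schauder (hat-function) basis with Gaussian coefficients, i.e. the hierarchy (J, s, theta) with hyper-priors on the truncation level J and the scaling s. The snippet below draws one realisation of such a prior on a grid; the geometric truncation prior, the inverse-gamma scaling prior and the level-dependent standard deviation 2^{-j/2} (the Brownian-bridge-type choice) are illustrative stand-ins rather than the paper's exact hyper-parameters.

```python
import numpy as np

def hat(x):
    """The basic hat (Faber-Schauder) function supported on [0, 1]."""
    return np.clip(1.0 - np.abs(2.0 * x - 1.0), 0.0, None)

def sample_prior_drift(grid, trunc_rate=0.5, scale_shape=2.0, rng=None):
    """Draw one realisation of a randomly truncated, randomly scaled
    series prior for a periodic drift on [0, 1].

    Hierarchy: J ~ geometric (truncation level), s ~ inverse gamma
    (scaling), theta_{j,k} ~ N(0, 2^{-j}) independent (Brownian-bridge-type
    variances).  All hyper-parameters here are illustrative.
    """
    rng = np.random.default_rng() if rng is None else rng
    J = int(rng.geometric(trunc_rate))              # random truncation level
    s = 1.0 / rng.gamma(scale_shape, 1.0)           # inverse-gamma scaling
    f = np.zeros_like(grid, dtype=float)
    for j in range(1, J + 1):
        n_j = 2 ** (j - 1)                          # basis functions at level j
        theta = rng.normal(0.0, 2.0 ** (-j / 2.0), size=n_j)
        for k in range(n_j):
            f += theta[k] * hat(n_j * (grid % 1.0) - k)
    return s * f

grid = np.linspace(0.0, 1.0, 513)
draws = np.stack([sample_prior_drift(grid, rng=np.random.default_rng(i))
                  for i in range(5)])
```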
work progress shie mannor vianney perchet gilles stoltz approachability unknown games online learning meets optimization shie mannor shie israel institute technology technion faculty electrical engineering haifa israel vianney perchet jun ensae paristech avenue pierre larousse malakoff france gilles stoltz stoltz greghec hec paris cnrs rue france abstract standard setting approachability two players target set players play repeatedly known game first player wants average payoff converge target set player tries exclude set revisit setting spirit online learning assume first player knows game structure receives arbitrary vectorvalued reward vector every round wishes approach smallest best possible set given observed average payoffs hindsight extension standard setting implications even original target set approachable obvious expansion approached instead show impossible general approach best target set hindsight propose achievable though ambitious alternative goals propose concrete strategy approach goals method require projection onto target set amounts switching scalar regret minimization algorithms performed episodes applications global cost minimization approachability sample path constraints considered keywords approachability online learning optimization introduction approachability theory blackwell arguably general approach available far online optimization received significant attention recently learning community see abernethy references therein standard setting approachability two players payoff function target set players play repeated game first player wants average payoff representing states different objectives converge target set representing admissible values said states opponent tries exclude target set prescribed priori game starts aim average reward asymptotically inside target set theory approachability unknown games arbitrary bandit problems analysis approachability limited date cases underlying structure problem known namely vector payoff function mannor perchet stoltz mannor perchet stoltz signalling structure obtained payoffs observed consider case unknown games rewards observed priori assumption obtained particular assume underlying game structure exploit model round every action decision maker reward assumed arbitrary minimization regret could extended setting see lugosi sections know minimization regret special case approachability hence motivation question theory approachability developed unknown games one might wonder possible treat unknown game known game large class actions use approachability lifting possible principle would lead unreasonable time memory complexity dimensionality problem explode unknown games decision maker try approach target set rather tries approach best smallest target set given observed rewards defining goal terms actual rewards standard online learning pursued exceptions listed multiobjective optimization community theory smallest approachable set insight even known games may happen target set given natural target set approachable typical relaxations consider uniform expansions natural target set convex hull better answer question another property regret minimization source inspiration definition strategy see lugosi performance asymptotically good best constant strategy strategy selects stage mixed action another way formulate claim strategy performs almost well best mixed action hindsight approachability scenario question translated existence strategy approaches smallest approachable set mixed action hindsight answer negative 
unfortunately next question define weaker aim would still ambitious typical relaxations considered short literature review approach generalizes several existing works proposed strategy used standard approachability cases desired target set approachable one wonders aim illustrate problems global costs introduced approachability sample path constraints described special case regret minimization mannor algorithm present require projection achilles heel many schemes similarly bernstein shimkin approach also strictly general ambitious one recently considered azar extensive comparison results bernstein shimkin azar offered section approachability unknown games outline article consists four parts equal lengths first define problem approachability unknown games link standard setting approachability section discuss reasonable target sets consider sections section shows means two examples expansion achieved convexification attained ambitious enough section introduces general class achievable ambitious enough targets sort convexification target set third part paper section exhibits concrete computationally efficient algorithms achieve goals discussed first part paper general strategy section amounts playing standard regret minimization blocks modifying direction needed performance merits studied detail respect literature mentioned bears resemblance approach developed abernethy last least fourth part paper revisits two important problems dedicated methods created dedicated articles written regret minimization global cost functions online learning sample path constraints section show general strategy stronger performance guarantees problems hoc strategies constructed literature setup unknown games notation aim setting one classical approachability vector payoffs considered difference lies aim classical approachability theory average obtained vector payoffs converge asymptotically target set known approachable based existence knowledge payoff function setting know whether approachable underlying payoff function ask convergence small possible setting unknown game vectors vector payoffs following game repeatedly played two players called respectively first player opponent second player vector payoffs considered first player finitely many actions whose set denote assume throughout paper avoid trivialities opponent chooses round vector vector payoffs impose restriction vectors lie convex bounded set first player picks round action possibly random according mixed action denote set mixed actions receives vector payoff also assume feedback gets see components one chose called bandit monitoring relaxed full monitoring explain remark assume first player knows bound maximal norm elements put differently scaling problem unknown mannor perchet stoltz terminology unknown game introduced machine learning literature see lugosi sections survey game unknown observe vector payoffs would received chosen different pure action bandit monitoring also even know underlying structure game structure exists section make latter point clear explaining classical setting approachability introduced blackwell particular case setting described payoff function exists therein knows strategy proposed blackwell crucially relies knowledge setting unknown even worse might even exist section section recall particular case approachability known minimization regret could dealt unknown games formulation approachability aim interested controlling average payoff ret wants approach small possible neighborhood given target set assume closed concept 
neighborhood could formulated terms general filtration see remark sake concreteness resort rather expansions base set denote formally denote closed sequel denotes distance set traditional literature approachability regret minimization consider smallest set would approachable hindsight averages vectors vector payoffs known advance whose components equal notion smallest set somewhat tricky first part article devoted discuss model consider following one fix target function takes argument section indicate reasonable choices associates aim ensure convergence ret definition classic approachability uniformity required respect strategies opponent construct strategies exists time strategies opponent probability least sup ret approachability unknown games remark general filtrations could considered expansions norm filtration mean instance one could considered shrinkages given compact set interior sake clarity simplicity restrict exposition concrete case expansions base set summary two sources unknowness become clearer concrete examples presented section structure game unknown might even exist first source unknownness also target unknown second source arises also known games following cases natural target target proven unachievable feasible target ambitious enough least approachable uniform expansion discussed section aim convex relaxations often considered manageable ambitious enough targets show improved upon general see paragraph discussion page details two sources unknownness concrete example global costs two classical relaxations mixed actions full monitoring present two extremely classical relaxations general setting described come cost simplify exposition general theory play mixed actions first martingale convergence results instance inequality controlling ret equivalent controlling averages conditionally expected payoffs indeed boundedness application said inequality ensure exists constant strategies opponent probability least wret given use inequalities replaced union bound entails choosing sufficiently large sup strategies opponent probability least sup wret therefore may focus instead ret sequel consider equivalently aim discussed mannor perchet stoltz enjoy full monitoring second assumption relaxed full monitoring least regularity assumptions uniform continuity target function indeed assumed gets observe choosing component however standard estimation techniques presented auer mertens sections provide accurate unbiased estimators whole vectors least case latter depends happened past opponent strategy choice action components estimators equal constraint mixed actions base decisions apply strategy eventually choose mixed action convex combination mixed action would freely chosen based weight uniform distribution weight indeed inequality averages vector payoffs vectors vector payoffs based respectively well corresponding average payoffs obtained differ something order probability least uniformly opponent strategies differences vanish rate order treatment similar one performed obtain also applied obtain statements uniformities respect time strategies opponent aim involves average payoffs via target function require uniform continuity technical reasons carry negligible differences average payoffs estimation approachability aim assumption uniform continuity easily dropped based result theorem details omitted conclusion approachability aim enjoying full monitoring construct strategy almost surely uniformly opponent strategies exists strategies opponent probability least sup however dependency 
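As a companion to the full-monitoring relaxation above, here is a minimal sketch of the standard importance-weighting estimation trick referenced there: under bandit monitoring only the payoff of the action actually played is observed, but dividing that observation by the probability of the played action yields an unbiased estimate of the whole vector of payoffs, provided every action keeps some minimal probability (enforced here by mixing with the uniform distribution). The constants and the payoff vectors are illustrative.

```python
import numpy as np

# Importance-weighted estimation of a full payoff vector from bandit feedback.
# Assumptions (illustrative): 3 actions, 2-dimensional payoffs, exploration
# weight gamma = 0.1 mixed with the uniform distribution.

rng = np.random.default_rng(0)
n_actions, d, T, gamma = 3, 2, 50000, 0.1

m_true = rng.uniform(-1.0, 1.0, size=(n_actions, d))   # one round's payoff vectors
p = np.array([0.7, 0.2, 0.1])                           # freely chosen mixed action
p_explore = (1 - gamma) * p + gamma / n_actions         # mixture actually played

est = np.zeros((n_actions, d))
for t in range(T):
    a = rng.choice(n_actions, p=p_explore)
    # only m_true[a] is observed; the importance-weighted estimate puts
    # m_true[a] / p_explore[a] on row a and zero elsewhere, hence is unbiased
    est += np.eye(n_actions)[a][:, None] * m_true[a] / p_explore[a]
est /= T

print("max estimation error:", float(np.abs(est - m_true).max()))
```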
still dealt cases see case regret minimization section section dependency present action comes additive term equal obtained payoff known approachability unknown games note often able provide stronger uniform deterministic controls form exists function strategies opponent conclude section point two relaxations considered come cost generality setting intended simplify clarify exposition full details standard reduction case bandit monitoring full monitoring omitted classical though lengthy technical expose link approachability known finite games link general setting classical setting considered blackwell therein opponent finite sets actions choose round respective pure actions possibly random according mixed actions payoff function given multilinearly extended according viewpoint game takes place opponent choosing round vector vector payoffs target set approached convergence ret hold uniformly opponent strategies course recalled equivalently require uniform convergence necessary sufficient condition closed convex exists course condition called dual condition approachability always met however view dual condition least approachable closed convex set given max min approaching corresponds considering constant target function better uniformly smaller choices target functions exist discussed section put correspondence therein called opportunistic mannor perchet stoltz knowledge crucial first strategy general strategies used approach approachable rely crucially knowledge indeed original strategy blackwell proceeds follows round first computes projection ret onto picks random according mixed action ret approachable mixed action always exists one take instance arg min max ret general strategy thus heavily depends knowledge approachable set target choice right still suitable approach indeed projection det ret onto ret det proportional ret thus arg min max ret arg min max ret det knowledge crucial second strategy strategies perform approachability known finite games though one described may popular one instance bernstein shimkin propose strategy based condition approachability still performs approachability optimal rate discuss greater details generalize case unknown games section describe shortly show heavily relies game known assume approachable round choose arbitrary mixed action draw choose arbitrary mixed action rounds assume mixed actions yet chosen addition pure actions actually played opponent corresponding mixed actions yes chosen well denoting strategy selects arg min max arg max min exists since well approachable thus crucial strategy knows however approachable essential case approachable approached instead suffices pick arg min yes yes suitable argument approachability unknown games link regret minimization unknown games problem regret minimization encompassed instance approachability sake completeness recall appendix knowledge payoff structure crucial specific problem course case general approachability problems two toy examples develop intuition examples presented serve guides determine suitable target functions target functions convergence guaranteed ambitious small enough sense made formal next section example minimize several costs time following example toy modeling case first player perform several tasks simultaneously incurs loss cost assume overall loss worst largest losses thus suffered simplicity enough purpose assume two actions opponent restricted pick convex combinations following vectors vector payoffs opponent actions thus indexed latter corresponds vector vectors base 
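Before turning to the examples, the following is a minimal sketch of Blackwell's projection-based strategy recalled above, for a known finite game with vector payoffs r(a, b) in R^d and a closed convex target set (taken here to be the negative orthant, whose Euclidean projection is a componentwise clip). The payoff tensor is a random illustrative choice and approachability of the set for it is not checked; the point of the sketch is to show where knowledge of the payoff function enters — in the projection direction and in the scalarised zero-sum game solved at each round — which is exactly what is unavailable in the unknown-games setting.

```python
import numpy as np
from scipy.optimize import linprog

# Blackwell's projection-based strategy for a KNOWN finite game (sketch).
# Assumptions (illustrative): 3x3 game, 2-dimensional payoffs in [-1, 1],
# target set C = negative orthant; approachability of C is not verified.

rng = np.random.default_rng(1)
nA, nB, d, T = 3, 3, 2, 500
r = rng.uniform(-1.0, 1.0, size=(nA, nB, d))

proj_C = lambda z: np.minimum(z, 0.0)        # projection onto the negative orthant

def blackwell_mixed_action(r_bar):
    """argmin_x max_b <r_bar - proj_C(r_bar), r(x, b) - proj_C(r_bar)>,
    solved as a zero-sum matrix game by linear programming."""
    u = r_bar - proj_C(r_bar)
    if np.linalg.norm(u) < 1e-12:
        return np.full(nA, 1.0 / nA)         # average already in C: anything works
    M = np.einsum("abd,d->ab", r - proj_C(r_bar), u)
    c = np.r_[np.zeros(nA), 1.0]             # variables (x, v), minimise v
    A_ub = np.c_[M.T, -np.ones(nB)]          # x^T M[:, b] <= v for every column b
    A_eq = np.r_[np.ones(nA), 0.0].reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(nB), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * nA + [(None, None)])
    x = np.clip(res.x[:nA], 0.0, None)
    return x / x.sum()

r_bar = np.zeros(d)
for t in range(1, T + 1):
    x = blackwell_mixed_action(r_bar)
    a, b = rng.choice(nA, p=x), rng.integers(nB)   # an arbitrary opponent
    r_bar += (r[a, b] - r_bar) / t
print("distance of the average payoff to C:",
      float(np.linalg.norm(np.maximum(r_bar, 0.0))))
```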
target set negative orthant supremum norm graphical representation expansions vectors provided figure example control absolute values example still two actions gets scalar rewards aim minimize absolute value average payoff control latter instance payoffs measure deviations either direction desired situation formally opponent chooses vectors assume actually lie product simply standard inner product consider base target set approached expansions min min mannor perchet stoltz last set equalities seen hold true contemplating figure figure graphical ofand different figurerepresentation graphical representation andexpansions left graph functions bold solid line dotted lines thin solid line smallest set hindsight achieved general denote function associates vector vector payoffs proof lemma assume contradiction combination achievableofinitsthe example index smallest cbycontaining convex consider strategy decision maker denote imagine first tim components nature chooses every stage othe vectors amounts playing min given average min equals image equals converge guaranteed average chosen converge infimum achieved byis continuity defines function large integer given exists asome possibly lemma examples convergence achieved strategies opponent consider second scenario first stages nature chooses vectors construction fixed ensured next stages proofs located appendix reveal difficulty assume nature chooses vectors denote average hold along whole path value change rapidly selected average played vectors corresponds whos average payoff vectors trt image equals therefore target set however definition formalize following proof scheme accommodate first situation lasts large number stages play given way opponent changes drastically situation repeated catch far target stage concave relaxation ambitious enough classical relaxation literature unachievable targets see mannor proceed consider concavifications convergence hold cav concavification latter defined least concave function next section show indeed always case illustrate examples goal ambitious enough proof lemma found appendix lemma examples mixed action play round ensure convergence target function uniformly smaller even strictly smaller points approachability unknown games general class ambitious enough target functions previous section showed examples target function ambitious goal concavification cav seemed ambitious enough section based intuition given formula concavification provide whole class achievable target functions relying parameter response function definition uniformity strategies opponent player mean uniform convergence stated right denote graph mapping definition continuous target function achievable decisionmaker strategy ensuring uniformly strategies opponent player generally possibly target function achievable approachable game payoff function uniformly strategies opponent player always entails without continuity condition however less restrictive general useful case target functions avoid lack convergence due errors early stages continuous target functions two definitions equivalent prove two facts section appendix defining equalities show function continuous even lipschitz function constant already showed section target function achievable general able compare target functions consider following definition notation definition target function strictly smaller another target function exists denote fact instance lemma target function cav always achievable show target function cav always achievable course section already showed cav 
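The following small numerical illustration makes the quantities behind the toy examples and the concave relaxation concrete. With two actions a mixed action is a single number q in [0, 1]; the map q -> m(q) below corresponds to the absolute-value example with illustrative per-action average payoffs 0.8 and -0.4; its minimum over q is the best-in-hindsight target shown above to be unachievable, and its least concave majorant cav[m] is the relaxation, computed here numerically as the upper hull of the graph.

```python
import numpy as np

# Concavification of a 1-D target function on the simplex of two actions.
# The specific function m below (absolute-value example with averages 0.8
# and -0.4) is an illustrative assumption.

def upper_concave_envelope(q, v):
    """Values of the least concave function above the points (q, v),
    evaluated on the same (increasing) grid q, via an upper-hull scan."""
    hull = []
    for x, y in zip(q, v):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop the middle point if it lies on or below the chord (x1,y1)-(x,y)
            if (y2 - y1) * (x - x1) <= (y - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append((x, y))
    hx, hy = map(np.array, zip(*hull))
    return np.interp(q, hx, hy)

q = np.linspace(0.0, 1.0, 401)
m = np.abs(0.8 * q - 0.4 * (1 - q))
cav_m = upper_concave_envelope(q, m)

print("best mixed action in hindsight:", float(q[m.argmin()]),
      " value:", float(m.min()))
print("m(1/2) =", float(m[200]), "  cav[m](1/2) =", float(cav_m[200]))
```

The strictly positive gap between m and cav[m] at interior points is exactly what the adversarial constructions above exploit; the concavified value, by contrast, can be guaranteed.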
ambitious enough examples exist achievable target functions cav however provide general study achievability cav sheds light achieve ambitious target functions ask convergence convex hull indeed convex hull exactly graph gcav cav mannor perchet stoltz concavification defined least concave function variational expression reads cav sup supremum finite convex decompositions elements belong factors nonnegative sum theorem fenchel bunt see theorem could actually impose general cav continuous however polytope lemma target function cav always achievable proof sketch known knows case compute cav graph gcav indicated definition suffices show convex set gcav approachable game payoffs play strategy approaching gcav note continuous thus closed set gcav closed convex set containing characterization approachability blackwell closed convex sets recalled already section states exist gcav definition even concludes proof proved lemma assumption knows restriction however ready consider indicated remark indeed needs know compute needed projections onto set implement blackwell approachability strategy approachability strategies may require knowledge generalized version one bernstein shimkin based dual condition approachability see section original version see section generalization anyway chose details least case convex lemma anyway follow lemmas theorem proved independently wherein knowledge even better prove strongest notion convergence definition irrespectively continuity lack continuity cav example ambitious target function rewrite cav sup indeed functions hand therein independent defined solutions optimization program depends specific approachability unknown games whenever convex function convex well see boyd vandenberghe example therefore denoting function defined sup cav two examples considered section actually show inequality strict points summarize facts lemma whose proof found appendix achievable special case lemma stated next subsection class generalizing form discussed lemma inequality cav always holds convex examples even cav general class achievable target functions class formulated generalizing definition call response function function replace specific response function response function definition target function based response function defined sup lemma response functions target functions achievable lemma actually follows theorem provides explicit efficient strategy achieve stronger sense irrespectively continuity lack continuity provide sketch proof additional assumption lipschitzness based calibration explains intuition behind also advocates functions reasonable targets resorting auxiliary calibrated strategy outputting accurate predictions sense calibration vectors almost amounts knowing advance knowledge get proof sketch lipschitz function show exists constant ensuring following given exists randomized strategy exists time strategies opponent probability least sup mannor perchet stoltz terms approachability theory see perchet survey means particular set thus set approachability approachability two equivalents notions fact sets hand closed convex sets approachable put differently achievable indeed fixing exists randomized strategy picking predictions among finitely many elements calibration score controlled exists time strategies opponent probability least sup foster vohra main strategy based auxiliary calibrated strategy play round average payoff thus decompose depending predictions made erage number times predicted average vectors vector payoffs obtained corresponding rounds equal 
otherwise take arbitrary value whenever particular using convex decomposition terms elements definition leads actually latter reference considers case calibrated predictions elements simplex clear method used mannor stoltz reduction problem approachability performed subsets compact sets desired uniformity opponent strategies see also mannor appendix result holds equivalence norms vector spaces finite dimension even original references considered approachability unknown games hence wrt denote max bound maximal element bounded set triangular equality shows max refers probability mass put indicated assume sketch proof lipschitz function lipschitz constant respect get max max max substituting proved max concludes proof thoughts optimality target functions previous subsections showed target functions form achievable unlike target function ambitious concavification cav question optimality raised question able answer general thoughts gathered appendix strategy regret minimization blocks section exhibit strategy achieve stronger notion convergence target functions advocated section irrespectively continuity lack continuity algorithm efficient long calls full discussion complexity issues provided application studied section mannor perchet stoltz description analysis strategy abernethy considered strategy see figure relies auxiliary strategy namely strategy following property assumption strategy sequentially outputs mixed actions ranges necessarily known advance necessarily known advance sequences vectors payoffs lying bounded interval possibly chosen online opponent player max note particular auxiliary strategy automatically adapts range payoffs number rounds sublinear guarantee adaptation needed unknown auxiliary strategies indeed exist instance polynomially weighted average forecaster lugosi ones possibly larger constant factor front term also exist instance exponentially weighted average strategies learning rates carefully tuned time described rooij sake elegance maybe cost providing intuitions led result provide figure version strategy need know time horizon advance used blocks increasing lengths simpler versions fixed block length would require tuning terms pick order optimize theoretical bound theorem response functions strategy figure strategies opponent exists ensuring wrt max max max maximal euclidean norm elements particular denoting constant strategies opponent max remark notation figure denoting addition largest integer mpart partial average vectors vector payoffs obtained last block arbitrary element otherwise take part part approachability unknown games parameters strategy initial action response function initialization play observe block blocks compute total discrepancy beginninga block till end block average vector vector payoffs obtained block run fresh instance rounds follows set play observe feed vector payoff components given denotes inner product obtain mixed action block starts round length thus lasts till round figure proposed strategy plays blocks increasing lengths important comments result strategy rely knowledge promised remark performance bound via max term also convexity required convergence rates independent ambient dimension concerning norms even strategy bound based euclidean norm set defined terms constant exists equivalence norms space finally note obtained uniformity requirement stated deterministic form function proof convergence follows bound via equivalence stated belongs latter set defined terms construction supremum thus suffices prove defined induction 
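The auxiliary strategy assumed above can be instantiated, for example, by the exponentially weighted average forecaster mentioned in the text; the following is a minimal sketch of such a scalar regret minimiser and a quick check of its sublinear-regret property. The main strategy runs a fresh instance of such a forecaster inside each block, feeding it the scalar payoffs obtained by taking inner products with the current discrepancy vector; that outer block bookkeeping follows the figure in the text and is only indicated here. Learning-rate constants are illustrative.

```python
import numpy as np

# Exponentially weighted average forecaster (Hedge) as a candidate auxiliary
# strategy: regret of order sqrt(T log N) against the best fixed action for
# bounded payoffs.  Constants in the learning rate are illustrative.

class Hedge:
    def __init__(self, n_actions, payoff_bound=1.0):
        self.cum = np.zeros(n_actions)
        self.bound = payoff_bound

    def mixed_action(self, t):
        eta = np.sqrt(8.0 * np.log(len(self.cum)) / max(t, 1)) / self.bound
        w = np.exp(eta * (self.cum - self.cum.max()))
        return w / w.sum()

    def update(self, payoffs):
        self.cum += payoffs          # one scalar payoff per action

# quick check of the sublinear-regret property on arbitrary bounded payoffs
rng = np.random.default_rng(2)
T, N = 5000, 4
alg, earned, totals = Hedge(N), 0.0, np.zeros(N)
for t in range(1, T + 1):
    p = alg.mixed_action(t)
    g = rng.uniform(-1.0, 1.0, N)    # could be chosen adversarially
    earned += float(p @ g)           # expected payoff of the mixed action
    totals += g
    alg.update(g)
print(f"regret {totals.max() - earned:.1f}   vs   "
      f"sqrt(T log N) ~ {np.sqrt(T * np.log(N)):.1f}")
```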
induction index blocks quantities control squared euclidean norms discrepancies end blocks recall mannor perchet stoltz denotes discrepancy end block difference two elements thus max use approach consider function defined analysis assume proved strategy sequences vectors vector payoffs possibly chosen opponent strategies opponent max instance define study guarantee max upper bound two squared norms respectively using inner product rewritten notation notation figure inequality indicates kmn max used induction hypothesis assumption therefore indicates quantity bounded max putting everything together proved induction holds provided defined instance max max lemma appendix taking max max thus get first max max approachability unknown games remains relate quantity hand separating time till end starting beginning block latter start strictly get mpart part part second sum contains elements regime incomplete triangular inequality thus shows krt max max max max used inequality ntp implication well sake readability bounds discussion section gather comments remarks pointers literature discuss particular links improvements concurrent independent works bernstein shimkin azar play blocks obtained rate optimal strategy proceeds blocks unlike ones exhibited case known games original strategy blackwell recent one bernstein shimkin see section strategy considered proof lemma also performed grouping according finitely many possible values predicted vectors vector payoffs target set approach unknown approaches sequence expansions set sizes expansions vary depending sequence realized averages vectors vector payoffs approachable target set given strategies blackwell bernstein shimkin need perform grouping actually easy prove following quantity involves grouping rounds minimized general krt mannor perchet stoltz indeed consider toy case scalar components negative orthant approached whose expansions given considering response function arg see boils controlling max contrast regret minimized severe issue really absolute value taken fact comparing payoff sum instantaneous maxima payoffs instead interesting maximum sums answer first question would yes play blocks given obtained rate optimal answer question positive considering toy case example bound given definition rewrites max max corresponds control called tracking regret shifts notion introduced helmbold see also lugosi chapter review results known tracking regret particular examples used therein show optimality bounds form one considered footnote adapted context thatpthe lower bound tracking regret shifts applies case order thus nutshell proved paragraphs ensure convergence controlling quantity form proceed blocks convergence hold faster rate however associated strategy computationally efficient also neither convexity continuity required yet stronger convergence achieved trading efficiency better rate interpretation different rates theorem shows set approachable namely set defined thus terminology spinat see also hou well remark blackwell therefore exists possibly computationally extremely inefficient strategy approaches indeed proof existence strategy rely constructive argument seen taking binary payoffs expectation regret larger positive constant realizations independent random variables identically distributed according symmetric bernoulli distribution particular regret larger constant sequence binary payoffs approachability unknown games based remarks may provide intuitive interpretation rate obtained theorem versus rate achieved either context abstract 
strategy mentioned right associated blackwell original strategy variations one bernstein shimkin classical case known games sets known approachable interpretation terms number significant costly computational units ncomp projections solutions convex linear programs etc performed strategies faster rate perform least one two units round strategy order times encompassed calls take place times cases rate proportional ncomp related framework azar setting considered therein exactly one described section works concurrent independent crucial differences lie however aims pursued nature results obtained quality strategy evaluated azar based lipschitz function notation theorem straightforward extension unknown horizon aim guarantee lim inf min max recall order azar mention convergence take place optimal rate satisfying recovering optimal rate actually direct consequence theorem assumptions indeed together lipschitz assumption entail lim inf implies image convex combination larger minimum images convex combinations thus yields particular lim inf min convergence rate thus order least defining response function arg max get however need underline aim extremely weak assume instance block nature chooses identical components min mannor perchet stoltz satisfied irrespectively algorithm contrary demanding aim consider necessarily satisfied appropriate used addition strategy designed azar still requires set vectors vector payoffs needs known severe restriction uses projections onto convex sets rate obtain weaker aim get improved aim links strategy bernstein shimkin final paragraph discussion theorem review strategy bernstein shimkin extend much extended setting close possible setting unknown games see figure extension however requires set possible vectors vector payoff known assumption would ready make parameters set response function initialization play arbitrary pick arbitrary rounds update discrepancy play mixed action arg min max arg max min compute figure generalization strategy bernstein shimkin theorem response functions strategy figure sequences vectors vector payoffs possibly chosen opponent player max obtained bound deterministic uniform strategies opponent bound theorem course control much weaker statement trying force convergence quantity towards set guarantee belongs seems difficult relate quantity set get convergence except special cases applications section underline limitation approachability unknown games one special cases set approachable null target function achievable assumption approachability translates general case existence response function advocated bernstein shimkin settings often computationally feasible access less costly performing projections onto nutshell strategy bernstein shimkin extended setting almost unknown games set needs known obtained convergence guarantees meaningful assumption approachability target set one two sources unknownness setting almost dealt fact underlying structure game unknown fact target unknown well proof theorem construction strategy hand proof performance bound also follow approach theorem however blocks needed proceed developing square euclidian norm relate one max show inner product immediate recurrence shows max concludes proof indeed von neumann minmax theorem using definitions min max min min particular min choosing entails used complete induction link classical approachability opportunistic approachability recall setting known finite games described section vectors vector payoffs actually correspond defines closed convex set set mannor perchet 
stoltz mixed actions opponent strategies considered therein relied response function defined arg min accessing value response function amounts solving convex program min done efficiently even reduces quadratic problem polytope algorithm based response function approaches set quantity defined required compute said quantity guarantee remark apply two strategies presented section blackwell strategy case strategy bernstein shimkin three algorithms ensure particular average payoffs asymptotically inside border set null positive indicates whether convex set approachable problem determining approachability set actually extremely difficult problem even determination approachability singleton set known games perform see mannor tsitsiklis see contradiction able approach able say note none algorithms discussed neither advance retrospect issue statement value happen perform approachability specific sequence actions chosen opponent determine minimal approachable set would suited sequences actions particular provide certificate whether given convex set approachable opportunistic approachability general known games one target function considered satisfies sequences vectors lead average payoff much closer uniform distance get pathwise refinement classical approachability put correspondence recent different notion opportunistic approachability see bernstein however quantifying exactly gain pathwise refinement would require much additional work maybe complete paper one mentioned explore issue applications section work two applications learning evaluated global cost functions approachability sample path constraints global cost functions problem introduced slightly generalized bernstein shimkin first extend setting unknown games describe approachability unknown games theorem guarantees case compare approach results ones two mentioned references keep original terminology global costs thus minimized switch global gains maximized substitution would straightforward description problem case unknown games denote kproj closed convex bounded set formed global cost function mapping kproj measuring quality vector kproj instance choice mixed action given vector vector payoffs evaluated performance average payoff equal regret controlled ensure latter quantity small well bernstein shimkin defined regret inf inf assuming continuous infimum defining equation achieved thus construct response function min actually proof techniques developed latter references see discussion ensure vanishing regret convexification vex concavification cav issue statements form lim sup vex cav additionally get convergence rates vex lipschitz function recall vex cav statements form much weaker original aim least convex concave natural case latter assumptions however satisfied including supremum norm upj max main contribution better notion regret directly bound whether convex similarly relax assumption concavity needed mentioned references tackle desired regret end propose notion regret better cases whether respectively convex concave precisely compare quantity based response function generalizes definition sup mannor perchet stoltz extended notion regret defined explain new definition always ambitious could guaranteed far literature namely indeed convex definition particular sup cav inequality stated strict instance indicated section convex global cost function indeed convex cav cav thus possibly cav stated lemma function also lipschitz function illustrates interest second part following corollary recall max denotes maximal euclidean norm elements 
corollary response functions continuous convex strategy figure ensures uniformly strategies opponent lim sup addition lipschitz function constant kproj precisely max proof apply theorem use notation function continuous thus uniformly continuous compact set kproj thus wrt entails convergences toward uniform strategies opponent definition convex combination elements form concludes first part corollary second part proved manner simply taking account bound fact lipschitz function discussion indicated general section offered two extensions setting global costs first explained deal unknown games second indicated aim given natural target necessarily approachable sharper targets ones traditionally considered reached second contribution perhaps important one indeed natural target corresponds ensuring following convergence set approachability unknown games target set necessarily closed convex approachable set convex hull proved bernstein shimkin convex hull exactly equal vex cav replace convergence convex hull convergence smaller set convergence ensured continuity set smaller follows discussion corollary use directly blackwell approachability strategy approach requires computation projections onto possibly computationally delicate task thus focus bernstein shimkin proceed explain obtained guarantee convergence easily improved strategy apply theorem lifted space payoffs namely associate defined component contains corresponding component well vector particular pick response function corresponding base response function defined convergence reads definition thus convex combination belongs convergence achieved additional regularity assumptions continuity vex cav stronger convergence holds seen adapting arguments used second part section however limitations approach bernstein shimkin twofold first already underline section sets equivalently need known strategy thus game fully unknown second control lie therefore reasonable hope refine convergence convergence set smaller defined terms approach mannor perchet stoltz approachability sample path constraints generalize setting regret minimization known finite games sample path constraints introduced mannor studied bernstein shimkin straightforward enough generalization twofold deal approachability rather regret consider unknown games description problem case unknown games vector kproj represents payoff also cost aim player control average payoff vector converge smallest expansion given closed convex target set abiding cost constraints ensuring average cost vector converges prescribed closed convex set formally two matrices respective sizes associate vector kproj payoff vector gma cost vector cma instance chooses mixed action vector vector payoffs gets instantaneous payoff suffers instantaneous cost admissible costs represented closed convex set closed convex payoff set approached question particular aim target unknown following general aim generalizing aims mannor bernstein shimkin assume wants following convergences take place uniformly strategies opponent grt crt target function defined small possible wants control average payoff grt well ensuring asymptotically average cost crt lies set admissible costs make problem meaningful original references assume cost constraint feasible assumption exists general result theorem states consider mostly following response function arg min provides response defining minimum indeed achieved continuity closed sets since addition convex defining equation convex optimization problem convex constraint solved efficiently 
course general preferably also response functions considered response function mean response function property indeed satisfied approachability unknown games adapt definition target function based response function consider payoffs sup discussion explain goals ambitious aims targeted original references essentially consisted shooting cav restricted cases ones corollary response functions strategy figure ensures strategies opponent grt max crt max respectively norm respectively seen linear function equipped respectively equipped particular aim achieved proof apply theorem use notation definition wcrt cct max assumed view form cct thus proved crt max similar argument based fact gct definition yields stated bound grt extension earlier results theorem yields indicated several times already mannor bernstein shimkin considered case regret minimization special case approachability linear form interval form bound values taken discuss special case strategies considered mannor efficient relied able project complicated sets resorted calibrated auxiliary strategies unlike one studied bernstein shimkin thus focus latter necessarily convex target set considered therein mannor perchet stoltz defined convex linear function convex see boyd vandenberghe example convex hull thus equals cav able compare merits strategy bernstein shimkin corollary first extend case unknown games based theorem end consider lifting apply similarly theorem get well response function using case definition convergence rewrites entails convergence particular crt additional regularity assumption continuity cav also get adapting arguments used second part section stronger convergence lim sup grt cav lim sup grt pcav summarizing convergence guaranteed cav inspection arguments shows cav actually uniformly continuous desired uniformity strategies opponent achieved limitations approach mentioned end previous section arise far concepts unknown game unknown target concerned first set needs known strategy game fully unknown second control lie therefore reasonable hope refine convergence cav convergence smaller target function contrast corollary provided refinement convexity smaller possibly strictly smaller cav adapt lemma prove strict inequality note known games however mannor section exhibit class cases cav optimal target function known games scalar payoffs scalar constraints set constraints form amounts minimizing constrained regret thus briefly indicate known games context defined mannor bernstein shimkin linear scalar payoff function approachability unknown games linear cost function given loss generality assume payoff function takes values bounded nonnegative interval set general formulation corresponds vectors describes matrices extract respectively first component first component regret considered payoff set approached given constraints expansions distance equals context convergences form thus read lim inf thus correspond constrained problems indeed denoting empirical frequency actions taken opponent recalling bounded instance max convergence finally reads lim inf showed section general target function achievable mannor section showed constrained regret respect defined minimized proposed relaxation consider convexification vex instead corresponds cav specific setting target function equals cav general theory provides improvement line optimality result cav exhibited mannor section case mannor perchet stoltz approachability approachable set minimal cost dual problem previous problem payoffs approach approachable convex set suffering 
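For concreteness, here is a minimal sketch of a constrained response function of the kind used for the sample-path constraints application: given per-action reward and cost estimates, it returns a mixed action whose expected cost meets the constraint while its expected reward is as close as possible to a target level. Scalar rewards and costs, the target level and the budget are illustrative assumptions; the text only requires the admissible cost set and the payoff target set to be closed and convex, which keeps this a convex programme (here a small linear programme).

```python
import numpy as np
from scipy.optimize import linprog

# Constrained response: minimise the distance of the expected reward to the
# target {r >= r_target} subject to the expected cost staying within budget.
# All numbers (rewards, costs, r_target, budget) are illustrative.

rewards = np.array([1.0, 0.6, 0.2])
costs   = np.array([0.9, 0.4, 0.1])
r_target, budget = 0.8, 0.5

nA = len(rewards)
# variables (p_1, ..., p_nA, s): minimise s >= max(0, r_target - rewards @ p)
c_obj = np.r_[np.zeros(nA), 1.0]
A_ub = np.vstack([np.r_[-rewards, -1.0],   # r_target - rewards @ p <= s
                  np.r_[costs, 0.0]])      # costs @ p <= budget
b_ub = np.array([-r_target, budget])
A_eq = np.r_[np.ones(nA), 0.0].reshape(1, -1)
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0.0, None)] * (nA + 1))
p = res.x[:nA]
print("response:", np.round(p, 3),
      " expected reward:", round(float(rewards @ p), 3),
      " expected cost:", round(float(costs @ p), 3))
```

The response function for the global-cost application has the same flavour: one minimises the global cost of the mixture of the observed per-action payoff vectors over the simplex, which is again an efficiently solvable convex programme whenever the cost is convex.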
costs trying control overall cost case set fixed terms set constraints actually problem symmetric previous one roles exchanged acknowledgments vianney perchet acknowledges funding anr grants shie mannor partially supported isf contract gilles stoltz would like thank investissements avenir labex ecodec financial support extended abstract article appeared proceedings annual conference learning theory colt jmlr workshop conference proceedings volume pages approachability unknown games references abernethy bartlett hazan blackwell approachability learning equivalent proceedings colt pages auer freund schapire nonstochastic multiarmed bandit problem siam journal computing azar feige feldman tennenholtz sequential decision making vector outcomes proceedings itcs bernstein shimkin approachability applications generalized problems journal machine learning research apr bernstein mannor shimkin opportunistic strategies generalized problems proceedings colt pages blackwell analog minimax theorem vector payoffs pacific journal mathematics boyd vandenberghe convex optimization cambridge university press cambridge lugosi algorithms prediction game theory machine learning lugosi prediction learning games cambridge university press mansour stoltz improved bounds prediction expert advice machine learning rooij van erven koolen follow leader hedge must journal machine learning research apr kleinberg mannor mansour online learning global cost functions proceedings colt foster vohra asymptotic calibration biometrika helmbold tracking best expert machine learning fundamentals convex analysis hou approachability game annals mathematical statistics mannor stoltz geometric proof calibration mathematics operations research mannor perchet stoltz mannor tsitsiklis approachability repeated games computational aspects stackelberg variant games economic behavior mannor tsitsiklis online learning sample path constraints journal machine learning research mannor perchet stoltz approachability online learning partial monitoring journal machine learning research oct mertens sorin zamir repeated games core discussion papers belgium perchet approachability regret calibration implications equivalences journal dynamics games spinat necessary sufficient condition approachability mathematics operations research approachability unknown games appendix link regret minimization unknown games problem regret minimization encompassed instance approachability recall knowledge payoff structure crucial specific problem course case general approachability problems indeed notation section aim regret minimization known finite game payoff function ensure lim sup max guaranteed approaching vector payoff function defined necessary sufficient condition approachability closed convex set satisfied condition rewrites case ret ret ret ret denote respectively vectors formed taking nonnegative parts original components vector interest using specific form see ret ret ret either components ret ret already choose mixed distribution defined ret latter case get ret particular satisfied ret knowledge crucial comments made specific choice independent payoff structure depends past payoff vectors particular strategy minimize regret generalized straightforward way case games full monitoring whose payoff structure unknown games round opponent chooses payoff vector mannor perchet stoltz chooses action observes entire vector wanting ensure regret vanishes lim sup max suffices replace occurrences particular payoff function defined replaced vectors vector payoffs 
whose components equal note bandit monitoring case unknown games case unknown game payoff structure unknown bandit monitoring available generic trick presented around adapted indicated footnote indeed feedback available end round estimation performed rather vectors gbt constraints define gbt gbt substituting estimates gbt strategy defined around lieu vectors ensures regret vanishes approachability unknown games appendix calculations associated examples proof lemma proof example assume contradiction convergence achieved consider strategy decision maker denote suffices consider almost sure convergence stronger uniformity requirements stated invoked statements sequel hold almost surely quantities like thought random variables imagine first time opponent chooses every stage vectors smallest supremum norms aim average payoffs converge guaranteed averages chosen mixed actions converge given exists possibly large integer consider second scenario first stages opponent chooses vectors construction strategy fixed ensured next stages assume opponent chooses vectors denote average first components mixed actions selected second set stages components therefore target set however definition therefore entails construction repeated stage choosing till stage reached stage exists assumption convergence achieved strategy one similarly see mannor perchet stoltz repeating one proves lim sup contradicts assumption ensures convergence claim follows proof example sketch construction previous example holds switching first regime chosen end average payoff close null another regime length starts matter get average payoff regime total end second regime target set given repeated proof lemma proof example cav prove fact first compute components equal therefore min max max note cav identically equal set defined convex hull smaller target functions convergence holds considered proves particular also guaranteed larger cav indeed max smaller cav even strictly smaller see figure addition convergence hold indeed plays round always picks first component average payoff equals definition distance last set equalities seen hold true contemplating figure approachability unknown games figure graphs functions bold solid line dotted line figure graphical representation different expansions left graphs functions bold solid dotted lines even theline supremum norm precisely therefore thin casesolid line proves particular convergence proof lemma assume contradiction achievable holds consider strategy decision maker denote imagine first time nature chooses every vectors amounts playing seemingly proof stage example computations involved simpler given average equals image equals aim example start computing refer vectors chosen converge opponent guaranteed chosen converges bytothe asonly theaverage mixed actions picked given exists possibly large integer value convex combination minimized achieved stages nature chooses first consider second scenario vectors next stages construction fixed ensured assume nature chooses vectors denote average selected average played corresponds whose min image equals therefore thew target set however definition concavification thus admits expression cav replace lengthy expression graphical illustrations proof provided figure consider target function defined denote playing round ensures thus example chosen freely mannor stoltz shows cav admittedly picture would help provide one figure cav cav left right figure concavification cav right representations left concavification cav figure figure representations 
ofrepresentations figure representations left alternative target function center concavification cav right improved indeed conclude discussion example construction showing thatlemma even proof previous sameexample construction previous proof lemma holds asbyforswitching example holds switching somehow unwas constructedtween choosing response function tween regimes mend isthe chosen andpayoff end average payoff regimes chosen andwhen average mind response function main algorithm shows chievable target function close null another regime oft length close null another regime length starts convergence ofholds also forstarts cav since latter larger reply awhat localthe average vectors payoffs reasonable quantities matter whatwill theget getregime average payoff regime total matter vector average payoff total strictly points indeed follows equals hould targeted rtquite surprisingly relaxing rtbelow image equals fact whilelarger theillustrate target image inequality byis expectations cav target lso results better payoffs instance decision maker knows advance repeated repeated thus next vectors worry getting max laying could well satisfied thus play proof second consider achievthe second part lemma lemma achievturns theproof argument performed consider part inequality strict seen illustration target function indeed suffices play able target function indeed able suffices cav play atiseach round inequality round inequality follows fact max follows cav cav fact max roof second lemma thefigure function already considered thiswe seen conclusion inequality strict seen inequality strict conclusion cav cav uch seen picture illustrated figures illustrated figures could signvalues absolute values proof illustrates whatthe weproof call illustrates sign thecall absolute proof lemma convex much smallerofthan convex combinations convex combinations considered combinations much considered smaller convex combinations indeed absolute values elements expression cav absolute values elements considered expression considered cav indeed sign proof example proofofinbelow illustrates wecanthe could call seenfor example given seen example form given allsets form sets sup vii wii cav sup xcav cav sup proof second part lemma according cand given form sets proof second part lemma according given form thetosets target function defined astarget function defined sup sup vii wiin vii wii tedious case study consisting identifying worst convex decompositions one gets explicit expression approachability unknown games compared expression obtained earlier cav namely cav admittedly picture would help provide one figure see picture cav figure representations figure left concavification cav figure representations representations left center representation right cav proof lemma construction previous example holds switching bex cav even cav considering direct calculations tween regimes chosen end average payoff respective values null another regime length starts matter get average payoff regime total target image equals proof example prove thatofthe result follow already repeated inequality cav proved section indeed seen computations leading expression proof second part lemma consider able target function indeed suffices play round inequality cav follows max inequality strict seen conclusion cav therefore illustrated proof illustrates could call sign absolute values wecombinations inequality convex considered much smaller convex combinations absolute values elements considered expression cav indeed given form seen example entails substicav 
supagain vxi tuting using supremum distance negative orthant increasing respect inequalities proof second part lemma according given form sets max target function sup mannor perchet stoltz get sup sup converse inequality follows decomposition convex combination weight weight particular indicated approachability unknown games appendix technical proofs proof two facts related definition comments definition mentioned two facts prove first condition less restrictive general second continuous target functions two definitions coincide entails condition less restrictive general target functions need considered end consider toy case reduced one element decision make play action opponent player chooses elements precisely target set equals target function defined since identify consider sequence since contrary therefore converge sequence converges actually proof entails continuity assumption consider continuous function show entails suffices show exists function required uniformities respect strategies opponent carried end continuity exploited following two properties first closed second since bounded actually uniformly continuous denote modulus continuity function satisfies denote projection onto closed set definition also define element follows let otherwise exists element krg recall closed set expansions closed expansions denote vector mannor perchet stoltz construction kdkp introduce new point provide rewriting two equalities yield krg kdkp summarizing cases whether belongs krg since get triangle inequality krg last inequality follows uniform continuity last term side display bounded krg kmg last inequality used fact putting pieces together proved lemma used proof theorem lemma consider two positive numbers form positive sequence defined max proof proceed induction note relation satisfied construction assuming holds show also true denoting max get suffices show latter upper bound smaller follows indeed first inequality comes bounding expanding term second inequality holds definition approachability unknown games appendix thoughts optimality target functions first define notion optimality based classical theory mathematical orderings see definition seen strict partial order associated partial order denoted corresponding standard pointwise inequality functions existence admissible target functions definition target function admissible achievable exists achievable target function might exist several even infinite number admissible target functions show example exists always least one admissible function show way unfortunately unable exhibit general concrete admissible target functions lemma unknown game exists least one admissible mapping proof proof based application zorn lemma prove set achievable target functions partially ordered property every totally ordered subset lower bound case zorn lemma ensures set contains least one minimal element element element satisfies given totally ordered subset define target function inf course smaller element point show still achievable property use repeatedly two target functions definition fact achievable means compact sets approachable game payoffs particular non empty compact set empty indeed fixing would subsets cover compact topological space subsets open sets topological space finitely many would needed covering call since totally ordered one sets minimal inclusion therefore one sets maximal inclusion say one corresponding therefore would totally ordered would either would lead former case latter case cases contradiction mannor perchet stoltz addition 
prove exists included open denote indeed denote compact sets therefore argument see must exist exactly wanted prove summarizing proved non empty expansion approachable contains approachable set proof lemma means set thus approachable set put differently achievable illustration examples response function choose practice target functions always admissible convenient natural choice practice example shows unfortunately always admissible example shows many different target functions may admissible thus difficult issue general theory choose even optimality class target functions example unfortunately admissible indeed seen carefully comparing expressions hand achievable suffices play round actually form constant response function example target functions associated admissible illustrate general existence result lemma showing example target functions associated constant response functions admissible corresponds case chooses mixed action rounds particular proof lemma indicates latter thus admissible unlike example expressions target functions needed plays vector vector payoffs gets average payoff denote equals underlying response function constant convex decomposition needs considered defining supremum latter equals max approachability unknown games since decreasing increasing functions take value get proof follows methodology used prove lemma fix strategy achieving target function fixed show necessarily provide detailed proof equality lies interval proof adapted straightforward manner prove equality well intervals proof lemma suffices consider almost sure statement convergence uniformity respect strategies opponent needed statements hold almost surely times thought random variables argument based three sequences mixed actions first one assume opponent chooses corresponding stages made arbitrarily large denote average mixed actions played rounds average payoff vector received equals whose distance negative orthant since strategy achieves assumption holds lim sup sake compactness denote fact entails lim inf fact denote next stages assume opponent chooses corresponds denote average mixed actions played rounds average payoff vectors received rounds one hand rounds hand therefore respectively equal distance latter negative orthant given max know asymptotically smaller achievability assumption thus obtained following system equations mannor perchet stoltz sum last two inequalities together first inequality leads substituting second inequality get symbol denotes convergence summing proved limits yields thus latter limit finally get consider show end assume stages opponent switches instead rounds note case average values coefficients used first rounds proportional played perform first auxiliary calculations multiplying equalities see total number rounds equals particular finally denoting average mixed action played rounds average vector payoffs rounds rounds respectively equal overall average payoff given distance vector supremum norm negative orthant must smaller limit achievability however said distance negative orthant bound larger second component equals substituted limit thus proved claimed | 10 |
journal machine learning research manuscript review submitted published expected policy gradients reinforcement learning kamil ciosek shimon whiteson jan department computer science university oxford wolfson building parks road oxford editor david blei bernhard abstract propose expected policy gradients epg unify stochastic policy gradients spg deterministic policy gradients dpg reinforcement learning inspired expected sarsa epg integrates sums across actions estimating gradient instead relying action sampled trajectory continuous action spaces first derive practical result gaussian policies quadric critics extend analytical method universal case covering broad class actors critics including gaussian exponential families reparameterised policies bounded support gaussian policies show optimal explore using covariance proportional scaled hessian critic respect actions epg also provides general framework reasoning policy gradient methods use establish new general policy gradient theorem stochastic deterministic policy gradient theorems special cases furthermore prove epg reduces variance gradient estimates without requiring deterministic policies little computational overhead finally show epg outperforms existing approaches six challenging domains involving simulated control physical systems keywords policy gradients exploration bounded actions reinforcement learning markov decision process mdp introduction reinforcement learning agent aims learn optimal behaviour policy trajectories sampled environment settings feasible explicitly represent policy policy gradient methods sutton peters schaal silver optimise policies gradient ascent enjoyed great success especially large continuous action spaces archetypal algorithm optimises actor policy following policy gradient estimated using critic value function policy stochastic deterministic yielding stochastic policy gradients spg sutton deterministic policy gradients dpg silver theory underpinning methods quite fragmented approach separate policy gradient theorem guaranteeing policy gradient unbiased certain conditions furthermore approaches significant shortcomings spg variance gradient estimates means many trajectories usually needed learning since gathering trajectories typically expensive great need sample efficient methods kamil ciosek shimon whiteson license see https attribution requirements provided http ciosek whiteson dpg use deterministic policies mitigates problem variance gradient raises difficulties theoretical support dpg limited since assumes critic approximates practice approximates instead addition dpg learns undesirable want learning take cost exploration account importantly learning necessitates designing suitable exploration policy difficult practice fact efficient exploration dpg open problem applications simply use independent gaussian noise heuristic uhlenbeck ornstein lillicrap article extends previous work ciosek whiteson proposes new approach called expected policy gradients epg unifies policy gradients way yields theoretical practical insights inspired expected sarsa sutton barto van seijen main idea integrate across action selected stochastic policy estimating gradient instead relying action selected sampled trajectory contributions paper threefold first epg enables two general theoretical contributions section new general policy gradient theorem stochastic deterministic policy gradient theorems special cases proof section epg reduces variance gradient estimates without requiring deterministic policies gaussian case 
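As a small concrete companion to the definitions just given, the following sketch evaluates the value functions of a policy exactly in a tiny finite MDP, where V solves a linear system and Q follows by one application of the Bellman operator; the advantage function defined in the next passage is then Q minus V. The transition kernel, rewards and policy are illustrative.

```python
import numpy as np

# Exact policy evaluation in a 2-state, 2-action MDP (illustrative numbers):
# V = r_pi + gamma * P_pi V  and  Q(s, a) = r(s, a) + gamma * sum_s' P(s'|s,a) V(s').

gamma = 0.9
P = np.array([[[0.9, 0.1], [0.2, 0.8]],     # P[s, a, s']
              [[0.5, 0.5], [0.1, 0.9]]])
r = np.array([[1.0, 0.0],                    # r[s, a]
              [0.0, 2.0]])
pi = np.array([[0.7, 0.3],                   # pi[s, a]
               [0.4, 0.6]])

P_pi = np.einsum("sa,sax->sx", pi, P)        # state-to-state kernel under pi
r_pi = np.einsum("sa,sa->s", pi, r)          # expected one-step reward under pi
V = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)
Q = r + gamma * np.einsum("sax,x->sa", P, V)
A = Q - V[:, None]                            # advantage function

print("V:", np.round(V, 3))
print("Q:\n", np.round(Q, 3))
print("A:\n", np.round(A, 3))
```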
computational overhead spg second define practical policy gradient methods gaussian case section epg solution analytically tractable also leads principled exploration strategy section continuous problems exploration covariance proportional scaled hessian critic respect actions present empirical results section confirming new approach exploration substantially outperforms dpg exploration six challenging mujoco domains third provide way deriving tractable epg methods general case policies coming certain exponential family section critics reparameterised polynomials thus yielding analytic epg solutions tractable broad class problems essentially making epg universal method finally section relate epg approaches background markov decision process puterman tuple set states set actions practice either finite reward function transition kernel initial state distribution discount factor policy distribution actions given state denote trajectories sample reward policy induces markov process transition kernel use symbol denote lebesgue integration measure fixed assume induced markov process ergodic single invariant measure defined whole state space value function actions sampled show article certain settings dpg equivalent epg method expected policy gradients advantage function ris optimal policy maximises total return since consider learning one current policy drop redundant parameterised stochastic policy gradients spg sutton peters schaal perform gradient ascent gradient respect gradients without subscript always respect stochastic policies log occupancy measure defined appendix baseline function depends state action since log typically ergodicity lemma see appendix approximate samples trajectory length log critic discussed policy deterministic denote use deterministic policy gradients silver instead update approximated using samples since policy deterministic problem exploration addressed using external source noise typically modelled using process uhlenbeck ornstein lillicrap parameterised critic approximates learned sarsa rummery niranjan sutton alternatively use expected sarsa sutton barto van seijen marginalises distribution specified known policy reduce variance update could also use advantage learning baird lstdq lagoudakis parr critic function approximator compatible actor converges sutton ciosek whiteson instead learning set use error estimate bhatnagar log approximate value function learned using policy evaluation algorithm works error unbiased estimate advantage function benefit approach sometimes easier approximate return error unprojected distorted function approximation however error noisy introducing variance gradient cope variance reduce learning rate variance gradient would otherwise explode using adam kingma natural policy gradients kakade amari peters schaal newton method furmston barber however results slow learning variance high see section discussion variance reduction techniques expected policy gradients section propose expected policy gradients epg first introduce denote inner integral log log suggests new way write approximate using lemma see appendix log approach makes explicit one step estimating gradient evaluate integral included term main insight behind epg given state expressed fully terms known quantities hence manipulate analytically obtain formula compute integral using numerical quadrature analytical solution impossible section show rare discrete action space becomes sum actions idea behind epg also independently concurrently developed mean actor critic asadi though 
discrete actions without supporting theoretical analysis expected policy gradients spg given performs quadrature using simple monte carlo method follows using action log log moreover spg assumes action used estimation action executed environment however relying method unnecessary fact actions used interact environment need used evaluation since bound variable definition motivation thus similar expected sarsa applied actor gradient estimate instead critic update rule epg shown algorithm uses form policy gradient algorithm repeatedly estimates integration subroutine algorithm expected policy gradients initialise optimiser initialise policy parameterised converged estimated policy gradient per end one motivations dpg precisely simple quadrature implicitly used spg often yields high variance gradient estimates even good baseline see consider figure left simple monte carlo method evaluates integral sampling one times blue evaluating log red function baseline decrease variance adding multiple log red curve problem remains red curve high values blue curve almost zero consequently substantial variance persists whatever baseline even simple linear shown figure right dpg addressed problem deterministic policies epg extends stochastic ones show section analytical epg solution thus corresponding reduction variance possible wide array critics also discuss rare case numerical quadrature necessary section provide general results apply epg setting general policy gradient theorem begin stating general result showing epg seen generalisation spg dpg first state new general policy gradient theorem ciosek whiteson spg update variance policy pdf action baseline figure left gaussian policy mean given state constant blue log red right variance simple monte carlo estimator function baseline simple monte carlo method variance would number samples theorem general policy gradient theorem normalised lebesgue measure proof begin expanding following expression first equality follows expanding definition penultimate one follows lemma appendix theorem follows rearranging terms crucial benefit theorem works policies stochastic deterministic unifying previously separate derivations two settings show following two corollaries use theorem recover stochastic policy gradient theorem sutton deterministic policy gradient theorem silver case introducing additional assumptions obtain formula expressible terms known quantities corollary stochastic policy gradient theorem differentiable log expected policy gradients proof obtain following expanding obtain log plugging definition obtain invoking theorem plugging expression recover dpg update introduced corollary deterministic policy gradient theorem measure deterministic policy differentiable overload notation slightly denote action taken state corresponding measure proof begin expanding term useful later results applying multivariate chain depend policy parameters hence dependency appears twice proceed obtain expression second equality follows observing policy third one follows using obtain invoking theorem plugging expression corollaries show choice deterministic stochastic policy gradients fundamentally choice quadrature method hence empirical success dpg relative spg silver lillicrap understood new light particular attributed fundamental limitation stochastic policies indeed stochastic policies sometimes preferred instead superior quadrature dpg integrates measures known easy spg typically relies simple monte carlo integration thanks epg deterministic approach longer required 
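To make the preceding discussion of the inner integral concrete, the following is a minimal sketch (not the paper's implementation) of an expected-policy-gradient step for a one-dimensional Gaussian policy: the per-state gradient contribution is obtained by numerically integrating the product of the policy gradient and the critic over actions, rather than from a single sampled action, and the sampled action is used only to act in the environment and to update the critic. All names (`env`, `critic`, the quadrature bounds, learning rates) are illustrative assumptions rather than quantities from the paper.

```python
# Minimal sketch of an EPG-style update for a 1-D Gaussian policy with a
# learnable mean.  Hypothetical interfaces: env.reset()/env.step(a),
# critic.q(s, a), critic.update(...).
import numpy as np
from scipy import integrate, stats

def inner_integral(mu, sigma, q, s, width=8.0):
    """Quadrature for I(s) = int grad_mu pi(a|s) * Q_hat(s, a) da."""
    def integrand(a):
        pdf = stats.norm.pdf(a, loc=mu, scale=sigma)
        dlogpi_dmu = (a - mu) / sigma**2          # grad_mu log N(a; mu, sigma^2)
        return pdf * dlogpi_dmu * q(s, a)
    val, _ = integrate.quad(integrand, mu - width * sigma, mu + width * sigma)
    return val

def epg_episode(env, critic, mu, sigma=0.5, actor_lr=1e-2, gamma=0.99):
    s, discount, done = env.reset(), 1.0, False
    while not done:
        g = inner_integral(mu, sigma, critic.q, s)   # uses only known quantities
        mu += actor_lr * discount * g                # actor step (gradient ascent)
        a = np.random.normal(mu, sigma)              # action used only to act
        s_next, r, done = env.step(a)
        critic.update(s, a, r, s_next, done)         # e.g. SARSA / expected SARSA
        s, discount = s_next, discount * gamma
    return mu
```

For discrete actions the integral becomes a sum over the action set, as noted in the text.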
obtain method low variance variance analysis prove policy epg estimator lower variance spg estimator ciosek whiteson lemma random variable log nonzero variance log proof random variables mean need show log start applying lemma lefthand side setting log shows log total return mrp likewise applying lemma instantiating deterministic random variable total return mrp note therefore furthermore assumption lemma inequality strict lemma follows applying observation convenience lemma also assumes infinite length trajectories however practical limitation since policy gradient methods implicitly assume trajectories long enough modelled infinite furthermore finite trajectory variant also holds though proof messier lemma assumption reasonable since way random variable log could zero variance actions policy support except sets measure zero case optimising policy would unnecessary since know estimators unbiased estimator lower variance lower mse moreover observe lemma holds case computation exact section shows often possible expected policy gradients gaussian policies epg particularly useful make common assumption gaussian policy perform integration analytically reasonable conditions show expected policy gradients algorithm gaussian policy gradients initialise optimiser converged policy parameters updated using gradient computed scratch end algorithm gaussian integrals function use lemma return end function function return ech end function use lemma see corollary update policy mean computed epg equivalent dpg update moreover derive simple formula covariance see lemma algorithms show resulting special case epg call gaussian policy gradients gpg surprisingly gpg nonetheless fully equivalent dpg method particular form exploration hence gpg specifying policy covariance seen derivation exploration strategy dpg way gpg addresses important open question show section leads improved performance practice computational cost gpg small must store hessian matrix size typically small one mujoco tasks use experiments section hessian size policy covariance matrix policy gradient must store anyway confused hessian respect parameters neural network used newton natural gradient methods peters schaal furmston easily thousands entries hence gpg obtains epg variance reduction essentially free ciosek whiteson analytical quadrature gaussian policies derive lemma supporting gpg lemma gaussian policy gradients policy gaussian parameterised critic form const mean covariance components given proof ease presentation prove lemma action space standard deviation drop suffix subsequent formulae first note constant term critic influence value since depends state action treated baseline observe log log log consider linear term quadric term separately linear term log quadric term log summing two terms yields expected policy gradients calculate integrals standard deviation beginning linear term log quadric term log summing two terms yields multivariate case action space obtained using method developed section later paper observing multivariate normal distribution parametric family given sufficient statistic vector containing vector vectorised matrix polynomial hence lemma applicable lemma requires critic quadric actions assumption restrictive since coefficients arbitrary continuous functions state neural network exploration using hessian equation suggests include covariance actor network learn along mean using update rule however another option compute covariance scratch iteration analytically computing result applying infinitely many times 
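Where the Gaussian case above assumes a critic that is quadric in the action, Q_hat(s, a) = a^T A(s) a + a^T B(s) + const, the two quantities the method needs follow from elementary calculus: the action-gradient at the policy mean (the direction in which the mean is moved, which is why the text can say the mean update coincides with DPG) and the action-Hessian (which drives the exploration covariance discussed next). The coefficients below are hypothetical placeholders, not values from the paper.

```python
# Hedged sketch: closed-form pieces of a quadric critic
#   Q_hat(s, a) = a^T A(s) a + a^T B(s) + const,  with A symmetric.
# grad_a Q_hat = 2 A a + B  and  Hessian_a Q_hat = 2 A  (plain calculus).
import numpy as np

def quadric_critic_terms(A, B, mu):
    grad_at_mean = 2.0 * A @ mu + B    # direction used to update the policy mean
    hessian = 2.0 * A                  # used below to set the exploration covariance
    return grad_at_mean, hessian

A = np.array([[-1.0, 0.2], [0.2, -0.5]])   # hypothetical state-dependent coefficients
B = np.array([0.3, -0.2])
grad, H = quadric_critic_terms(A, B, mu=np.zeros(2))
```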
following lemma lemma exploration limit iterative procedure defined applied times using diminishing learning rate converges ciosek whiteson eigenvalue increases sharp maximum little exploration sharp minimum lots exploration moderate exploration figure parabolas show different possible curvatures critic set exploration strongest sharp mimima left side figure exploration strength decreases move towards right almost exploration far right sharp maximum proof consider sequence diagonalise hessian orthonormal matrix obtain following expression element sequence since eigenvalue hessian obtain identity lim practical implication lemma policy gradient method use gaussian exploration covariance proportional ech reward scaling constant thus exploring scaled covariance ech obtain principled alternative heuristic results show also performs much better practice lemma intuitive interpretation large positive eigenvalue sharp minimum along corresponding eigenvector corresponding eigenvalue also large easiest see action space hessian eigenvalue scalar exploration mechanism case illustrated figure idea simple larger eigenvalue worse minimum exploration need leave hand negative maximum small since exploration needed case critic saddle points shown figure case shown figure explore little along blue eigenvector since intersection blue plane shows maximum much along red lemma relies crucially use step sizes diminishing length trajectory rather finite step sizes therefore step sequence serves useful intermediate stage simply taking one step using finite step sizes would mean covariance would converge either zero diverge infinity expected policy gradients figure action spaces critic saddle points case define exploration along eigenvector separately eigenvector since intersection red plane shows minimum want escape essence apply reasoning shown figure plane separately planes spanned corresponding eigenvector way escape saddle points action clipping describe gpg works environments action space bounded setting occurs frequently practice since systems often physical constraints bound fast robot arm accelerate typical solution problem simply start policy unbounded support action taken clip desired range follows equivalent max min justification process simply treat clipping operation max min part environment specification formally means transform original mdp defined another mdp defined max min since unbounded action space use machinery unbounded actions solve since mdp guaranteed optimal deterministic policy transformed policy call deterministic solution form max min practice mdp never constructed described process equivalent using algorithm meant action generated simply clipping algorithm course optimisation still local guarantee finding global merely increase chances assume without loss generality support interval ciosek whiteson algorithm policy gradients clipped actions initialise optimiser initialise policy parameterised converged clipping function max min update using action end figure vanishing gradients using hard clipping agent determine whether small large alone necessary sample interval order obtain meaningful policy update unlikely current policy shown red curve however algorithm introduce new bias sense reward obtained lead problems slow convergence policy gradient settings see consider figure hard clipping agent distinguish since squashing reduces value hence corresponding values identical based trajectories using way knowing mean policy adjusted order get useful gradient chosen falls interval since 
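A hedged sketch of the exploration rule discussed in the passage above: the Gaussian exploration covariance is taken proportional to exp(cH), where H is the action-Hessian of the critic and c > 0 plays the role of a reward-scaling constant, so directions with large positive eigenvalues (sharp minima) receive a lot of exploration while directions with large negative eigenvalues (sharp maxima) receive little. The final line shows the hard clipping used for bounded action spaces. The constants `c` and `sigma0` and the action bounds are illustrative, not values from the paper.

```python
# Hedged sketch: Hessian-driven Gaussian exploration, Sigma ~ sigma0 * exp(c * H).
import numpy as np
from scipy.linalg import expm

def exploration_covariance(H, c=1.0, sigma0=0.5):
    # expm acts on the eigenvalues: an eigenvalue h of H becomes exp(c * h) in
    # Sigma, so sharp minima (h >> 0) explore a lot, sharp maxima (h << 0) little.
    return sigma0 * expm(c * H)

H = np.array([[-4.0, 0.0], [0.0, 0.5]])   # one sharp maximum direction, one shallow minimum
Sigma = exploration_covariance(H)
action = np.random.multivariate_normal(np.zeros(2), Sigma)
clipped = np.clip(action, -1.0, 1.0)      # hard clipping to a bounded action range
```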
samples gaussian infinite support eventually happen nonzero gradient obtained however interval falls distant part tail convergence slow however problem mitigated gpg see consider figure policy shifts flat area critic becomes constant constant critic zero hessian generating boost exploration increasing standard deviation policy making much likely point sampled useful gradient obtained expected policy gradients figure gpg avoids vanishing gradient problem policy denoted red enters flat area entering flat area exploration immediately increases new distribution blue another way mitigating hard clipping problem use differentiable squashing function describe section quadric critics approximations gaussian policy gradients require quadric critic given state assumption different assuming quadric dependency state typically sufficient two reasons first linear quadratic regulators lqr feedback class problems widely studied classical control theory known quadric action vector given state crassidis junkins equation second often assumed todorov quadric critic quadric approximation general critic enough capture enough local structure preform policy optimisation step much way newton method deterministic unconstrained optimisation locally approximates function quadric used optimise function across several iterations corollary describe approximation method applied gpg approximate quadric function neighbourhood policy mean corollary approximate gaussian policy gradients arbitrary critic policy gaussian parameterised lemma critic doubly differentiable respect actions state hessian respect evaluated fixed proof begin approximating critic given using first two terms taylor expansion const indeed hessian discussed section considered type reward model ciosek whiteson series truncation function righthand side quadric use lemma actually obtain hessian could use automatic differentiation compute analytically sometimes may example relu units used hessian always zero cases approximate hessian generating number random around computing values locally fitting quadric akin methods control roth universal expected policy gradients covered common case continuous gaussian policies extend analysis policy classes provide two cases results following sections exponential family policies multivariate polynomial critics arbitrary order arbitrary policies possessing mean linear critics main claim analytic solution epg integral possible almost system hence describe epg universal exponential family policies polynomial critics describe general technique obtain analytic epg updates case policy belongs certain exponential family critic arbitrary polynomial result significant since polynomials approximate continuous function bounded interval arbitrary accuracy weierstrass stone since result holds nontrivial class distributions exponential family implies analytic solutions epg almost always obtained practice hence monte carlo sampling estimate inner integral typical spg rarely necessary lemma epg exponential families polynomial sufficient statistics consider class policies parameterised entry vector possibly multivariate polynomial entries vector moreover assume critic possibly multivariate polynomial course method truly universal completely arbitrary problem claim epg universal class systems arising lemmas section however class broad feel term universal justified similar claim neural networks based sigmoid nonlinearities universal even though approximate continuous functions opposed completely arbitrary ones expected policy gradients 
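As a concrete illustration of why the class just described is tractable: expectations of polynomials under such a policy reduce to uncentered moments, and those moments are derivatives of the moment generating function evaluated at zero, which is the device used in the proof that follows. A minimal sympy check for the one-dimensional Gaussian, whose sufficient statistic (a, a^2) is polynomial in the action; the symbols and the check itself are illustrative, not part of the paper.

```python
# Hedged illustration: uncentered moments as MGF derivatives at t = 0,
# shown for a 1-D Gaussian with MGF exp(mu*t + sigma^2 * t^2 / 2).
import sympy as sp

t, mu, sigma = sp.symbols("t mu sigma", real=True)
mgf = sp.exp(mu * t + sigma**2 * t**2 / 2)

def uncentered_moment(k):
    return sp.expand(sp.diff(mgf, t, k).subs(t, 0))

print(uncentered_moment(1))   # mu
print(uncentered_moment(2))   # mu**2 + sigma**2
print(uncentered_moment(3))   # mu**3 + 3*mu*sigma**2
```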
entries policy gradient update closed form expression terms uncentered moments vector containing coefficients polynomial vector containing coefficients polynomial multiplication vector uncentered moments order matching polynomials proof first rewrite inner integral expectation log log since polynomials multiplication polynomials still polynomial expectations expectations polynomials compute second expectation exploit fact since polynomial sum monomial terms right terms uncentered moments arrange coefficients vector vector obtain right term apply reasoning product obtain left term obtained moment generating function mgf indeed distribution form mgf guaranteed exist closed form bickel doksum hence computation moments reduces computation derivatives see details appendix note assumption polynomial respect action dependence state appears arbitrary neural network course polynomials universal approximators may efficient stable ones importance lemma currently mainly epg possible universal class approximators polynomials shows epg ciosek whiteson analytically tractable principle continuous open research question whether suitable universal approximators admitting analytic epg solutions identified reparameterised exponential families reparameterised critics lemma assumed function called sufficient statistic exponential family polynomial relax assumption approach start policy polynomial sufficient statistic introduce suitable reparameterisation function policy defined equivalent random variable representing action squashing assuming exists jacobian almost everywhere policy written det det following lemma develops epg method policies lemma consider invertible differentiable function define policy assume jacobian nonsingular except set zero consider critic denote reparameterised critic policy gradient update given formula proof log log det log log log det second equality perform variable substitution third equality use fact fourth equality use fact log det since parameterised universality polynomials holds bounded intervals weierstrass support policy may unbounded address unbounded approximation case saying practice critic learned samples thus typically accurate bounded interval anyway abuse notation slightly using probability distribution pdf expected policy gradients ready state universality result idea obtain reparameterised version epg lemma reparameterising critic policy using transformation following corollary general constructive result article corollary epg exponential families reparameterisation consider class policies parameterised defined consider reparameterisation function define every assume following invertible jacobian exists nonsingluar except set zero reparameterised policy polynomial lemma policy gradient update obtained follows proof apply lemmas lemma also practical application case want deal bounded action spaces discussed section hard clipping cause problem vanishing gradients default solution use gpg case use gpg instance dimensionality action space large computing covariance policy costly alleviate vanishing gradients problem using strictly monotonic squashing function one implication lemma set gaussian invoke lemma obtain exact analytic updates useful policy classes obtained setting sigmoid exponential function respectively long choose critic quadric quadric reparameterised version epg algorithm except uses squashing function instead clipping function aribtrary policies linear critics next consider case stochastic policy almost completely arbitrary possess mean need even 
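For the reparameterised (squashed) policies just discussed, a standard construction that fits the setting is a Gaussian pre-action passed through an invertible squashing function such as tanh, with the log-absolute-determinant of the Jacobian subtracted from the log-density. The sketch below is this common construction, useful when actions must lie in a bounded interval without hard clipping; it is not the paper's exact parameterisation, and the epsilon guard is a numerical convenience we introduce here.

```python
# Hedged sketch of a tanh-squashed Gaussian policy density, illustrating the
# change-of-variables (reparameterisation) discussed above.
import numpy as np
from scipy import stats

def squashed_gaussian_logpdf(a_squashed, mu, sigma, eps=1e-6):
    a_squashed = np.clip(a_squashed, -1.0 + eps, 1.0 - eps)
    a = np.arctanh(a_squashed)                        # invert f(a) = tanh(a)
    base = stats.norm.logpdf(a, loc=mu, scale=sigma)  # density of the pre-action
    log_det_jac = np.log(1.0 - a_squashed**2 + eps)   # d tanh(a)/da = 1 - tanh(a)^2
    return base - log_det_jac

def sample_squashed(mu, sigma):
    return np.tanh(np.random.normal(mu, sigma))
```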
already general exponential family policies used lemma corollary critic constrained linear actions following lemma slight modification observation made connection algorithm lemma epg arbitrary stochastic policies linear critics consider arbitrary nondegenerate probability distribution mean assume critic form coefficient vector policy gradient update given denotes integral mean ciosek whiteson proof ada ada since dpg already provides result policies see corollary conclude using linear critics means analytic solution reasonable policy class see lemma useful first consider systems arise discretisation continuous time systems time scale assume true smooth actions magnitude allowed action goes zero time step decreases linear critic sufficient approximation approximate smooth function linear function sufficiently small neighbourhood given point choose time step small enough action leave neighbourhood use lemma perform policy gradients else fails epg numerical quadrature despite broad framework shown article analytical solution impossible still perform integration numerically epg still beneficial cases action space low dimensional numerical quadrature cheap high dimensional still often worthwhile balance expense simulating system cost quadrature actually even extreme case expensive quadrature cheap simulation limited resources available quadrature could still better spent epg smart quadrature spg simple monte carlo crucial insight behind numerical epg integral given log depends two fully known quantities current policy current approximate critic therefore use standard numerical integration method compute actions integrand evaluated also use method quadrature abscissae designed course update derived lemma provides direction change policy mean means exploration performed using mechanism linear critic contain enough information determine exploration expected policy gradients experiments epg many potential uses focus empirically evaluating one particular application exploration driven hessian exponential introduced algorithm lemma replacing standard exploration continuous action domains end apply epg five domains modelled mujoco physics simulator todorov compare performance dpg spg experiments described extend previous conference work ciosek whiteson two ways added domain used detailed comparison ppo algorithm schulman practice epg differs deep dpg lillicrap silver exploration strategy though theoretical underpinnings also different hyperparameters dpg epg related exploration taken existing benchmark islam brockman exploration hyperparameters epg exploration covariance ech values obtained using grid search set domain since constant scaling rewards reasonable set whenever reward scaling already used hence exploration strategy one hyperparameter opposed specifying pair parameters standard deviation mean reversion constant used learning parameters domains used exploration constant diagonal covariance actor update approximately corresponds average variance process time parameters spg rest algorithm learning curves obtained confidence intervals show results independent evaluation runs used actions generated policy mean without exploration noise hessian gpg obtained using method follows step agent samples action values quadric fit norm since problem accomplished solving linear system hessian computation could greatly sped using approximate method even skipped completely used quadric critic however optimise part algorithm since core message gpg hessian useful compute efficiently results figure show epg 
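The Hessian-estimation procedure described for the experiments above (sample critic values around the current mean and fit a quadric by solving a linear least-squares problem) can be sketched as follows. The number of samples, the sampling scale, and the interfaces are illustrative, and, as the text notes, this step could be replaced by a cheaper approximation, or by automatic differentiation when the critic is smooth.

```python
# Hedged sketch: estimate the action-Hessian of a critic at the policy mean by
# sampling actions, evaluating the critic, and fitting
#   q(a) = a^T A a + b^T a + c
# with linear least squares; the Hessian of the fitted quadric is 2A.
import numpy as np

def fit_quadric_hessian(critic, s, mu, n_samples=100, scale=0.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    d = mu.shape[0]
    actions = mu + scale * rng.standard_normal((n_samples, d))
    targets = np.array([critic(s, a) for a in actions])
    iu = np.triu_indices(d)                       # monomials a_i * a_j with i <= j
    quad_feats = (actions[:, :, None] * actions[:, None, :])[:, iu[0], iu[1]]
    feats = np.hstack([quad_feats, actions, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(feats, targets, rcond=None)
    A = np.zeros((d, d))
    A[iu] = coef[: iu[0].size]
    A = 0.5 * (A + A.T)                           # symmetrise the quadratic form
    return 2.0 * A                                # Hessian of the fitted quadric
```

The returned Hessian is exactly what the exploration-covariance rule sketched earlier consumes.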
exploration strategy yields much better performance dpg furthermore spg poorly solving easiest domain reasonably quickly achieving slow progress failing entirely domains surprising since dpg introduced precisely solve problem high variance spg estimates type task spg initially learns quickly outperforming methods noisy gradient updates provide crude indirect form exploration happens suit problem clearly inadequate complex domains even simple domain leads subpar performance late learning tried learning covariance spg covariance estimate unstable regularisation hyperparameters tested matched spg performance even simplest domain ciosek whiteson epg runs dpg runs spg runs epg runs dpg runs spg runs epg runs dpg runs spg runs epg runs dpg runs spg runs epg runs dpg runs spg runs figure learning curves mean interval returns clipped number independent training runs parentheses horizontal axis scaled thousands steps addition epg typically learns consistently dpg three tasks empirical standard deviation across runs epg substantially lower dpg end learning shown table two domains confidence intervals around empirical standard deviations dpg epg wide draw conclusions surprisingly dpg learning curve declines late learning reason seen individual runs shown figure dpg spg suffer severe unlearning unlearning explained exploration noise since expected policy gradients domain table estimated standard deviation mean interval across runs learning figure three runs epg left dpg middle spg right domain demonstrating epg shows much less unlearning evaluation runs use mean action without exploring instead exploration dpg may coarse causing optimiser exit good optima spg unlearns due noise gradients noise also helps speed initial learning described transfer domains epg avoids problem automatically reducing noise finds good optimum hessian large negative eigenvalues described section fact epg stable way raises question whether instability algorithm inverted oscillating learning curve caused primarily inefficient exploration excessivly large differences subsequent policies address compare results proximal policy pptimisation ppo schulman policy gradient algorithm designed specifically include term penalising difference successive policies comparing epg result figure ppo schulman figure first row third plot left blue ppo curve clear epg stable suggests efficient adaptive exploration type used epg important stability even relatively simple domain related work section discuss relationship epg several methods ciosek whiteson sampling methods spg epg similarities vine sampling schulman uses intrinsically noisy monte carlo quadrature many samples however important differences first vine relies entirely reward rollouts use explicit critic means vine perform many independent rollouts requiring simulator reset second related difference vine uses actions estimation executes environment necessary purely monte carlo rollouts section shows need general explicit critic ultimately main weakness vine purely monte carlo method however example figure section shows even computationally expensive monte carlo method problem variance gradient estimator remains regardless baseline epg also related variance minimisation techniques interpolate two estimators however epg uses quadric linear critic crucial exploration furthermore completely eliminates variance inner integral opposed reducing direct way coping variance policy gradients simply reduce learning rate variance gradient would otherwise explode using adam kingma natural policy 
gradients kakade amari peters schaal trust region policy optimisation schulman proximal policy optimisation schulman adaptive step size method pirotta newton method furmston barber furmston parisi however results slow learning variance high sarsa known since introduction policy gradient methods sutton represent kind policy improvement opposed greedy improvement performed methods expected sarsa two main reasons improvement greedy maximisation operator may available continuous large discrete action spaces greedy step may large critic approximates value function argument method may converge faster need additional optimisation actor recently approaches combining features methods investigated newton method quadric actions used produce algorithm continuous domains previously tractable policy gradient methods discrete action spaces softmax family methods hybrid loss combining sarsa recently linked policy gradients via entropy term donoghue paper gpg exploration section seen another kind hybrid specifically changes mean policy slowly similar vanilla policy gradient method computes covariance greedily similar sarsa expected policy gradients dpg update policy mean obtained corollary dpg update linking two methods formalise equivalences epg dpg first epg method linear critic arbitrary critic approximated first term taylor expansion equivalent dpg actions given state drawn exploration policy form pdf exploration noise must depend policy parameters fact follows directly lemma says essence linear critic gives information shift mean policy information moments second gpg quadric critic arbitrary critic approximated first two terms taylor expansion equivalent dpg gaussian exploration policy covariance computed section follows corollary third generally critic necessarily quadric dpg kind epg particular choice quadrature using dirac measure follows theorem surprisingly means dpg normally considered also seen exploring gaussian noise defined quadric critic noise linear critic furthermore compatible critic dpg silver indeed linear actions hence relationship holds whenever dpg uses compatible furthermore lemma lends new legitimacy common practice replacing critic required dpg theory approximates one approximates done spg epg methods spg sometimes includes entropy term peters gradient order aid exploration making policy stochastic gradient differential entropy policy state defined log log log log log log log log notion compatibility critic different stochastic deterministic policy gradients discrete action spaces derivation integrals replaced sums holds entropy ciosek whiteson typically add entropy update policy gradient update weight log log equation makes clear performing entropy regularisation equivalent using different critic shifted log holds epg spg including spg discrete actions integral actions replaced sum follows adding entropy regularisation objective optimising total discounted reward setting corresponds shifting reward function term proportional log neu nachum indeed path consistency learning algorithm nachum contains formula similar though obtained independently next derive specialisation case parameters shared actor critic start policy gradient identity given replace true critic approximate critic since holds stochastic policy choose one form continuous case assume integral converges state assume approximate critic parameterised form policy parameterised well policy class given simplify gradient update even obtaining log log log log log log derivation could drop term log since depend baseline 
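The equivalence alluded to above, that adding an entropy bonus amounts to using a shifted critic, can be written out explicitly. The following is a hedged reconstruction of that identity (the entropy weight tau is a symbol introduced here, and the critic is treated as independent of the policy parameters); it uses only the fact that the gradient of the policy integrates to zero over actions.

```latex
\nabla_\theta\!\left[\int_{\mathcal{A}} \pi(a\mid s)\,\hat{Q}(s,a)\,da
      \;+\; \tau\, H\!\big(\pi(\cdot\mid s)\big)\right]
\;=\;
\int_{\mathcal{A}} \nabla_\theta \pi(a\mid s)\,
      \big(\hat{Q}(s,a) - \tau \log \pi(a\mid s)\big)\,da .
```

In other words, the per-state entropy-regularised contribution is the ordinary inner integral with the critic shifted by the scaled log-density, which is the shifted-critic form referred to above.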
shows case sharing parameters critic policy methods mnih entropy loss policy gradient loss redundant since entropy regularisation nothing except scale learning alternatively shared parameterisation policy gradient method simply subtracts entropy policy practice means policy gradient method kind parameter sharing quite similar learning critic alone simply acting according argmax values rather representing policy explicitly producing method similar sarsa argument ignore effects sampling exploration expected policy gradients learning policy gradients typically follows framework actorcritic degris denote behaviour policy corresponding measure method uses following reweighting approximation log log approximation necessary since samples generated using policy known approximate integral samples easy integral natural version epg emerges approximation see algorithm simply replaces inner integral log use analytic solution importance sampling term appear integral computed analytically sampling much less sampling importance correction course algorithm also requires critic importance sampling correction typically necessary indeed makes clear differs spg two places use use monte carlo estimator rather regular monte carlo inner integral algorithm expected policy gradients reweighting approximation initialise optimiser initialise policy parameterised converged estimated policy gradient per critic algorithm end value gradient methods value gradient methods fairbank fairbank alonso heess assume parameterisation policy policy gradients parameterised maximise recursively computing gradient value function notation policy gradient following connection value gradient ciosek whiteson initial state value gradient methods use recursive equation computes using successor state practice means trajectory truncated computation goes backward last state way applied resulting estimate used update policy recursive formulae based differentiated bellman equation different value gradient methods differ form recursive update value gradient obtained example stochastic value gradients svg introduce reparameterisation denote base noise distributions deterministic functions function thought mdp transition model svg rewrites using reparameterisation follows quantities computed chain rule known reward model transition model svg learns approximate samples using approximation obtain model value gradient recursion contrast derive related simpler value gradient method require model reparameterised starting log log approximated samples log log svg svg require model policy reparameterisation svg requires policy reparameterisation however svg inefficient since directly use reward computation value gradient expected policy gradients policy class normal policy squashing none expit none analytic update table summary useful analytic results expected policy gradients bounded action spaces assume bounding interval pair corresponds action taken successor state method requires learning critic svg requires model additional connection value gradient methods policy gradients since quantity theorem written think theorem showing obtain policy gradient value gradient without backwards iteration conclusions paper proposed new framework reasoning policy gradient methods called expected policy gradients epg integrates across action selected stochastic policy thus reducing variance compared existing stochastic policy gradient methods proved new general policy gradient theorem subsuming stochastic deterministic policy gradient theorems covers reasonable class 
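Returning to the off-policy variant sketched in the passage above: because the per-state contribution is an integral over actions under the target policy, computed analytically or by quadrature, no per-action importance weight is needed; the remaining approximation is simply that states are visited under the behaviour policy's distribution rather than the target's. A minimal hedged sketch, with all interfaces hypothetical:

```python
# Hedged sketch of an off-policy EPG update at a state s reached under a
# behaviour policy: the inner integral is still taken under the target policy,
# so no importance weight over actions appears.
def off_policy_epg_step(policy, critic, do_integral, s, actor_lr=1e-3):
    g = do_integral(policy, critic, s)   # analytic or quadrature inner integral
    policy.theta = policy.theta + actor_lr * g
    return policy
```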
policies showed analytical results policy update exist common cases lead practical algorithm analytic updates summarised table also gave universality results state certain broad conditions quadrature required epg performed analytically gaussian policies also developed novel approach exploration infers exploration covariance hessian critic analysis epg yielded new insights dpg delineated links two methods also discussed connections epg common techniques notably sarsa entropy regularisation finally evaluated gpg algorithm six practical domains showing outperforms existing techniques acknowledgments project received funding european research council erc european union horizon research innovation programme grant agreement number experiments made possible generous equipment grant nvidia ciosek whiteson appendix proofs detailed definitions first prove two lemmas concerning measure implicitly realised time far could find never proved explicitly definition occupancy definition truncated trajectory define trajectory truncated steps observation expectation wrt truncated trajectory since associated density dsn function definition expectation respect infinite trajectory bounded function lim sum side part symbol defined observation property expectation respect infinite trajectory lim limn bounded function definition occupancy measure expected policy gradients measure normalised general intuitively thought marginalising time system dynamics lemma property bounded function proof first equality follows observation property useful since expression left easily manipulated expression right estimated samples using monte carlo lemma generalised eigenfunction property bounded function proof dsds first equality follows form definition second one definition last equality follows definition definition markov reward process markov reward process tuple transition kernel distribution initial states reward distribution conditioned state discount constant mrp thought mdp fixed policy dynamics given marginalising actions since paper considers case one policy abuse notation slightly using symbol denote trajectories including actions without ciosek whiteson lemma second moment bellman equation consider markov reward process markov process probability density denote value function mrp denote second moment function value function mrp deterministic random variable given proof exactly bellman equation mrp theorem follows since bellman equation uniquely determines value function observation dominated value functions consider two markov reward processes markov process common mrps deterministic random variables meeting condition every value functions respective mrps satisfy every moreover states inequality value functions strict proof follows trivially expanding value function series comparing series elementwise computation moments exponential family consider moment generating function denote exponential family form given equation note occupies place definition mrp usually called reward distribution using symbol since shall apply lemma xes constructions distinct reward mdp solving expected policy gradients finite neighbourhood origin bickel doksum hence cross moments obtained denoted size sufficient statistic length vector however seek contains subset indices correspond vector simply use corresponding indices equation hand case introduce extended distribution vector concatenation use mgf restricted suitable set indices get moments references amari natural gradient works efficiently learning neural computation asadi allen 
Roderick, Mohamed, Konidaris, and Littman. Mean actor critic. arXiv, September.
Leemon Baird. Residual algorithms: reinforcement learning with function approximation. In Proceedings of the Twelfth International Conference on Machine Learning.
Shalabh Bhatnagar, Mohammad Ghavamzadeh, Mark Lee, and Richard Sutton. Incremental natural actor-critic algorithms. In Advances in Neural Information Processing Systems.
Peter Bickel and Kjell Doksum. Mathematical Statistics: Basic Ideas and Selected Topics, Vol. I. Prentice Hall.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv preprint.
Kamil Ciosek and Shimon Whiteson. Expected policy gradients. In Proceedings of the AAAI Conference on Artificial Intelligence, February.
John Crassidis and John Junkins. Optimal Estimation of Dynamic Systems. CRC Press.
Thomas Degris, Martha White, and Richard Sutton. Off-policy actor-critic. arXiv preprint.
Michael Fairbank. Value-gradient learning. PhD thesis, City University London.
Michael Fairbank and Eduardo Alonso. Value-gradient learning. In IJCNN: International Joint Conference on Neural Networks. IEEE.
Thomas Furmston and David Barber. A unifying perspective of parametric policy search methods for Markov decision processes. In Advances in Neural Information Processing Systems.
Thomas Furmston, Guy Lever, and David Barber. Approximate Newton methods for policy search in Markov decision processes. Journal of Machine Learning Research.
Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard Turner, and Sergey Levine. Q-Prop: sample-efficient policy gradient with an off-policy critic. arXiv preprint.
Shixiang Gu, Timothy Lillicrap, Ilya Sutskever, and Sergey Levine. Continuous deep Q-learning with model-based acceleration. In International Conference on Machine Learning.
Nicolas Heess, Gregory Wayne, David Silver, Tim Lillicrap, Tom Erez, and Yuval Tassa. Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems.
Riashat Islam, Peter Henderson, Maziar Gomrokchi, and Doina Precup. Reproducibility of benchmarked deep reinforcement learning tasks for continuous control. arXiv preprint.
Sham Kakade. A natural policy gradient. In Advances in Neural Information Processing Systems.
Diederik Kingma and Jimmy Ba. Adam: a method for stochastic optimization. arXiv preprint.
Michail Lagoudakis and Ronald Parr. Least-squares policy iteration. Journal of Machine Learning Research, December.
Weiwei Li and Emanuel Todorov. Iterative linear quadratic regulator design for nonlinear biological movement systems. In ICINCO.
Timothy Lillicrap, Jonathan Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint.
Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning.
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, and Dale Schuurmans. Bridging the gap between value and policy based reinforcement learning. arXiv preprint.
Gergely Neu and Anders Jonsson. A unified view of entropy-regularized Markov decision processes. arXiv preprint.
Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, and Volodymyr Mnih. Combining policy gradient and Q-learning.
Simone Parisi, Matteo Pirotta, and Marcello Restelli. Multi-objective reinforcement learning through continuous Pareto manifold approximation. Journal of Artificial Intelligence Research.
Jan Peters and Stefan Schaal. Policy gradient methods for robotics. In Intelligent Robots and Systems (IROS), IEEE International Conference.
Jan Peters and Stefan Schaal. Natural actor-critic. Neurocomputing.
Jan Peters and Stefan Schaal. Reinforcement learning of motor skills with policy gradients. Neural Networks.
Jan Peters, Katharina Mülling, and Yasemin Altun. Relative entropy policy search. In AAAI, Atlanta.
Matteo Pirotta, Marcello Restelli, and Luca Bascetta. Adaptive step-size for policy gradient methods. In Advances in Neural Information Processing Systems.
Martin Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons.
Michael Roth, Gustaf Hendeby, and Fredrik Gustafsson. Nonlinear Kalman filters explained: a tutorial on moment computations and sigma point methods. Journal of Advances in Information Fusion.
Gavin Rummery and Mahesan Niranjan. On-line Q-learning using connectionist systems. University of Cambridge, Department of Engineering.
John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In Proceedings of the International Conference on Machine Learning.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint.
David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In ICML.
Marshall Stone. The generalized Weierstrass approximation theorem. Mathematics Magazine.
Richard Sutton. Generalization in reinforcement learning: successful examples using sparse coarse coding. In Advances in Neural Information Processing Systems.
Richard Sutton and Andrew Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge.
Richard Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems.
Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: a physics engine for model-based control. In Intelligent Robots and Systems (IROS), IEEE International Conference.
George Uhlenbeck and Leonard Ornstein. On the theory of the Brownian motion. Physical Review.
Harm van Seijen, Hado van Hasselt, Shimon Whiteson, and Marco Wiering. A theoretical and empirical analysis of Expected Sarsa. In ADPRL: Proceedings of the IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning, March.
Karl Weierstrass. Über die analytische Darstellbarkeit sogenannter willkürlicher Functionen einer reellen Veränderlichen. Sitzungsberichte der Akademie der Wissenschaften zu Berlin.