Constrained Submodular Maximization via a Non-symmetric Technique

Niv Buchbinder*        Moran Feldman†

November

Abstract

The study of combinatorial optimization problems with a submodular objective has attracted much attention in recent years. Such problems are important in both theory and practice because their objective functions are very general. Obtaining further improvements for many submodular maximization problems boils down to finding better algorithms for optimizing a relaxation of these problems known as the multilinear relaxation. In this work we present an algorithm for optimizing the multilinear relaxation whose guarantee of 0.385 improves over the 0.372 guarantee of the best previous algorithm, given by Ene and Nguyen. Moreover, our algorithm is based on a new technique which is, arguably, simpler and more natural for the problem at hand. In a nutshell, previous algorithms for this problem rely on symmetry properties which are natural only in the absence of a constraint. Our technique avoids the need to resort to such properties, and thus seems to be a better fit for constrained problems.

*Department of Statistics and Operations Research, School of Mathematical Sciences, Tel Aviv University, Israel.
†Department of Mathematics and Computer Science, The Open University of Israel. E-mail: moranfe.

1. Introduction

The study of combinatorial optimization problems with a submodular objective has attracted much attention in recent years. Such problems are important in both theory and practice, as their objective functions are very general. Submodular functions generalize, for example, the cut functions of graphs and directed graphs, the mutual information function and matroid weighted rank functions. More specifically, from a theoretical perspective, many well-known problems in combinatorial optimization are in fact submodular maximization problems, including Generalized Assignment and Facility Location. From a practical perspective, submodular maximization problems have found uses in social networks, vision, machine learning and many other areas (the reader is referred, for example, to the comprehensive survey of Bach).

The techniques used by approximation algorithms for submodular maximization problems usually fall into one of two main approaches. The first approach is combinatorial in nature, and is mostly based on local search techniques and greedy rules. This approach was used as early as the late 1970s for maximizing a monotone submodular function subject to a matroid constraint, although some of these early works apply only to specific types of matroids. Later works used the same approach to handle also problems with non-monotone submodular objective functions and different constraints, yielding in some cases optimal algorithms. However, algorithms based on this approach tend to be highly tailored to the specific structure of the problem at hand, which makes extending them quite difficult.

The second approach used by approximation algorithms for submodular maximization problems overcomes this obstacle. This approach resembles a common paradigm for the design of approximation algorithms, and it involves two steps. In the first step, a fractional solution is found for a relaxation of the problem known as the multilinear relaxation. In the second step, the fractional solution is rounded to obtain an integral one while incurring only a bounded loss in the objective. This approach has been used to obtain improved approximations for many problems, and various techniques have been developed for rounding the fractional solution. These rounding techniques tend to be quite flexible, and they usually extend to many related problems. In particular, the contention resolution schemes framework yields a rounding procedure for every constraint which can be presented as the intersection of basic constraints such as knapsack constraints, matroid constraints and matching constraints. Given this wealth of rounding procedures, obtaining improvements for many important submodular maximization problems, such as maximizing a submodular function subject to a matroid or knapsack constraint, boils down to obtaining improved algorithms for finding a good fractional solution, i.e., for optimizing the multilinear relaxation.

Since maximizing the multilinear relaxation is the central point of this paper, we would like to present it in more formal terms. A submodular function is a set function f: 2^N -> R obeying f(A) + f(B) >= f(A ∪ B) + f(A ∩ B) for every pair of sets A, B ⊆ N. A submodular maximization problem is the problem of finding a set S maximizing f subject to some constraint. Formally, if C ⊆ 2^N is the collection of subsets of N obeying the constraint, then we are interested in the problem max {f(S) : S ∈ C}. A relaxation of this problem replaces C with a polytope P ⊆ [0, 1]^N containing the characteristic vectors of the sets of C. In addition, the relaxation must replace the function f with an extension function F: [0, 1]^N -> R, so that the relaxation is a fractional problem of the following format: max {F(x) : x ∈ P}. Defining the right extension function for the relaxation is a challenge since, unlike in the linear case, there is no single natural candidate. The objective that turned out to be useful, and is thus used by the multilinear relaxation, is known as the multilinear extension. The value of this extension for a vector x ∈ [0, 1]^N is defined as the expected value of f over a random subset R(x) ⊆ N containing every element u ∈ N independently with probability x_u. Formally, for every x ∈ [0, 1]^N,

F(x) = E[f(R(x))] = Σ_{S ⊆ N} f(S) · Π_{u ∈ S} x_u · Π_{u ∉ S} (1 − x_u).
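Although the sum above has exponentially many terms, F can be evaluated to any desired accuracy by sampling, which is how algorithms typically access it. The following minimal Python sketch (the function names are ours, not from the paper) estimates F(x) by averaging f over independent samples of R(x):

```python
import random

def multilinear_F(f, x, samples=10000):
    """Monte-Carlo estimate of the multilinear extension F(x) = E[f(R(x))],
    where the random set R(x) contains each element u independently with
    probability x[u]. Here `f` maps a frozenset to a real value and `x`
    maps every element of the ground set to a probability in [0, 1]."""
    total = 0.0
    for _ in range(samples):
        R = frozenset(u for u, p in x.items() if random.random() < p)
        total += f(R)
    return total / samples

# Example: f is the (submodular) coverage function of a small set system.
sets = {1: {'a', 'b'}, 2: {'b', 'c'}, 3: {'c', 'd'}}
f = lambda S: len(set().union(*(sets[u] for u in S))) if S else 0
print(multilinear_F(f, {1: 0.5, 2: 0.5, 3: 0.5}))
```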
The first algorithm for optimizing the multilinear relaxation was the continuous greedy algorithm designed by Calinescu et al. for monotone submodular functions. The algorithm finds a vector x ∈ P with F(x) >= (1 − 1/e) · f(OPT), where OPT is a set maximizing f among all sets whose characteristic vectors belong to P. Interestingly, the guarantee of the continuous greedy is optimal for monotone functions, even for the simple cardinality constraint. Optimizing the multilinear relaxation when f is not necessarily monotone proved to be a more challenging task. Initially, several algorithms were suggested for specific polytopes, and these were later improved by general algorithms designed to work whenever P is solvable and down-closed. Designing algorithms that work in this general setting is highly important, since many natural constraints fall into this framework. Moreover, the restriction of such algorithms to down-closed polytopes is unavoidable, as it was proved that no algorithm can produce a vector x ∈ P obeying F(x) >= c · f(OPT) for any constant c > 0 when P is solvable but not down-closed.

Until recently, the best algorithm for this general setting, called the measured continuous greedy, guaranteed to produce a vector x ∈ P obeying F(x) >= e^{-1} · f(OPT). The natural feel of the 1/e guarantee of the measured continuous greedy, and the fact that it was not improved for several years, made people suspect that it is optimal. Recently, some evidence against this conjecture was given in the form of an algorithm for the special case of a cardinality constraint whose approximation guarantee is slightly better than 1/e. Even more recently, Ene and Nguyen shattered the conjecture completely: extending the technique used for the cardinality constraint, they showed that one can get a 0.372-approximation guarantee for every solvable down-closed polytope. On the inapproximability side, Oveis Gharan and Vondrák proved that no algorithm can achieve an approximation guarantee better than 0.478, even for the matroid polytope of a partition matroid. Closing the gap between the best algorithm and this inapproximability result for such a fundamental problem remains an important open problem.

(Throughout, a set function f is monotone if f(A) <= f(B) for every A ⊆ B ⊆ N; a polytope P is solvable if one can optimize linear functions over it; and P is down-closed if, for every vector y ∈ P, every vector x obeying 0 <= x <= y must belong to P as well.)

1.1 Our Contribution

Our main contribution is an algorithm with an improved guarantee for maximizing the multilinear relaxation.

Theorem 1.1. There exists a polynomial time algorithm that, given a non-negative submodular function f: 2^N -> R and a solvable down-closed polytope P ⊆ [0, 1]^N, finds a vector x ∈ P obeying F(x) >= 0.385 · f(OPT), where OPT ∈ arg max {f(S) : 1_S ∈ P} and F is the multilinear extension of f.

Admittedly, the improvement in the guarantee obtained by our algorithm compared to the guarantee of Ene and Nguyen is relatively small. However, the technique underlying our algorithm is very different from, and arguably much cleaner than, the technique underlying the previous results improving over the natural 1/e guarantee. Moreover, we believe our technique is more natural for the problem at hand, and thus is likely to yield further improvements in the future. In the rest of this section we explain the intuition on which we base this belief.

Our results are based on the observation that the guarantee of the measured continuous greedy improves when the algorithm manages to increase the coordinates of its solution at a slow rate. Based on this observation, we run an instance of (a discretized version of) the measured continuous greedy and force it to raise the coordinates slowly. If this extra restriction does not affect the behavior of the algorithm significantly, then it produces a solution with an improved guarantee. Otherwise, we argue that at the point in which the extra restriction significantly affected the behavior of the measured continuous greedy, it reveals a vector containing a significant fraction of the optimal solution. One could then try to use the technique developed for unconstrained submodular maximization, which achieves a higher approximation guarantee, to extract from this vector a vector of large value which is guaranteed to belong to P as well.

Unfortunately, the use of the unconstrained submodular maximization technique in this approach is problematic for two reasons. First, this technique is based on ideas which are quite different from the ideas used in the analysis of the measured continuous greedy, which makes the combination of the two quite involved. Second, on a more abstract level, the unconstrained submodular maximization technique is based on a symmetry that exists only in the absence of a constraint, namely, the interchangeable roles played by a set and its complement when every set is feasible (note that the function g(S) = f(N \ S) is submodular whenever f is, and shares its properties with respect to the unconstrained problem).
However, this symmetry breaks once a constraint is introduced, and thus the unconstrained submodular maximization technique does not seem to be a good fit for constrained problems. Our algorithm replaces the symmetry-based unconstrained submodular maximization technique with a local search algorithm. Specifically, it first executes the local search algorithm. If the output of the local search algorithm is good, our algorithm simply returns it. Otherwise, we observe that a poor value for the output of the local search algorithm guarantees that this output is also far from the optimal solution in some sense, and the algorithm then uses this far-from-optimal solution to guide an instance of the measured continuous greedy and help it avoid bad decisions. It turns out that the analyses of the measured continuous greedy and the local search algorithm use similar ideas and notions, and thus, the two algorithms combine quite cleanly, as can be observed in Section 3.

2. Preliminaries

Our analysis uses another useful extension of submodular functions known as the Lovász extension. Given a submodular function f: 2^N -> R, its Lovász extension is the function defined by f̂(x) = ∫_0^1 f({u ∈ N : x_u >= θ}) dθ. This extension has many important applications; however, in this paper we use it only in the context of the following known result, which is an immediate corollary of the work of Lovász.

Lemma 2.1. Given the multilinear extension F and the Lovász extension f̂ of a submodular function f: 2^N -> R, it holds that F(x) >= f̂(x) for every vector x ∈ [0, 1]^N.

Let us now define some additional notation that we use. Given a set S ⊆ N and an element u ∈ N, we denote by 1_S and 1_u the characteristic vectors of the sets S and {u}, respectively, and by S + u and S − u the sets S ∪ {u} and S \ {u}, respectively. Given two vectors x, y ∈ [0, 1]^N, we denote by x ∨ y, x ∧ y and x ⊙ y the coordinate-wise maximum, minimum and multiplication of x and y, respectively. Finally, given a vector x ∈ [0, 1]^N and an element u ∈ N, we denote by ∂_u F(x) the derivative of F with respect to the coordinate of u at the point x. The following observation gives a simple formula for this derivative, which holds since F is a multilinear function.

Observation 2.2. For every x ∈ [0, 1]^N and u ∈ N, ∂_u F(x) = F(x ∨ 1_u) − F(x ∧ 1_{N − u}).

Observation 2.3. Let F be the multilinear extension of a submodular function f: 2^N -> R. Then, for every x ∈ [0, 1]^N and u ∈ N, F(x ∨ 1_u) − F(x) = (1 − x_u) · ∂_u F(x).

In the rest of the paper we assume, without loss of generality, two technical properties: every element of N may appear in a feasible solution (every element violating this assumption can be safely removed, since it does not belong to any feasible solution), and n = |N| is larger than any given constant (otherwise, it is possible to find a set obeying the constraint whose value is maximal in constant time).

Another issue that needs to be kept in mind is the representation of submodular functions. We are interested in algorithms whose time complexity is polynomial in n; however, the representation of a submodular function might be exponential in this size. Thus, we cannot assume that this representation is given as part of the input to the algorithm. The standard way to bypass this difficulty is to assume the algorithm has access to the function through an oracle. We assume the standard value oracle used by most previous works on submodular maximization, i.e., an oracle that, given a subset S ⊆ N, returns the value f(S).

3. The Main Algorithm

In this section we present the algorithm used to prove Theorem 1.1. The algorithm uses two components. The first component is a close variant of the fractional local search algorithm suggested by Chekuri et al., and it has the properties given by the following lemma.

Lemma 3.1 (follows from Chekuri et al.). There exists a polynomial time algorithm that returns a vector z ∈ P such that, with high probability, for every vector x ∈ P, 2F(z) >= F(z ∨ x) + F(z ∧ x) − O(ε) · f(OPT).

Proof. The lemmata of Chekuri et al. imply that, with high probability, the fractional local search algorithm they suggest terminates in polynomial time and outputs a vector z ∈ P which is an approximate local maximum, in the sense that no small local change of z can increase F(z) significantly. Moreover, the output vector belongs to P whenever the fractional local search algorithm terminates. Standard arguments, using Observations 2.2 and 2.3 together with the submodularity of f and the assumptions of Section 2, show that such an approximate local maximum obeys the inequality of the lemma for every x ∈ P. To complete the proof of the lemma, consider a procedure that executes the above algorithm for the promised (polynomial) number of operations and returns its output if it terminates within this number of operations. If the algorithm fails to terminate within this number of operations, which happens with diminishing probability, the procedure simply returns the all-zeros vector, which always belongs to P since P is down-closed. One can observe that this procedure has the properties guaranteed by the lemma. □

The second component of our algorithm is a new auxiliary algorithm which we present and analyze in Section 4. This auxiliary algorithm is the main technical contribution of this paper, and its guarantee is given by the following theorem.

Theorem 3.2. There exists a polynomial time algorithm that, given a vector z ∈ P and a value t_s ∈ [0, 1], outputs a vector y ∈ P such that F(y) is lower bounded by an expression, derived in Section 4, which combines f(OPT), F(z ∨ 1_OPT) and F(z ∧ 1_OPT) with coefficients that are exponential functions of t_s.
Our main algorithm executes the algorithm suggested by Lemma 3.1, followed by the algorithm suggested by Theorem 3.2. Notice that the second of these algorithms has two parameters: in addition to the parameter z, which is set to the output of the first algorithm, it has a parameter t_s, which we set to a constant to be determined later. Once the two algorithms terminate, our algorithm returns the output of the first algorithm with probability p (a constant to be determined later), and with the remaining probability it returns the output of the second algorithm. A formal description of this procedure appears as Algorithm 1. We observe that Lemma 3.1 and Theorem 3.2 imply together that Algorithm 1 is a polynomial time algorithm that always outputs a vector of P, so to prove Theorem 1.1 it only remains to analyze the quality of the solution produced by Algorithm 1. (Clearly, it would always be at least as good to return the better of the two solutions instead of randomizing between them. However, this would require the algorithm to either have oracle access to F or estimate the values of the two solutions using sampling; the latter can be done using standard techniques, but for the sake of simplicity we chose the easier-to-analyze approach of randomizing between the two solutions.)

Algorithm 1: Main Algorithm
1. Execute the algorithm suggested by Lemma 3.1, and let z be its output.
2. Execute the algorithm suggested by Theorem 3.2 with the parameters z and t_s, and let y be its output.
3. Return, with probability p, the solution z; otherwise, return the solution y.

Lemma 3.3. When its parameters p and t_s are set appropriately, Algorithm 1 produces a solution whose expected value is at least 0.385 · f(OPT).

Proof (sketch). Let E be the event that the output z of the algorithm suggested by Lemma 3.1 satisfies the inequality of that lemma. Since E is a high probability event, it is enough to prove that, conditioned on E, Algorithm 1 produces a solution whose expected value is at least (0.385 − O(ε)) · f(OPT); the rest of the proof is devoted to proving this claim, and throughout it everything is implicitly conditioned on E. Given the conditioning, we may plug x = 1_OPT into the inequality of Lemma 3.1 and get 2F(z) >= F(z ∨ 1_OPT) + F(z ∧ 1_OPT) − O(ε) · f(OPT). Next, the guarantee of Theorem 3.2 lower bounds the conditional expectation of F(y) in terms of the same quantities. Recall that Algorithm 1 returns z with probability p and y otherwise; hence, the expected value of its output is p · F(z) + (1 − p) · E[F(y)]. To optimize the constants, we introduce auxiliary variables for the unknown ratios F(z ∨ 1_OPT)/f(OPT) and F(z ∧ 1_OPT)/f(OPT), and then look for the values of p and t_s maximizing the coefficient of f(OPT) in the resulting lower bound while keeping non-negative the coefficients of the terms whose sign cannot be determined (so that they can be ignored). This can be formalized as a small numerical optimization program. Solving this program, the best solution yields values of p and t_s for which the objective function value, and hence also the expected value of the output of Algorithm 1, is at least 0.385 · f(OPT), which completes the proof of the lemma. □

4. Aided Measured Continuous Greedy

In this section we present the algorithm used to prove Theorem 3.2. Proving Theorem 3.2 directly is made involved by the fact that the vector z might be fractional. Instead, we prove the following simplified version of the theorem for integral vectors z = 1_Z, and then show that the simplified version implies the original one.

Theorem 4.1. There exists a polynomial time algorithm that, given a set Z ⊆ N and a value t_s ∈ [0, 1], outputs a vector y ∈ P such that F(y) is lower bounded by an expression combining f(OPT), f(OPT ∪ Z) and f(OPT ∩ Z) with coefficients that are exponential functions of t_s.

Next is the promised proof that Theorem 4.1 implies Theorem 3.2.

Proof of Theorem 3.2 (given Theorem 4.1). Consider an algorithm ALG that, given the arguments z and t_s specified by Theorem 3.2, executes the algorithm guaranteed by Theorem 4.1 with the value t_s and a random set Z distributed like R(z), and outputs the vector y produced by that algorithm. Theorem 4.1 guarantees, for every given realization of Z, a lower bound on F(y) in terms of f(OPT), f(OPT ∪ Z) and f(OPT ∩ Z). To complete the proof, take the expectation of the two sides of this bound over the randomness of Z, and observe that, by the definition of the multilinear extension, E[f(OPT ∪ R(z))] = F(z ∨ 1_OPT), E[f(OPT ∩ R(z))] = F(z ∧ 1_OPT) and E[f(R(z))] = F(z). □

In the rest of this section we give a proof of Theorem 4.1 which explains the main ideas necessary for proving the theorem, but uses two simplifications: it allows the algorithm direct oracle access to the multilinear extension F, and it presents the algorithm in the form of a continuous time algorithm which cannot be implemented on a discrete computer. There are known techniques for getting rid of these simplifications; a formal proof of Theorem 4.1 based on these techniques is given in Appendix A.

The algorithm we use in this simplified proof is given as Algorithm 2. The algorithm starts with the empty solution y(0) = 1_∅ at time t = 0, and grows its solution until the time reaches t = 1; its final solution is y(1). The way the solution grows varies with time. In the time range [t_s, 1] the solution grows like in the measured continuous greedy algorithm. On the other hand, in the earlier time range [0, t_s) the algorithm pretends that the elements of Z do not exist by giving them negative marginal profits, and it grows its solution the way the measured continuous greedy would have grown it given the ground set N \ Z. At time t_s the algorithm switches between the two ways it uses to grow the solution; thus, the notation t_s stands for the "switch" time.

Algorithm 2: Aided Measured Continuous Greedy(f, Z, t_s)
1. Let y(0) <- 1_∅.
2. For each time t ∈ [0, t_s): let x(t) <- arg max_{x ∈ P} Σ_{u ∈ N} x_u · w_u(t), where w_u(t) = F(y(t) ∨ 1_u) − F(y(t)) for u ∉ Z and w_u(t) = −1 for u ∈ Z (any negative weight suffices, since P is down-closed, to guarantee x_u(t) = 0 for u ∈ Z).
3. For each time t ∈ [t_s, 1]: let x(t) <- arg max_{x ∈ P} Σ_{u ∈ N} x_u · w_u(t), where w_u(t) = F(y(t) ∨ 1_u) − F(y(t)) for every u ∈ N.
4. At every time t, increase y at the rate dy(t)/dt = x(t) ⊙ (1_N − y(t)).
5. Return y(1).
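The following minimal Python sketch shows a naive discretization of Algorithm 2, under stated assumptions: `P_lp` stands for an assumed linear-optimization (solvability) oracle for P, and `multilinear_F` is the sampling estimator sketched earlier; neither name comes from the paper.

```python
def aided_measured_continuous_greedy(f, N, Z, P_lp, ts, steps=100):
    """Discretized sketch of Algorithm 2. `P_lp(w)` is an assumed LP
    oracle returning a point x of P (a dict u -> x_u in [0, 1])
    maximizing the linear objective sum_u w[u] * x[u]. Before the
    switch time `ts`, elements of Z get a negative weight, so the
    algorithm behaves as if they did not exist."""
    y = {u: 0.0 for u in N}
    dt = 1.0 / steps
    for step in range(steps):
        t = step * dt
        w = {}
        for u in N:
            if t < ts and u in Z:
                w[u] = -1.0                  # pretend u does not exist
            else:
                y_up = dict(y)
                y_up[u] = 1.0                # y with u's coordinate set to 1
                w[u] = multilinear_F(f, y_up) - multilinear_F(f, y)
        x = P_lp(w)                          # direction maximizing the weights
        for u in N:                          # measured update: dy = x*(1-y)*dt
            y[u] += dt * x[u] * (1.0 - y[u])
    return y
```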
We first note that Algorithm 2 outputs a vector of P.

Observation 4.2. Algorithm 2 outputs a vector of P.

Proof. Observe that at each time t, dy(t)/dt = x(t) ⊙ (1_N − y(t)) <= x(t), which also implies y_u(t) <= 1 for every u ∈ N and time t, since the rate of increase of y_u vanishes as y_u approaches 1. Therefore, y(1) = ∫_0^1 x(t) ⊙ (1_N − y(t)) dt is coordinate-wise dominated by ∫_0^1 x(t) dt, which is a convex combination of vectors of P. Thus, y(1) belongs to P because P is convex and down-closed. □

The following lemma lower bounds the rate at which F(y(t)) increases as a function of time.

Lemma 4.3. For every t ∈ [0, t_s), dF(y(t))/dt >= F(y(t) ∨ 1_{OPT \ Z}) − F(y(t)); and for every t ∈ (t_s, 1], dF(y(t))/dt >= F(y(t) ∨ 1_OPT) − F(y(t)).

Proof. By the chain rule, dF(y(t))/dt = Σ_{u ∈ N} (dy_u(t)/dt) · ∂_u F(y(t)) = Σ_{u ∈ N} x_u(t) · (1 − y_u(t)) · ∂_u F(y(t)) = Σ_{u ∈ N} x_u(t) · [F(y(t) ∨ 1_u) − F(y(t))], where the last equality follows from Observation 2.3. Consider first the case t ∈ [0, t_s). In this time period the algorithm chooses a vector x(t) maximizing the linear objective Σ_u x_u · w_u(t) over P, and the negative weights of the elements of Z force x_u(t) = 0 for every u ∈ Z. Since 1_{OPT \ Z} ∈ P by down-closedness, the objective value of x(t) is at least Σ_{u ∈ OPT \ Z} w_u(t) = Σ_{u ∈ OPT \ Z} [F(y(t) ∨ 1_u) − F(y(t))] >= F(y(t) ∨ 1_{OPT \ Z}) − F(y(t)), where the last inequality holds by submodularity. Plugging this into the above equality yields the first part of the lemma. For t ∈ (t_s, 1] the algorithm chooses a vector x(t) maximizing Σ_u x_u · w_u(t) with the weights of all elements present; since 1_OPT ∈ P, the same argument yields the second part of the lemma. □

Lemma 4.4. For every time t ∈ [0, 1] and element u ∈ N, y_u(t) <= 1 − e^{−t}. Moreover, for every u ∈ Z, y_u(t) = 0 whenever t <= t_s, and consequently y_u(t) <= 1 − e^{−(t − t_s)} for t ∈ [t_s, 1].

Proof. First note that for every element u, dy_u(t)/dt = x_u(t) · (1 − y_u(t)) <= 1 − y_u(t); the solution of this differential inequality with the initial condition y_u(0) = 0 gives y_u(t) <= 1 − e^{−t}. To get the tighter bound, note that for every time t ∈ [0, t_s) the algorithm chooses a vector x(t) maximizing a linear function that assigns a negative weight to the elements of Z; since P is down-closed, the maximizer must have x_u(t) = 0 for every u ∈ Z, which means that y_u(t) remains 0 throughout [0, t_s]. Plugging this improved initial condition into the above differential inequality yields the promised tighter bound in the range [t_s, 1]. □

A standard lemma from the analysis of the measured continuous greedy states that, for every vector x ∈ [0, 1]^N and set A ⊆ N, F(x ∨ 1_A) >= (1 − max_{u ∈ N} x_u) · f(A). Combining this lemma with the coordinate bounds of Lemma 4.4 (and with Lemma 2.1 and the submodularity of f, which allow the contribution of the elements of Z to be separated) yields, for every time t, lower bounds on F(y(t) ∨ 1_{OPT \ Z}) and F(y(t) ∨ 1_OPT) in terms of f(OPT), f(OPT ∪ Z) and f(OPT ∩ Z), with factors of the form e^{−t} and e^{−(t − t_s)}. In particular, F(y(t) ∨ 1_{OPT \ Z}) >= e^{−t} · f(OPT \ Z).

Corollary 4.5. Plugging the above bounds into Lemma 4.3 yields, for every time t, a lower bound on dF(y(t))/dt of the form: for t ∈ [0, t_s), dF(y(t))/dt >= e^{−t} · f(OPT \ Z) − F(y(t)); and for t ∈ (t_s, 1], a corresponding bound whose right hand side combines f(OPT), f(OPT ∪ Z) and f(OPT ∩ Z) with factors e^{−t} and e^{−(t − t_s)}, minus F(y(t)).

Using the last corollary we can complete the proof of Theorem 4.1.

Proof of Theorem 4.1. We have already seen that the output of Algorithm 2 belongs to P, so it remains to lower bound F(y(1)). Corollary 4.5 describes a differential inequality for the function F(y(t)). Given the boundary condition F(y(0)) >= 0, the solution of this differential inequality within the range [0, t_s) yields a lower bound on F(y(t_s)); using this bound as the new boundary condition, we then solve the differential inequality in the range [t_s, 1], and plugging t = 1 into the resulting solution gives the guarantee of the theorem. The final simplification of the resulting expression uses the inequality f(OPT \ Z) >= f(OPT) − f(OPT ∩ Z), which follows from submodularity since f(∅) >= 0. □

We note that Corollary 4.5 follows even from a weaker version of Lemma 4.4; the stronger version proved above is, however, useful for the formal proof of Theorem 4.1 given in Appendix A.
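For intuition regarding the kind of computation performed in the last proof, the following block presents the classical version of this computation for the unaided measured continuous greedy, whose differential inequality is dF(y(t))/dt >= e^{−t} f(OPT) − F(y(t)); the aided version differs only in the extra terms contributed by Z.

```latex
% The classical computation for the (unaided) measured continuous greedy.
% Multiplying the differential inequality by the integrating factor e^{t}:
\frac{d}{dt}\Big[e^{t}\,F(y(t))\Big]
  \;=\; e^{t}\Big[\tfrac{d}{dt}F(y(t)) + F(y(t))\Big]
  \;\ge\; e^{t}\cdot e^{-t}\,f(OPT) \;=\; f(OPT).
% Integrating from 0 to t, using the boundary condition F(y(0)) >= 0:
e^{t}\,F(y(t)) \;\ge\; t\cdot f(OPT)
\quad\Longrightarrow\quad
F(y(t)) \;\ge\; t\,e^{-t}\,f(OPT),
% which at t = 1 recovers the familiar 1/e guarantee.
```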
References

A. A. Ageev and M. I. Sviridenko. An approximation algorithm for the uncapacitated facility location problem. Discrete Applied Mathematics.
N. Alon and J. H. Spencer. The Probabilistic Method. Wiley-Interscience, second edition.
P. Austrin, S. Benabbas and K. Georgiou. Better balance by being biased: a 0.8776-approximation for max bisection. In SODA.
F. Bach. Learning with submodular functions: a convex optimization perspective. Foundations and Trends in Machine Learning.
Y. Boykov and M.-P. Jolly. Interactive graph cuts for optimal boundary and region segmentation of objects in N-D images. In ICCV.
N. Buchbinder, M. Feldman, J. (Seffi) Naor and R. Schwartz. A tight linear time (1/2)-approximation for unconstrained submodular maximization. In FOCS.
N. Buchbinder, M. Feldman, J. (Seffi) Naor and R. Schwartz. Submodular maximization with cardinality constraints. In SODA.
G. Calinescu, C. Chekuri, M. Pál and J. Vondrák. Maximizing a monotone submodular function subject to a matroid constraint. SIAM Journal on Computing.
C. Chekuri and A. Ene. Approximation algorithms for submodular multiway partition. In FOCS.
C. Chekuri and S. Khanna. A polynomial time approximation scheme for the multiple knapsack problem. SIAM Journal on Computing.
C. Chekuri, J. Vondrák and R. Zenklusen. Dependent randomized rounding via exchange properties of combinatorial structures. In FOCS.
C. Chekuri, J. Vondrák and R. Zenklusen. Submodular function maximization via the multilinear relaxation and contention resolution schemes. In STOC.
C. Chekuri, J. Vondrák and R. Zenklusen. Submodular function maximization via the multilinear relaxation and contention resolution schemes. SIAM Journal on Computing.
R. Cohen, L. Katzir and D. Raz. An efficient approximation for the generalized assignment problem. Information Processing Letters.
M. Conforti and G. Cornuéjols. Submodular set functions, matroids and the greedy algorithm: tight worst-case bounds and some generalizations of the Rado-Edmonds theorem. Discrete Applied Mathematics.
G. Cornuéjols, M. Fisher and G. Nemhauser. Location of bank accounts to optimize float: an analytic study of exact and approximate algorithms. Management Science.
G. Cornuéjols, M. Fisher and G. Nemhauser. On the uncapacitated location problem. Annals of Discrete Mathematics.
A. Ene and H. L. Nguyen. Constrained submodular maximization: beyond 1/e. In FOCS.
U. Feige. A threshold of ln n for approximating set cover. Journal of the ACM.
U. Feige and M. Goemans. Approximating the value of two prover proof systems, with applications to MAX 2SAT and MAX DICUT. In ISTCS.
U. Feige, V. Mirrokni and J. Vondrák. Maximizing non-monotone submodular functions. SIAM Journal on Computing.
U. Feige and J. Vondrák. Approximation algorithms for allocation problems: improving the factor of 1 − 1/e. In FOCS.
M. Feldman. Maximization Problems with Submodular Objective Functions. PhD thesis, Technion - Israel Institute of Technology.
M. Feldman, J. Naor and R. Schwartz. A unified continuous greedy algorithm for submodular maximization. In FOCS.
M. Feldman, J. (Seffi) Naor, R. Schwartz and J. Ward. Improved approximations for k-exchange systems. In ESA.
M. Fisher, G. Nemhauser and L. Wolsey. An analysis of approximations for maximizing submodular set functions II. In Polyhedral Combinatorics, Mathematical Programming Studies. Springer, Berlin Heidelberg.
L. Fleischer, M. Goemans, V. Mirrokni and M. Sviridenko. Tight approximation algorithms for maximum general assignment problems. In SODA.
A. Frieze and M. Jerrum. Improved approximation algorithms for MAX k-CUT and MAX BISECTION. In IPCO.
S. Oveis Gharan and J. Vondrák. Submodular maximization by simulated annealing. In SODA.
M. Goemans and D. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM.
E. Halperin and U. Zwick. Combinatorial approximation algorithms for the maximum directed cut problem. In SODA.
J. Hartline, V. Mirrokni and M. Sundararajan. Optimal marketing strategies over social networks. In WWW.
J. Håstad. Some optimal inapproximability results. Journal of the ACM.
D. Hausmann and B. Korte. K-greedy algorithms for independence systems. Oper. Res. Ser.
D. Hausmann, B. Korte and T. Jenkyns. Worst case analysis of greedy type algorithms for independence systems. Mathematical Programming Study.
S. Jegelka and J. Bilmes. Submodularity beyond submodular energies: coupling edges in graph cuts. In IEEE Conference on Computer Vision and Pattern Recognition.
T. Jenkyns. The efficacy of the "greedy" algorithm. Congressus Numerantium.
R. M. Karp. Reducibility among combinatorial problems. In R. E. Miller and J. W. Thatcher, editors, Complexity of Computer Computations. Plenum Press.
D. Kempe, J. Kleinberg and É. Tardos. Maximizing the spread of influence through a social network. In SIGKDD.
S. Khot, G. Kindler, E. Mossel and R. O'Donnell. Optimal inapproximability results for MAX-CUT and other 2-variable CSPs? SIAM Journal on Computing.
S. Khuller, A. Moss and J. (Seffi) Naor. The budgeted maximum coverage problem. Information Processing Letters.
B. Korte and D. Hausmann. An analysis of the greedy heuristic for independence systems. Annals of Discrete Mathematics.
A. Krause, A. Singh and C. Guestrin. Near-optimal sensor placements in Gaussian processes: theory, efficient algorithms and empirical studies. Journal of Machine Learning Research.
A. Krause and C. Guestrin. Near-optimal nonmyopic value of information in graphical models. In UAI.
A. Krause, J. Leskovec, C. Guestrin, J. VanBriesen and C. Faloutsos. Efficient sensor placement optimization for securing large water distribution networks. Journal of Water Resources Planning and Management.
A. Kulik, H. Shachnai and T. Tamir. Approximations for monotone and nonmonotone submodular maximization with knapsack constraints. Mathematics of Operations Research.
J. Lee, V. Mirrokni, V. Nagarajan and M. Sviridenko. Maximizing nonmonotone submodular functions under matroid or knapsack constraints. SIAM Journal on Discrete Mathematics.
J. Lee, M. Sviridenko and J. Vondrák. Submodular maximization over multiple matroids via generalized exchange properties. In APPROX.
H. Lin and J. Bilmes. Multi-document summarization via budgeted maximization of submodular functions. In the North American Chapter of the Association for Computational Linguistics Human Language Technology Conference, Los Angeles.
H. Lin and J. Bilmes. A class of submodular functions for document summarization. In HLT.
L. Lovász. Submodular functions and convexity. In A. Bachem, M. Grötschel and B. Korte, editors, Mathematical Programming: The State of the Art. Springer.
M. Grötschel, L. Lovász and A. Schrijver. The ellipsoid method and its consequences in combinatorial optimization. Combinatorica.
G. Nemhauser and L. Wolsey. Best algorithms for approximating the maximum of a submodular set function. Mathematics of Operations Research.
G. Nemhauser, L. Wolsey and M. Fisher. An analysis of approximations for maximizing submodular set functions I. Mathematical Programming.
M. Sviridenko. A note on maximizing a submodular set function subject to a knapsack constraint. Operations Research Letters.
L. Trevisan, G. Sorkin, M. Sudan and D. Williamson. Gadgets, approximation, and linear programming. SIAM Journal on Computing.
J. Vondrák. Symmetry and approximability of submodular maximization problems. SIAM Journal on Computing.

A. Formal Proof of Theorem 4.1

In this section we give a formal proof of Theorem 4.1. The proof is based on the ideas used in the simplified proof of Section 4, but employs additional known techniques in order to get rid of the issues that make the proof of Section 4 informal. The algorithm we use to prove the theorem is given as Algorithm 3, and it is a discrete variant of Algorithm 2. While reading Algorithm 3, it is important to observe that the choice of the values of δ guarantees that the variable t takes only values which are integer multiples of δ; thus, the vectors y(t) and x(t) are well defined.

Algorithm 3: Aided Measured Continuous Greedy (discrete)
1. Initialization: let δ be a polynomially small step size for which t_s is an integer multiple of δ, let t <- 0 and let y(0) <- 1_∅.
2. While t < 1:
 (a) For each u ∈ N, let w_u(t) be an estimate of F(y(t) ∨ 1_u) − F(y(t)), obtained by averaging the value of this expression over polynomially many independent samples.
 (b) If t < t_s, let x(t) <- arg max_{x ∈ P} { Σ_{u ∉ Z} x_u · w_u(t) − Σ_{u ∈ Z} x_u }; otherwise, let x(t) <- arg max_{x ∈ P} Σ_{u ∈ N} x_u · w_u(t).
 (c) Let y(t + δ) <- y(t) + δ · x(t) ⊙ (1_N − y(t)), and update t <- t + δ.
3. Return y(1).

We begin the analysis of Algorithm 3 by showing that y(t) remains within the cube [0, 1]^N throughout its execution; without this observation the algorithm is not even well defined.

Observation A.1. y(t) ∈ [0, 1]^N for every value of t.

Proof. We prove the observation by induction. Clearly, the observation holds for y(0) = 1_∅. Assume now that the observation holds for some time t. Then, for every u ∈ N, y_u(t + δ) = y_u(t) + δ · x_u(t) · (1 − y_u(t)) <= y_u(t) + (1 − y_u(t)) = 1, where the inequality holds since δ · x_u(t) <= 1 and, by the induction hypothesis, 1 − y_u(t) >= 0. A similar argument also implies y_u(t + δ) >= y_u(t) >= 0. □

Using the last observation it is possible to prove the following counterpart of Observation 4.2.

Corollary A.2. Algorithm 3 always outputs a vector of P.

Proof. Let T be the set of values the variable t takes during the execution of the algorithm, and observe that |T| = δ^{-1}. The output of the algorithm can be rewritten as y(1) = δ · Σ_{t ∈ T} x(t) ⊙ (1_N − y(t)), and the right hand side of this equality is coordinate-wise dominated by δ · Σ_{t ∈ T} x(t), which is a convex combination of vectors of P. Since P is convex and down-closed, this implies y(1) ∈ P. □

The next step towards showing that Algorithm 3 proves Theorem 4.1 is analyzing its approximation ratio. We start the analysis by showing that, with high probability, all the estimations made by the algorithm are quite accurate. Let E be the event that every estimate w_u(t) deviates from F(y(t) ∨ 1_u) − F(y(t)) by at most a small error term fixed by the analysis, for every element u ∈ N and time t ∈ T.

Lemma A.3. The event E occurs with high probability.

Proof. Consider the calculation of a given estimate w_u(t), which is done by averaging the value of the corresponding expression over independent samples. Each individual sample has the correct expectation by definition, and its absolute value can be upper bounded using the assumptions of Section 2. Thus, a symmetric version of the Hoeffding bound (see, e.g., Alon and Spencer) implies that each individual estimate deviates from its expectation by more than the allowed error only with exponentially small probability. Since the algorithm calculates such an estimate for every combination of an element u ∈ N and a time t ∈ T, and there are only n · δ^{-1} such combinations, the union bound upper bounds the probability that at least one estimate is bad, which completes the proof. □

The next step is to lower bound the increase of F(y(t)) in each step of the algorithm. To make the statement of this lower bound easier, the following definition is useful: let D(t) denote the set OPT \ Z when t < t_s, and the set OPT otherwise.

Lemma A.4. Conditioned on E, for every time t ∈ T, Σ_{u ∈ N} x_u(t) · w_u(t) >= Σ_{u ∈ D(t)} w_u(t).
Proof. Recall that the vector x(t) maximizes the objective function of the algorithm at time t, and that this objective depends on whether t < t_s: for t < t_s the objective assigns the value Σ_{u ∉ Z} x_u · w_u(t) − Σ_{u ∈ Z} x_u to a vector x, and for t >= t_s it assigns the value Σ_{u ∈ N} x_u · w_u(t). In both cases the definition of D(t) guarantees that 1_{D(t)} is a vector of P whose objective value is Σ_{u ∈ D(t)} w_u(t) (for t < t_s this uses the fact that D(t) ∩ Z = ∅); hence, the objective value of x(t) is at least this large. For t >= t_s this is exactly the claim of the lemma. For t < t_s it remains to note that the coefficient of every u ∈ Z in the objective is negative while P is down-closed, and thus x_u(t) = 0 for every u ∈ Z, which makes the objective value of x(t) equal to Σ_{u ∈ N} x_u(t) · w_u(t). □

We also need a rephrased version of a known lemma which relates the change in F between two close vectors to the corresponding sum of partial derivatives, up to an error term of O(δ²) · f(OPT).

Corollary A.5. Conditioned on E, for every time t ∈ T, F(y(t + δ)) − F(y(t)) >= δ · [F(y(t) ∨ 1_{D(t)}) − F(y(t))] − O(δ²) · f(OPT).

Proof (sketch). Observe that y(t + δ) − y(t) = δ · x(t) ⊙ (1_N − y(t)); hence, by the above known lemma and Observation 2.3, F(y(t + δ)) − F(y(t)) >= δ · Σ_{u ∈ N} x_u(t) · [F(y(t) ∨ 1_u) − F(y(t))] − O(δ²) · f(OPT). Conditioned on E, the estimates w_u(t) are close to the bracketed marginals, so Lemma A.4 applies; submodularity then gives Σ_{u ∈ D(t)} [F(y(t) ∨ 1_u) − F(y(t))] >= F(y(t) ∨ 1_{D(t)}) − F(y(t)), which completes the proof. □

The next two lemmata correspond to Lemma 4.4 and to the standard lemma used after it; their proofs go along the same lines as in Section 4, except that the coordinate bounds are replaced with slightly weaker discrete counterparts.

Lemma A.6. For every time t ∈ T and element u ∈ N, y_u(t) <= 1 − (1 − δ)^{t/δ}; moreover, for every u ∈ Z, y_u(t) = 0 whenever t <= t_s, and consequently y_u(t) <= 1 − (1 − δ)^{(t − t_s)/δ} for t >= t_s.

Proof. We first prove by induction that y_u(t) <= 1 − (1 − δ)^{t/δ} for every time t ∈ T. For t = 0 the claim clearly holds. Next, assume the claim holds for some time t; then y_u(t + δ) = y_u(t) + δ · x_u(t) · (1 − y_u(t)) <= (1 − δ) · y_u(t) + δ <= 1 − (1 − δ)^{(t + δ)/δ}, where the last inequality follows by plugging in the induction hypothesis. This completes the proof of the first part of the lemma. It remains to prove the second part: note that for every time t < t_s the algorithm chooses a vector x(t) maximizing a linear function which assigns a negative weight to the elements of Z; since P is down-closed, the maximizer must have x_u(t) = 0 for every element u ∈ Z, which means y_u(t) remains 0 throughout [0, t_s]. The claim for t >= t_s then follows exactly as above, with the shifted initial condition, and, in addition to proving the lemma for this time range, the last inequality also allows us to relate the discrete bounds to the continuous ones, i.e., 1 − (1 − δ)^{t/δ} <= 1 − e^{−t} + O(δ). □

Lemma A.7. Combining the coordinate bounds of Lemma A.6 with the standard lemma of Section 4 yields, for every time t ∈ T, lower bounds on F(y(t) ∨ 1_{D(t)}) in terms of f(OPT), f(OPT ∪ Z) and f(OPT ∩ Z), identical to the bounds of Section 4 up to additive O(δ) error terms.

Corollary A.8. Conditioned on E, combining Corollary A.5 and Lemma A.7 yields a recursive formula which lower bounds F(y(t + δ)) in terms of F(y(t)), up to error terms of O(δ²) · f(OPT) per step.

Corollary A.8 bounds the increase of F(y(t)) per step, and thus gives a recursive formula that can be used to lower bound F(y(1)); the remaining task is to solve this formula. Let h(t) be the function defined as the exact solution of the recursion without the error terms; it is the discrete counterpart of the solution of the differential inequality of Corollary 4.5. The next lemmata are proved by induction over the times of T.

Lemma A.9. Conditioned on E, for every time t ∈ T, F(y(t)) >= h(t) − c · t · δ · f(OPT) for some constant c independent of δ. The proof uses the induction hypothesis and Corollary A.8: the error accumulated per step is O(δ²) · f(OPT), and since there are t/δ steps up to time t, the total accumulated error is O(t · δ) · f(OPT).

Lemma A.10. For every time t ∈ T, h(t) is lower bounded by the solution of the continuous differential inequality of Corollary 4.5 at time t, up to an additive O(δ) · f(OPT) term. The proof is again by induction, using the monotonicity of the functions involved within the relevant ranges.

The last two lemmata give the promised lower bound on F(y(1)) conditioned on E, up to an additive O(δ) · f(OPT) error which can be made as small as necessary by choosing δ appropriately. Since F(y(1)) >= 0 always holds and E occurs with high probability, the law of total expectation yields the same lower bound, up to another vanishing error term, on the unconditional expectation E[F(y(1))]. Theorem 4.1 now follows immediately by combining this bound with Corollary A.2.
On Self-Organizing Maps whose Topologies can be Learned with Adaptive Binary Search Trees using Conditional Rotations

César A. Astudillo*        B. John Oommen†

Abstract

Numerous variants of Self-Organizing Maps (SOMs) have been proposed in the literature, including those which also possess an underlying structure, and in some cases this structure itself can be defined by the user. Although the concepts of growing the SOM and updating it have been studied, the whole issue of using a self-organizing Adaptive Data Structure (ADS) to further enhance the properties of the underlying SOM has been unexplored. In an earlier work we impose an arbitrary, user-defined, tree-like topology onto the codebooks, which consequently enforced a neighborhood phenomenon, i.e., the Bubble of Activity (BoA). In this paper we consider how the underlying tree itself can be rendered dynamic and adaptively transformed. To do this, we present methods by which a SOM with an underlying Binary Search Tree (BST) structure can be adaptively re-structured using Conditional Rotations (CONROT). These rotations on the nodes of the tree are local, can be done in constant time, and are performed so as to decrease the Weighted Path Length (WPL) of the entire tree. We also introduce the pioneering concept referred to as Neural Promotion, where neurons gain prominence in the Neural Network (NN) as their significance increases. We are not aware of any research which deals with the issue of Neural Promotion. The advantage of the proposed scheme is that the user need not be aware of the topological peculiarities of the stochastic data distribution. Rather, the algorithm, referred to as the TTOSOM with Conditional Rotations (TTOCONROT), converges in such a manner that the neurons are ultimately placed in the input space so as to represent its stochastic distribution, and additionally, the neighborhood properties of the neurons suit the best BST that represents the data. These properties have been confirmed by our experimental results on a variety of data sets. We submit that all these concepts are both novel and of a pioneering sort.

Keywords: Adaptive Data Structures, Binary Search Trees, Self-Organizing Maps.

*Universidad de Talca, Merced, Chile. E-mail: castudillo. The author is an Assistant Professor with the Department of Computer Science, Universidad de Talca. This work was partially supported by a FONDECYT grant, Chile. A preliminary version of this paper was presented at the Australasian Joint Conference on Artificial Intelligence, Melbourne, Australia, in December; that paper received the award of the Best Paper of the conference. We are also grateful for the comments made by the Associate Editor and the anonymous Referees; their input helped in improving the quality of the final version of this paper. We thank them very much!
†School of Computer Science, Carleton University, Ottawa, Canada. The author is a Chancellor's Professor, a Fellow of the IEEE and a Fellow of the IAPR. He is also an Adjunct Professor with the University of Agder in Grimstad, Norway. The work of this author was partially supported by NSERC, the Natural Sciences and Engineering Research Council of Canada.

1. Introduction

This paper is a pioneering attempt to merge the areas of Self-Organizing Maps (SOMs) with the theory of Adaptive Data Structures (ADSs). Put in a nutshell, we can describe the goal of this paper as follows. Consider a SOM in which, rather than the neurons merely possessing information about the feature space, we also attempt to link them together by means of an underlying data structure, which could be a singly-linked list, a doubly-linked list, a Binary Search Tree (BST), etc. Our intention is that while the neurons are governed by the laws of the SOM, the underlying data structure is governed by its own laws. Observe that, consequently, the concepts of the neighborhood and of the Bubble of Activity (BoA) are based on the "nearness" of the neurons in the data structure rather than on their proximity in the feature space. Once this premise is accepted, our intent is to take the entire concept to a higher level of abstraction: we propose to modify the data structure itself adaptively, using operations specific to it. As far as we know, such a combination of concepts is unreported in the literature.

To proceed, and to place our results in the right perspective, it is probably wise to see how the concept of the neighborhood is defined in the SOM literature. Kohonen, in his book, mentions that it is possible to distinguish between two basic types of neighborhood functions. The first family involves a kernel function, usually Gaussian in nature. The second is the neighborhood set, also known as the Bubble of Activity (BoA). This paper focuses on the second type of neighborhood function.

Even though the traditional SOM is dependent on the neural distance to estimate the subset of neurons to be incorporated into the BoA, this is not always the case in the related literature. Indeed, different strategies have been described to define the BoA, and we mainly identify three of them. The first type of BoA uses the concept of the neural distance. In the case of the traditional SOM, once the Best Matching Unit (BMU) is identified, the neural distance is calculated by traversing the underlying structure that holds the neurons.
An important property is that the neural distance between two neurons is proportional to the number of connections separating them. Examples of strategies that use the neural distance to determine the BoA are the Growing Cell Structures (GCS), the Growing Grid, the Incremental Grid Growing (IGG), the Growing SOM (GSOM), the SOM itself, the Tree-Structured SOM (TS-SOM), the Hierarchical Feature Map (HFM), the Growing Hierarchical SOM (GHSOM), the Self-Organizing Tree Algorithm (SOTA), the Evolving Tree and the Tree-based Topology-Oriented SOM (TTOSOM), among others.

A second subset of strategies employs a scheme for determining the BoA which does not depend on the connections; instead, these strategies utilize the distance in the feature space. In these cases it is possible to distinguish between two types of Neural Networks (NNs). The simplest situation occurs when the BoA considers the BMU only, which constitutes an instance of hard competitive learning, as is the case of the TSVQ (Tree-Structured Vector Quantization) and the Self-Organizing Tree Map (SOTM). A more sophisticated and computationally expensive scheme involves the ranking of the neurons as per their respective distances to the stimulus. In this scenario the BoA is determined by selecting a subset of the closest neurons; an example of a SOM variant that uses such a ranking is the Neural Gas.

According to the authors of the respective variants included in the literature, these schemes attempt to tackle two main goals: either they try to design a more flexible topology, which is usually useful to analyze large datasets, or they attempt to reduce the task required by the SOM, namely the search for the BMU, when the input set is of a complex nature. In this paper we focus on the former of these two goals. In other words, our goal is to enhance the capabilities of the original SOM algorithm so that it represents the underlying data distribution and its structure in a more accurate manner. We also intend to do this by constraining the neurons so as to be related to each other, not only based on their neural indices and the stochastic distribution, but also based on a BST relationship. Furthermore, as a long term ambition, we anticipate that the proposed methods can also be used to accelerate the task of locating the nearest neuron during the BMU-search phase.

In this work we present the details of the design and implementation of how an adaptive process applied to a BST can be integrated into a SOM. Regardless of the fact that numerous variants of the SOM have been devised, only a small subset of them possess the ability of modifying the underlying topology. Moreover, only a small subset of these use a tree as their underlying structure, and such strategies attempt to dynamically modify the nodes of the SOM, for example, by adding nodes, which can be a single neuron or a layer of neurons. However, our hypothesis is that it is also possible to attain a better understanding of the unknown data distribution by performing structural modifications on the tree itself. Thus, although we preserve its general topology, we attempt to modify the overall configuration by altering the way in which the nodes are interconnected, while the structure yet continues to be a BST. We accomplish this by dynamically adapting the edges that connect the neurons, i.e., the nodes within the BST that holds the whole structure of neurons. As we will explain later, this is achieved by local modifications to the overall structure, done in a constant number of steps. Thus, in essence, we attempt to use rotations so as to improve the quality of the SOM.

1.1 Motivations

Acquiring information about a set of stimuli in an unsupervised manner usually demands the deduction of its structure. In general, the topology employed by any Artificial Neural Network (ANN) possessing this ability has an important impact on the manner in which it will absorb and display the properties of the input set. Consider the following example: a user may want to devise an algorithm capable of learning a triangle-shaped distribution, such as the one depicted in Figure 1. The SOM tries to achieve this by defining an underlying grid-based topology and by fitting the grid within the overall shape, as shown in Figure 1a. However, from our perspective, such a topology does not naturally fit the distribution, and thus one experiences a deformation of the original lattice during the modeling phase. As opposed to this, Figure 1b shows the result of applying one of the techniques developed in this paper, namely the TTOSOM, to the same distribution. The reader will observe that, in Figure 1b, the tree seems to be a far superior choice for representing this particular shape. (The operation of "rotation" in question here is the one associated with BSTs, and it will be presently explained.)

Figure 1: A triangle-shaped distribution learned through unsupervised learning: (a) the grid learned by the SOM; (b) the tree learned by the TTOSOM.

A closer inspection of Figure 1b shows that the complete tree fills the triangle formed by the set of stimuli, and it seems to do so uniformly. The final position of the nodes of the tree suggests that the underlying structure of the data distribution corresponds to the triangle. Additionally, the root of the tree is placed roughly at the center of mass of the triangle. It is also interesting to note that each of the three main branches of the tree covers one of the three areas of the triangle directed towards one of its vertices, respectively, and that each of them fills the surrounding space around it in a recursive
manner, and one can identify the same behavior in each of the sub-triangles. Of course, the triangle of Figure 1 serves only as a simple prima facie example, intended to demonstrate to the reader, in an informal manner, how our techniques try to learn a set of stimuli; indeed, the problems to which the techniques can be employed to extract the properties of the samples are numerous.

One could argue that imposing an initial topological configuration is not in accordance with the founding principles of unsupervised learning, a phenomenon that is supposed to occur without supervision, as within the human brain. Our initial response is that no supervision is required to enhance the training phase: the information we provide relates only to the initialization phase. This is, indeed, in line with the principle that little can be automatically learned about a data distribution if no assumptions are made at all.

The next step in motivating our research endeavor is to venture into a world in which the neural topology structure is itself learned during the training process. This is achieved by the method we propose in this paper, namely the TTOSOM with Conditional Rotations (TTOCONROT). In essence, the TTOCONROT dynamically extends the properties of the TTOSOM. To accomplish this, we need key concepts that are completely new to the field of SOMs, namely those related to the field of Adaptive Data Structures (ADSs). Indeed, as demonstrated by our experiments, the results that we have already obtained have been applauded, and to the best of our knowledge, results of this nature have remained unreported in the literature.

Another reason why we are interested in this integration deals with the issue of devising efficient methods to add neurons to the tree. Even though such schemes are not what we are currently proposing, as mentioned earlier, a paper which reported the preliminary results of this study received the Best Paper Award of a well-known international conference. In this paper we focus on the tree adaptation achieved by means of rotations; we, however, envision another type of dynamism, namely one that involves the expansion of the tree structure by the insertion of newly-created nodes. The literature considers different strategies to expand trees by inserting nodes, which can involve a single neuron, and which are essentially based on a quantization error measure, or strategies in which the error measure is based on the "hits", i.e., the number of times a neuron has been selected as the BMU. The strategy we have chosen for adapting the tree, namely the one that uses conditional rotations (CONROT), already utilizes such a BMU counter. As distinct from the previous strategies, which attempt to search for the node to be expanded, which in the case of SOMs is usually at the level of the leaves, we foresee and advocate a different approach for the TTOCONROT: the method asymptotically positions the frequently accessed nodes close to the root, and according to this property, it is the root node that should be split. Observe that if one were to follow the philosophy of the prior strategies, one would search for the node with the higher error measure; rather, CONROT will hopefully be able to migrate the candidates closer to the root. This, of course, works under the assumption that a larger number of hits indicates a degree of granularity for the particular neuron that justifies a refinement, and the concept of using the root of the tree for growing the SOM is, as far as we know, also pioneering.

1.2 Contributions of the Paper

The contributions of this paper can be summarized as follows:
- We present an integration of the fields of SOMs and ADSs; we respectfully submit that this is pioneering.
- The neurons of the SOM are linked together using an underlying tree structure, governed by the laws of the TTOSOM paradigm, and simultaneously, their restructuring adaptation is provided by the CONROT algorithm.
- The definition of the distance between neurons is based on the tree structure and not on the feature space; this is valid also for the BoA, rendering the migrations distinct from those of the state of the art.
- The adaptive nature of the TTOCONROT is unique, since the adaptation is perceived in two forms: the migration of the codebook vectors in the feature space, which is a consequence of the SOM update rule, and the rearrangement of the neurons within the tree, which is a result of the rotations.

1.3 Organization of the Paper

The rest of the paper is organized as follows. The next section surveys the relevant literature, which involves the field of SOMs, including the instantiations that use tree-like topologies, and the respective field of BSTs augmented with conditional rotations. Section 3 provides the explanation of the TTOCONROT philosophy, which is our primary contribution. The subsequent section shows the capabilities of our approach by means of a series of experiments, and the final section concludes the paper. For the sake of space, the literature review is considerably condensed; however, given that no comprehensive survey of this area of SOMs has been reported in the literature, we are currently preparing a paper that summarizes the field.

2. Literature Review

2.1 The SOM

The SOM is one of the most important families of ANNs used to tackle clustering problems. As is well known, the SOM is typically trained using unsupervised learning so as to produce a neural representation in a space whose dimension is usually smaller than the one in which the training samples lie. Further, the neurons attempt to preserve the topological properties of the input
space. The SOM concentrates the information contained in a set of input samples belonging to a d-dimensional space by utilizing a much smaller set of neurons, each of which is represented as a vector. Each of the neurons contains a weight vector w ∈ R^d associated with it; these vectors are synonymously called "weights", "prototypes" or "codebook" vectors, and the vector w may be perceived as the position of the neuron in the feature space. During the training phase, the values of these weights are adjusted simultaneously so as to represent the data distribution and its structure. In each training step, a stimulus, i.e., a representative input sample of the data distribution, is presented to the network, and the neurons compete so as to identify the "winner", also known as the Best Matching Unit (BMU). After identifying the BMU, a subset of the neurons "close" to it are considered to be within the Bubble of Activity (BoA), which depends on a parameter specified by the algorithm, namely the radius. Thereafter, the scheme performs a migration of the codebooks within the BoA so that their positions are closer to the sample being examined. The magnitude of this migration is affected by a multiplying factor known as the learning rate, which is typically expected to be large initially and to decrease as the algorithm proceeds, ultimately resulting in no migration at all. Algorithm 1 describes the details of the SOM philosophy.

The parameters of Algorithm 1 are scheduled by defining a sequence in which each entry corresponds to a tuple that specifies the learning rate and the radius for a fixed number of training steps. The way in which the parameters decay is not specified in the original algorithm, and alternatives exist in which the parameters remain fixed, or decrease linearly, exponentially, etc.

Algorithm 1: SOM
Input: the input sample set X and a schedule of the parameters.
Method:
1. Initialize the weights, for instance, by randomly selecting elements of X.
2. Repeat:
 (a) Obtain a sample x from X.
 (b) Find the winner neuron, i.e., the one most similar to x.
 (c) Determine a subset of neurons close to the winner (the BoA).
 (d) Migrate the closest neuron and its neighbors towards x.
 (e) Modify the learning factor and radius as per the schedule.
 until no noticeable changes are observed.
End Algorithm
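To make the steps of Algorithm 1 concrete, the following minimal Python sketch (our own illustration; the grid shape, the schedule and the Manhattan-distance bubble are arbitrary choices, not prescribed by the algorithm) implements the training loop for a rectangular lattice:

```python
import numpy as np

def train_som(samples, grid_shape=(10, 10), alpha=0.5, radius=5,
              epochs=100, decay=0.95):
    """Minimal sketch of Algorithm 1 (the classic SOM) on a grid."""
    rng = np.random.default_rng(0)
    # Step 1: initialize codebooks with randomly selected input samples.
    w = samples[rng.integers(len(samples), size=grid_shape)].astype(float)
    rows, cols = np.indices(grid_shape)
    for _ in range(epochs):
        for x in samples:
            # Step 2b: the winner (BMU) is the codebook closest to x.
            d = np.linalg.norm(w - x, axis=-1)
            bi, bj = np.unravel_index(np.argmin(d), grid_shape)
            # Step 2c: the bubble of activity consists of the lattice
            # positions within `radius` grid steps of the BMU.
            boa = (np.abs(rows - bi) + np.abs(cols - bj)) <= radius
            # Step 2d: migrate the bubble towards the stimulus.
            w[boa] += alpha * (x - w[boa])
        # Step 2e: the schedule -- both parameters decay slowly.
        alpha *= decay
        radius = max(0, int(radius * decay))
    return w
```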
Although the SOM has demonstrated an ability to solve problems over a wide spectrum, it possesses some fundamental drawbacks. One of these drawbacks is that the user must specify the lattice a priori, which has the consequence that he must run the ANN a number of times to obtain a suitable configuration. Other handicaps involve the size of the maps, where a lesser number of neurons often represent the data inaccurately. Several approaches attempt to render the topology more flexible, so as to represent complicated data distributions in a better way, and/or to make the process faster by, for instance, speeding up the task of determining the BMU.

The SOM has demonstrated to be useful over a vast number of domain fields. Compendiums of articles that take advantage of the properties of the SOM have been collected in survey papers, which classify the publications related to the SOM according to their year of release: one report includes a bibliography of the papers published up to a given year, another report includes the analogous papers published afterwards, and additional recent references, including related work, have been collected in a technical report. The recent literature reports a host of application domains, including medical image processing, human eye detection, handwriting recognition, image segmentation, information retrieval, object tracking, etc.

2.2 Tree-Based SOMs

Although an important number of variants of the original SOM have been presented through the years, we focus our attention on a specific family of enhancements, namely those in which the neurons are inter-connected using a tree topology.

The TSVQ (Tree-Structured Vector Quantization) algorithm is a SOM variant whose topology is defined a priori and remains static; the training first takes place at the highest levels of the tree. The TSVQ incorporates the concept of a frozen node, which implies that after a node has been trained for a certain amount of time it becomes static, and the algorithm then allows its subsequent units, i.e., its direct children, to be trained. The strategy utilizes a heuristic search algorithm for rapidly identifying the BMU: it starts from the root and recursively traverses a path towards the leaves; whenever the unit currently being analyzed is frozen, the algorithm identifies the child closest to the stimulus and performs a recursive call. The algorithm terminates when the node currently being analyzed is not frozen, and this node, currently being trained, is returned as the BMU.

Koikkalainen and Oja, in their paper, refined the idea of the TSVQ by defining the TS-SOM, which inherits the properties of the TSVQ but redefines the search procedure and the BoA. In this case, the TS-SOM consists of SOM layers of different dimensions arranged in a pyramidal shape, which can be perceived as SOMs with different degrees of granularity. The TS-SOM differs from the TSVQ in the sense that the BMU search is not restricted to the direct proximity of the unit being examined, and the BoA also differs: instead of considering the BMU only, the direct neighbors of the BMU within the pyramid are also considered.

The Self-Organizing Tree Algorithm (SOTA) is a dynamically growing SOM which, according to its authors, takes analogies from the Growing Cell Structures (GCS). SOTA utilizes a binary tree as its underlying structure and, similarly to strategies such as the TS-SOM and the Evolving Tree explained below, considers the migration of the neurons that correspond to leaf nodes only. Within the tree structure, the BoA depends on the neural tree and is defined for two cases: the general case occurs when the parent of the BMU is not the root, in which situation the BoA is composed of the BMU, its sibling and its parent node; otherwise, the BoA constitutes the BMU only. SOTA also triggers a growing mechanism, and it utilizes an error measure to determine the node to be split into two new descendants.

The authors of the GHSOM presented a SOM called the Growing Hierarchical SOM, in which each node corresponds to an independent SOM, and where the expansion of the structure is dual: the first type of adaptation is conceived by inserting new rows or columns into the SOM grid currently being trained, while the second type is implemented by adding new layers to the hierarchical structure. Both types of dynamism depend on the verification of quality measures.

The SOTM (Self-Organizing Tree Map) is a SOM which is also inspired by the Adaptive Resonance Theory (ART). In the SOTM, if the input is within a threshold distance from the BMU, the latter is migrated; otherwise, a new neuron is added to the tree. Thus, in the SOTM, whether a subset of neurons is migrated or not depends on the distance in the feature space and not on the neural distance, as in other SOM families.

Within the same family, the authors of the Evolving Tree take advantage of a search procedure adapted from the one utilized by the TSVQ so as to identify the BMU in O(log n) time, where n is the number of neurons in the set. The Evolving Tree adds neurons dynamically and incorporates the concept of a frozen neuron explained above; such a node does not participate in the training process and is thus removed from the BoA, similarly to the TSVQ.

The Tree-based Topology-Oriented SOM (TTOSOM), which is central to this paper, is a tree-based SOM in which a node can possess an arbitrary number of children. Furthermore, it is assumed that the user has the ability to describe a tree whose topological configuration is preserved through the training process. The TTOSOM uses a particular BoA which includes all the nodes, and not only the leaf ones, within a certain neural distance (radius). An interesting property displayed by this strategy is its ability to reproduce the results obtained by Kohonen when the nodes of the SOM are arranged linearly, i.e., in a list: in this case the TTOSOM is able to adapt this grid to 2- and 3-dimensional objects in the same way as the SOM algorithm, a phenomenon not possessed by prior hierarchical networks reported in the literature. Additionally, if the original topology of the tree is followed, the overall shape of the data distribution is learned, and, as the results reported there (and also depicted in our motivational section) showed, it is also possible to obtain a symmetric topology for the codebook vectors. In a recent work, the authors enhanced the TTOSOM so as to perform classification in a semi-supervised fashion: the method presented first learns the data distribution in an unsupervised manner, and when labeled instances become available, the clusters are labeled using this evidence. According to the results presented there, only a small number of neurons, and a small portion of the cardinality of the input set, are required to accurately predict the category of novel data. The details of how this is achieved, including an explanation of the techniques which fail to achieve this task, are presented in the prior literature.

2.3 Adaptive Data Structures and BSTs

We now turn to Adaptive Data Structures (ADSs), and, in particular, to BSTs. One of the primary goals in the area of ADSs is to achieve an optimal arrangement of the elements, which are placed at the nodes of the structure, as the number of iterations increases. This reorganization can be perceived to be both automatic and adaptive, i.e., the convergence tends towards an optimal configuration with a minimum average access time. In most cases, the most probable element will be positioned at the root (or head) of the tree (or structure), while the rest of the tree is recursively positioned in the same manner. The solution to obtain the optimal BST is well known when the access probabilities of the nodes are known a priori. However, our research concentrates on the case when these access probabilities are not known a priori. In this setting, one of the most effective solutions is due to Cheetham et al., and it uses the concept of conditional rotations (CONROT), which reorganizes the BST so as to asymptotically produce its optimal form. Additionally, unlike most of the algorithms otherwise reported in the literature, a move is not necessarily done on every data access operation, and whenever a restructuring move is performed, the overall weighted
path length (WPL) of the resulting BST decreases.

A BST may be used to store records whose keys are members of an ordered set; the records are stored in such a way that a symmetric-order traversal of the tree yields the records in an ascending order of their keys. Given a set of access probabilities, the problem of constructing efficient BSTs has been extensively studied, and the optimal algorithm, due to Knuth, uses dynamic programming to produce the optimal BST using O(n^2) time and space. In this paper we consider the scenario in which the access probability vector is not known a priori, and we seek a scheme which dynamically rearranges itself and asymptotically generates a tree that minimizes the access cost of the keys.

The primitive tree-restructuring operation used in most BST schemes is the well-known operation of rotation. We describe this operation as follows. Suppose that there exists a node i in the BST, with a parent node p(i), a left child and a right child, where the function p(·) relates a node to its parent, when the latter exists; one can also speak of the sibling of i, i.e., the other node that shares the parent p(i), if it exists. Consider the case in which i is the left child of p(i) (see Figure 2). The rotation performed on node i works as follows: p(i) becomes the right child of i, and the old right child of i becomes the new left child of p(i); all the other nodes remain in their relative positions (see Figure 2). The case in which i is the right child of p(i) is treated in a symmetric manner. This operation has the effect of raising (or promoting) the specified node within the tree structure while preserving the lexicographic order of the elements.

Figure 2: (a) a BST before the rotation is performed, where the contents of the nodes are their data values, in this case characters; (b) the tree after the rotation is performed on the marked node.

We refer to a tree as a reorganizing one if it uses such an operation to restructure itself. Various memory-less reorganizing schemes are presented in the literature; our review here is necessarily brief, and a more detailed version can be found in the related literature. Among the simple exchange rules is the heuristic in which, each time a record is accessed, a single rotation is performed on it in the upwards direction. Another, the move-to-root rule, repeatedly rotates the accessed record upwards until it becomes the root of the tree. Sleator and Tarjan introduced a technique which also moves the accessed record to the root of the tree, using a restructuring operation called "splaying", which is actually a generalization of the rotation. Their structure, called the splay tree, was shown to have an amortized time complexity of O(log n) for a complete set of tree operations, which includes insertion, deletion, access, split and join. The literature also records various schemes which adaptively restructure the tree with the aid of additional memory locations; prominent among these are the monotonic tree of Mehlhorn and a dynamic version of the tree-structuring method originally suggested by Knuth.

In spite of their advantages, the schemes mentioned above have drawbacks, some more serious than others. The splaying and move-to-root rules have one major disadvantage: they always move the accessed record to the root of the tree. This means that even if a nearly-optimal arrangement is reached, a single access of a seldomly-sought record may disarrange the tree along the entire access path as the element is moved upwards to the root. As opposed to these schemes, the monotonic tree rule does not move the accessed element to the root every time; however, it has been reported that in practice it does not perform well. Its weakness lies in the fact that it considers only the frequency counts of the records, which leads to the undesirable property that a single rotation may move a subtree with a relatively large probability weight downwards, thus increasing the cost of the tree.

This paper uses a particular heuristic, namely the one based on conditional rotations for BSTs, which has been shown to reorganize a BST so as to asymptotically arrive at its optimal form. The optimized version of this scheme, referred to here as the CONROT algorithm, requires the maintenance of a single memory location per record, which keeps track of the number of accesses to the subtree rooted at that record. The algorithm specifies when the accessed element should be rotated towards the root of the tree so as to minimize the overall cost of the entire tree. Finally, unlike most of the algorithms currently in the literature, a move is not done on every data access operation, and whenever a rotation is performed, the overall WPL of the resulting BST decreases.

In essence, the CONROT algorithm attempts to minimize the WPL by incorporating the statistical information about the accesses to the various nodes and to the subtrees rooted at the corresponding nodes. The basic condition for a rotation is that the WPL of the entire tree must decrease as a result of the single rotation; this is what is achieved by a so-called conditional rotation. To define this condition, let τ(i) denote the total number of accesses to the subtree rooted at node i. One of the biggest advantages of this heuristic is that it requires the maintenance and processing of only the values stored at the specific node and its direct neighbors, i.e., its parent and both children, whenever they exist.
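The rotation primitive, together with the counter maintenance it requires, can be sketched as follows (a minimal Python illustration; the class and field names are ours). Note that only the counters of the rotated node and of its old parent change:

```python
class Node:
    """BST node augmented with tau: the total number of accesses to the
    subtree rooted at this node (used by the CONROT criterion below)."""
    def __init__(self, key):
        self.key = key
        self.left = self.right = self.parent = None
        self.tau = 0

def tau(node):
    return node.tau if node is not None else 0

def rotate_up(i):
    """Promote node i one level via a single rotation, preserving the
    in-order (lexicographic) arrangement of the keys."""
    p, g = i.parent, i.parent.parent
    if p.left is i:                       # i is a left child
        b = i.right                       # subtree transferred from i to p
        p.left, i.right = b, p
    else:                                 # i is a right child (mirror case)
        b = i.left
        p.right, i.left = b, p
    if b is not None:
        b.parent = p
    # Counter maintenance: only tau(i) and tau(p) change.
    new_tau_p = p.tau - i.tau + tau(b)    # p loses i's subtree, regains b
    i.tau, p.tau = p.tau, new_tau_p
    i.parent, p.parent = g, i
    if g is None:
        pass                              # i is the new root; caller updates it
    elif g.left is p:
        g.left = i
    else:
        g.right = i
    return i
```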
Algorithm 2 formally describes the process of the conditional rotations for a BST. The algorithm receives two parameters: the first corresponds to a pointer to the root of the tree, and the second corresponds to the key being searched for, which is assumed to be present in the tree. The node whose access is requested is sought from the root towards the leaves.

Algorithm 2: CONROT-BST(j, k)
Input: (i) a pointer j to the root of a binary search tree T; (ii) a search key k, assumed to be in T.
Output: the restructured tree T and a pointer to the record containing k.

The first task accomplished by the algorithm is the updating of the counter τ of the present node, which is incremented along the path traversed. The next step consists of determining whether the node with the requested key has been found. When this occurs, the quantities defined by the accompanying equations are computed so as to determine the value of a quantity, ψ, which, when the accessed node is the left child of its parent, involves the counters of the node itself, its parent, and its right descendant; a mirror-image quantity is used when it is a right child. This quantity equals the change in the WPL of the entire tree that the upward rotation of the accessed node would cause: for a left child i with parent p(i), ψ = τ(p(i)) − 2τ(i) + τ(i_R), where i_R is the right child of i. Whenever ψ is less than zero, an upward rotation is performed; the authors of the scheme have shown that such a single rotation leads to a decrease in the overall WPL of the entire tree. When this is the case, the algorithm invokes the rotate-upwards method with a pointer to the node as its parameter. This method performs the operations required to rotate the node upwards which, when the node is the left child of its parent, is equivalent to performing a right rotation on the parent (and, analogously, a left rotation when it is a right child). After the rotation takes place, it is necessary to update the corresponding counters; fortunately, this task only involves updating the counter of the rotated node and that of its parent. The last part of the algorithm deals with the case in which the search key has not yet been found, which is handled recursively by branching left or right as per the key comparison. The reader should observe that all the tasks invoked by the algorithm are performed in constant time per level and, in the worst case, the recursive calls proceed from the root down to a leaf, leading to a running complexity of O(h), where h is the height of the tree.
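The conditional-rotation criterion described above can be sketched in a few lines, reusing the `Node`, `tau` and `rotate_up` helpers from the previous sketch (again, an illustration of the idea rather than the authors' exact code):

```python
def conrot(node):
    """Promote `node` one level only if this decreases the WPL of the
    whole tree. For a left child i with parent p, the change in WPL is
    psi = tau(p) - 2*tau(i) + tau(i.right) (mirror case for a right
    child); the rotation is applied only when psi < 0."""
    p = node.parent
    if p is None:
        return False                      # the root cannot be promoted
    if p.left is node:
        psi = p.tau - 2 * node.tau + tau(node.right)
    else:
        psi = p.tau - 2 * node.tau + tau(node.left)
    if psi < 0:
        rotate_up(node)
        return True
    return False

def access(root, key):
    """CONROT access: walk from the root to `key` (assumed present),
    incrementing the subtree counters on the way, and then
    conditionally rotate the accessed node one level rootwards."""
    node = root
    while node.key != key:
        node.tau += 1
        node = node.left if key < node.key else node.right
    node.tau += 1
    conrot(node)
    return node
```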
3. The TTOSOM with Conditional Rotations (TTOCONROT)

This section concentrates on the details of the integration of the fields of ADSs and SOMs, and in particular of the CONROT heuristic and the TTOSOM. Although the merging of ADSs and SOMs is relevant for a wide spectrum of data structures, to focus the scope, we consider tree-based structures only, and we shall concentrate specifically on the integration of the CONROT heuristic and the TTOSOM, both explained in the preceding sections. Conceptually, we can distinguish our method, the TTOCONROT, by its components and by its properties. In terms of components, we detect five main elements. First, the TTOCONROT has a set of neurons which, like in all SOM-based methods, represents the data space in a condensed manner. Secondly, the TTOCONROT possesses a connection between the neurons, where the neighbors of any specific neuron are based on a nearness measure that is tree-based. The third and fourth components involve the migration of the neurons: as in the reported families of SOMs, a subset of the neurons closest to the winning neuron are moved towards the sample point using a vector-quantization rule; however, unlike in the reported families of SOMs, the identity of the neurons that are moved is based on the tree-based proximity and not on the feature-space proximity. Finally, the TTOCONROT possesses tree-mutating operations, namely the conditional rotations.

With respect to its properties, we mention the following. First, the TTOCONROT is adaptive with regard to the migration of the points. Secondly, it is also adaptive with regard to the identity of the neurons moved. Thirdly, the distribution of the neurons in the feature space mimics the distribution of the sample points. Finally, by virtue of the conditional rotations, the entire tree is optimized with regard to the overall accesses, which is a unique phenomenon as compared to the reported family of SOMs, as far as we know.

As mentioned in the introductory section, the dynamic adaptation of SOM lattices reported in the literature essentially considers adding (and in some cases deleting) nodes. However, the concept of modifying the underlying structure while maintaining its general "shape" is unrecorded. Our hypothesis is that it is advantageous to do this by means of repositioning the nodes and their consequent edges, as one performs rotations on a BST. In other words, we place our emphasis on what occurs as a result of restructuring the tree representing the SOM. As alluded to earlier, the restructuring process is done on the connections between the neurons so as to attain an asymptotically optimal configuration, in which the nodes that are accessed more frequently tend to be placed close to the root. We thus obtain a new species of tree-based SOMs which is self-arranged by performing rotations conditionally, locally, and in a constant number of steps.

To formalize this, we define the Binary Search Tree SOM (BSTSOM) as a special instantiation of the SOM which uses a BST as its underlying topology, and the Adaptive BSTSOM (ABSTSOM) as a refinement of the BSTSOM whose training process employs a technique that automatically modifies the configuration of the tree. The goal of this adaptation is to facilitate and enhance the search process, an assertion that must be viewed from the perspective that, in a SOM, the neurons which represent areas of higher density are queried more often. Every ABSTSOM is characterized by the following properties: first, it is adaptive by virtue of the BST representation, where the adaptation is done by means of rotations rather than by merely deleting or adding nodes; second, while the neural network corresponds to a BST, the goal is that it maintains the essential stochastic and topological properties of the SOM.

3.1 The Neural Distance

As in the case of the TTOSOM, the neural distance between two neurons depends on the number of unweighted connections that separate them in the tree; consequently, it is the number of edges in the shortest path that connects the two given nodes. Explicitly, the distance between two nodes in the tree is defined as the minimum number of edges required to go from one to the other. In the case of trees, the fact that there is a single path connecting two nodes implies the uniqueness of the shortest path, which permits an efficient calculation of the distance through a node-traversal algorithm. Note, however, that unlike the case of the TTOSOM, where, since the tree is static, the distances can be computed a priori, simplifying the computational process, the situation changes when the tree is dynamically modified. We shall now explain the implications of a tree that describes a SOM and that is, simultaneously, dynamic. First, the siblings of a given node may change at every time instant. Secondly, the parents and, in general, the ancestors of the node under consideration could also change at every instant. More importantly, the structure of the tree itself could change, implying that nodes which were neighbors at a given time instant may not continue to be neighbors at the next. Indeed, in an extreme case, a node could even be migrated so as to become the root, and the fact of who its parent was at the previous time instant becomes irrelevant at the next. This, of course, changes the entire landscape, rendering the resultant SOM unique and distinct.

An example will clarify this. Figure 3 illustrates the computation of the neural distance in various scenarios. Figure 3a presents the scenario before a node is accessed; observe the distances depicted with dotted arrows, with an adjacent numeric index specifying the current distance from the accessed node. For example, prior to the access, two of the nodes are at the same distance from the accessed node even though they lie at different levels of the tree; the reader should also note that non-leaf nodes may be involved in the calculation. Figures 3b and 3c show the process by which the node is queried, which in turn triggers a rotation of that node upwards. Observe that the rotation requires only local modifications, leaving the rest of the tree untouched; for the sake of simplicity and explicitness, the unmodified areas of the tree are represented by dashed lines. Finally, Figure 3d depicts the configuration of the tree after the rotation has been performed at a subsequent time instant: the distance from the accessed node to some nodes has increased by unity and, moreover, although the accessed node has changed its position, its distance to some other nodes remains unmodified. Clearly, the original distances are not necessarily preserved as a consequence of the rotation.

Generally speaking, four regions of the tree remain unchanged after a rotation: the portion of the tree above the parent of the node being rotated, the portion of the tree rooted at the right child of the node being rotated, the portion rooted at its left child, and the portion rooted at its sibling. Even though these four regions remain internally unmodified, the neural distances between the regions are affected by the rotation, which could lead to a modification of the distances between their nodes. Another consequence of the operation worth mentioning is the following: the distance between two given nodes that belong to the same unmodified region of the tree is preserved after the rotation is performed. The proof of this assertion is obvious, inasmuch as every path between two nodes inside an unmodified region remains unchanged. This property is interesting because of its potential to accelerate the computation of the respective neural distances.

Figure 3: An example of the neural distance and a rotation: (a) two nodes are equidistant from the accessed node even though they are at different levels of the tree; (b), (c) the process of rotating the accessed node upwards; (d) the state of the tree after the rotation, where one distance has increased by unity while another remains unmodified.
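Because the distances can no longer be precomputed once the tree is dynamic, they are recomputed from the current parent pointers. The following sketch (an illustrative helper of our own) computes the neural distance by climbing towards the lowest common ancestor of the two nodes:

```python
def neural_distance(a, b):
    """Number of tree edges on the unique path between neurons a and b,
    computed from the *current* parent pointers of the dynamic tree."""
    depth = {}
    d = 0
    while a is not None:                  # record the ancestors of a
        depth[id(a)] = d
        a, d = a.parent, d + 1
    d = 0
    while id(b) not in depth:             # climb from b to the common ancestor
        b, d = b.parent, d + 1
    return d + depth[id(b)]
```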
3.2 The Bubble of Activity

A concept closely related to the neural distance is the one referred to as the Bubble of Activity (BoA), which is the subset of nodes within a distance of r away from the node currently being examined; these are, in essence, the nodes to be migrated toward the signal presented to the network. This concept is valid for NNs in general, and for the TTOSOM in particular, and we shall now consider how the bubble is modified in the context of rotations. Formally, the BoA can be defined as

B(v_i; T, r) = { v ∈ V | d(v, v_i) <= r },

where v_i is the node currently being examined and v is an arbitrary node in the tree T, whose set of nodes is V. Note that this definition generalizes the special case in which the tree is a simple directed path, i.e., a unidimensional topology.

We first describe how the bubble is obtained in the context in which the tree is static. As presented for the TTOSOM, the function TTOSOM_Calculate_Neighborhood, given here as Algorithm 3, specifies the steps involved in the calculation of the subset of neurons that are part of the neighborhood of the BMU. The computation involves a collection of parameters, including the current subset of neurons, the proximal neuron being examined (initially the BMU) and the current radius of the neighborhood. When the function is invoked for the first time, the set contains only the BMU, which is marked as the current node.

Algorithm 3: TTOSOM_Calculate_Neighborhood(B, v, r)
Input: (i) B, the set of nodes in the BoA identified so far; (ii) v, the current node whose proximity is being examined; (iii) r, the current radius of the BoA.
Output: the set of nodes in the BoA.
Method:
1. If r = 0, return B.
2. Otherwise, for each node u which is a child of v, or the parent of v (when these exist), and which is not already in B: add u to B, and recursively invoke TTOSOM_Calculate_Neighborhood(B, u, r − 1).
3. Return B.
End Algorithm

The algorithm stores the entire set of units within a radius r of the BMU by recursively traversing the tree through the direct topological neighbors of the current node, i.e., in the direction of its direct parent and children. Every time a new neuron is identified as part of the neighborhood, it is added to the set and a recursive call is made with the radius decremented by one, marking the recently added neuron as the current node. The question of whether or not a neuron is part of the current bubble thus depends on the number of connections that separate the nodes, rather than on the distance that separates them in the solution space (for instance, the Euclidean distance).

Figure 4 depicts how the BoA differs from the one defined for the TTOSOM as a result of applying a rotation. Figure 4a shows the BoA around a node using a particular configuration of the tree, before any rotation takes place: with a radius of unity the BoA involves a small set of nodes; subsequently, considering a radius equal to 2 the resulting BoA contains additional nodes; and larger radii lead to BoAs that include ever larger sets and, eventually, the whole set of nodes. Figure 4b corresponds to the BoA around the same node after a rotation upwards has been effected on it. In this case, with a radius of unity, the nodes contained in the bubble differ from those of the corresponding bubble before the rotation was invoked; similarly, we obtain different sets for the subsequent radii, analogous to the previous case. Note that, coincidentally for this example, for one particular radius the bubbles before and after the rotation are identical, and that, trivially, for a sufficiently large radius the BoA invokes the entire tree, a fact that ensures that the recursion of Algorithm 3 reaches its base case.

Figure 4: The BoA associated with the TTOSOM tree (a) before, and (b) after, a rotation is invoked at a node.

The criteria given by the equation above describe how the BoA is calculated for a static tree. What happens as a result of the conditional rotations is that the tree is dynamically adapted, and the entire phenomenon, and consequently the BoA around a particular node, becomes a function of "time". To reflect this fact, the equation can be reformulated with a discrete time index t as B(v_i; T(t), r). The algorithm used to obtain the BoA of a specific node at time t is identical to Algorithm 3, except that the input tree is the one that has been dynamically modified up to that time. Even though the formal notation includes the time parameter, the latter is never needed in practice, since no computation requires the history of the BoAs of the nodes; storing such a history would require the maintenance of, primarily, the store of the changes made to the tree. Although storing the history of the changes made to the tree can be done optimally, the question of explicitly storing the entire history of the BoAs of the nodes of the tree remains open.
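The recursion of Algorithm 3 can be sketched as follows for the binary-tree case relevant to the TTOCONROT (our own illustration, reusing the `Node` fields defined earlier):

```python
def calculate_neighborhood(node, radius, boa=None):
    """TTOSOM-style bubble of activity: collect every neuron within
    `radius` tree edges of `node`, walking recursively through the
    node's children and parent (a sketch of Algorithm 3)."""
    if boa is None:
        boa = {node}                      # first invocation: the BMU itself
    if radius == 0:
        return boa
    for nb in (node.left, node.right, node.parent):
        if nb is not None and nb not in boa:
            boa.add(nb)
            calculate_neighborhood(nb, radius - 1, boa)
    return boa
```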
Enforcing the BST property. Our heuristic requires that the tree possess the BST property: for a node v of a BST, the nodes of the left subtree of v have keys no larger than the key of v, and the nodes of the right subtree have keys no smaller than it. To see how a tree can be made to satisfy the BST property, first observe that, in general, the TTOSOM utilizes an arbitrary number of children per node. One possibility is to bound the value of the branching factor; in other words, the tree to be trained by the TTOSOM is restricted to contain at most two children per node. Additionally, the tree must implicitly involve a comparison operator between the two children so as to discern between the branches and thus be able to perform a search process. This comparison is achieved by defining a unique key that must be maintained at each node of the tree, which will, in turn, allow rotations. This is, of course, a severe constraint to force on the tree, but it is required: the phenomenon of achieving conditional rotations on arbitrary trees is unsolved research which is, however, currently being undertaken.

The lexicographical arrangement of the nodes leads to a different but closely related concept, which concerns the preservation of the topology during the SOM training process. The configuration of the tree changes as the tree evolves, positioning the nodes that are accessed more often closer to the root, and this ordering is hopefully preserved under the rotations. A particularly interesting case occurs when the imposed tree corresponds to a list of neurons. After such a tree has been trained by the TTOSOM, where each node has at most two children, the adaptive process may alter the original list: the rotations modify the original configuration, generating a new state in which the nodes might have one or two children. A consequence of incorporating these enhancements into the TTOSOM is that the results obtained can be significantly different from those of the basic algorithm, as shown in the experiments below.

It has been shown that an optimal arrangement of the nodes of a tree can be obtained using the probabilities of accesses; when these probabilities are not known a priori, a heuristic offers a solution, which involves the decision of whether or not to perform a single rotation towards the root every time a node is accessed. The concept of the accessed node is compatible with that of the corresponding BMU defined in our model: a neuron may be accessed more often than others. Techniques that take advantage of this phenomenon include strategies that add and delete nodes, and which implicitly store the information acquired by incrementing a counter at the currently accessed node, in a sense akin to the concept of the BMU counter used to add and delete nodes in competitive networks: during the training phase, a neuron that is a frequent winner gains prominence, in the sense that it represents more points of the original data set, and this phenomenon is registered by increasing the BMU counter of that neuron. What we propose is that, during the training phase, we verify whether it is worth modifying the configuration of the tree by moving this neuron one level towards the root, as per the conrot algorithm. Consequently, without explicitly recording the relevant role of a particular node with respect to the nearby neurons, the scheme achieves this by performing a local movement of the node involving only its direct parent and children; a sketch follows below.

We are not aware of any prior work on neural promotion: the process by which a neuron is relocated to a more privileged position in the network with respect to the other neurons. Thus, while all neurons are "born equal", their importance in the society of neurons is determined by what they represent, and is achieved by an explicit advancement of their rank or position. Given this premise, the nodes of the tree are adapted in such a way that neurons which are BMUs more frequently tend to move towards the root, and a reduction of the overall weighted path length (WPL) of the tree is obtained as a consequence of the promotions. The properties that guarantee the SOM and the BST are thereby tied together in a symbiotic manner, where one enhances the other and vice versa. The adaptation is achieved by affecting the configuration of the BST, a task performed every time a training step of the SOM is performed; clearly, this is a non-trivial task to achieve, and, as far as we know, no reported research deals with the issue of neural promotion — we thus believe the concept itself is pioneering for BST-based SOMs. The corresponding figure depicts the main architecture used to accomplish this: the scheme transforms the structure of the SOM by modifying the configuration of a BST, which in turn holds the structure of the neurons. This work constitutes a first attempt to constrain a SOM using a BST; the focus is placed on the nodes, in the sense that unique identifiers for the nodes are employed to maintain the BST structure and to promote the nodes that are accessed more frequently towards the root. We are currently examining ways to enhance the technique so as to improve the time required to identify the BMU as well.
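The following sketch illustrates the promotion idea. The access counter is the BMU-counter analogue discussed above; the specific criterion shown (promote when the node's count exceeds its parent's) is an illustrative stand-in for the exact CONROT weighted-path-length test, which we do not reproduce here:

```python
# Hedged sketch of neural promotion via a single conditional rotation.
def record_access_and_maybe_promote(node):
    node.hits += 1                       # BMU-counter analogue
    parent = node.parent
    # Illustrative promotion criterion (stand-in for the CONROT WPL test):
    if parent is not None and node.hits > parent.hits:
        rotate_up(node)                  # one local rotation, O(1) time

def rotate_up(node):
    """Standard BST rotation moving `node` one level toward the root."""
    p = node.parent
    g = p.parent
    if p.left is node:                   # right rotation about p
        p.left = node.right
        if p.left is not None:
            p.left.parent = p
        node.right = p
    else:                                # left rotation about p
        p.right = node.left
        if p.right is not None:
            p.right.parent = p
        node.left = p
    node.parent = g
    p.parent = node
    if g is not None:                    # reattach the rotated pair
        if g.left is p:
            g.left = node
        else:
            g.right = node
```

The rotation only touches the node, its parent, and their immediate children, which is what makes the restructuring a constant-time, purely local operation.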
Initialization. In the case of the TTOSOM, initialization is accomplished in two main steps, which involve defining the initial value of each neuron and the connections among them. The initialization of the codebook vectors is performed in the same manner as in the basic TTOSOM: the neurons assume an arbitrary starting value, for instance, by being placed on randomly selected input samples. On the other hand, a major enhancement with respect to the basic TTOSOM lies in the way the neurons are linked together. The basic definition of the TTOSOM utilizes connections that remain static over time; the beauty of such an arrangement is that it is capable of reflecting the user's perspective at time 0, describing the desired topology, and is able to preserve this configuration until the algorithm reaches convergence. The inclusion of rotations renders the tree dynamic, and only local information is required to adapt it.

In our proposed approach, the codebooks of the SOM correspond to the nodes of a BST. Apart from the information regarding the codebooks themselves (vectors in the feature space), each neuron requires the maintenance of additional fields to achieve the adaptation. Besides these, each node inherits the properties of a BST node, and thus includes a pointer to the left and right children as well as — to make the implementation easier — a pointer to the parent node. Each node also contains a label that is able to uniquely identify the neuron in the company of the other neurons; this identification index constitutes the lexicographical key used to sort the nodes of the tree, and it remains static as time proceeds. The corresponding figure depicts the fields included in a neuron of the proposed SOM.

Neural states. The different states a neuron may assume during its lifetime are illustrated in the corresponding figure. When a node is first created, it is assigned a unique identifier and the rest of its data fields are populated with initial values: the codebook vector assumes a starting value in the feature space, and the pointers are configured appropriately to link the neuron to the rest of the neurons in the tree per the BST configuration. Next, for a significant portion of the algorithm, the neuron enters the main loop where training is effected. The training phase involves adjusting the codebooks and may also trigger optional modules that affect the neuron. When a BMU is identified, the neuron might assume a restructured state, which means that the restructuring technique — the conrot algorithm — is applied. Alternatively, the neuron might be ready to accept queries as part of the mapping mode. Additionally, an option that we are currently investigating involves the case when a neuron is no longer necessary and may thus be eliminated from the main neural structure; we refer to this as the deleted state (depicted using dashed lines). Finally, we foresee an alternative state, referred to as the frozen state, in which the neuron does not participate in the training mode, although it may continue to be part of the overall structure.

The ttoconrot training module. The training module of ttoconrot is responsible for determining the BMU, performing the restructuring, calculating the BoA, and migrating the neurons within the BoA. Basically, what is done is to integrate the conrot algorithm into the sequence of steps responsible for the training phase of the TTOSOM; the training algorithm describes how this integration is accomplished. Its first line performs the first task of the algorithm, which involves determining the BMU; the next line invokes the conrot procedure. The rationale for this sequence of steps is the following: once the parameters needed to perform the conditional rotation are specified — including the key element to be queried for, which in the present context corresponds to the identity of the BMU — at this stage of the algorithm the BMU may or may not be rotated, depending on the optimizing criterion given by the conrot equations. After this, the BoA is determined and the restructuring is done, and finally the neural migration step oversees the movement of the neurons within the BoA towards the input sample.
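A sketch of one training step as just described; `find_bmu` and `conrot` are the modules discussed in the text, `calculate_neighborhood` is the BoA routine sketched earlier, and the migration uses the usual SOM update rule (an assumption on our part, consistent with the basic TTOSOM):

```python
# One ttoconrot training step for an input sample x.
def train_step(root, x, alpha, radius):
    bmu = find_bmu(root, x)          # 1. determine the best matching unit
    conrot(bmu)                      # 2. conditionally rotate it toward the root
    boa = calculate_neighborhood({bmu}, bmu, radius)   # 3. bubble of activity
    for neuron in boa:               # 4. migrate the bubble toward the sample
        neuron.w = neuron.w + alpha * (x - neuron.w)
```

Note that the BoA is computed on the tree as it stands after the conditional rotation, so the restructuring immediately influences which neurons migrate.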
Alternative restructuring techniques. Even though we have explained the advantages of the conrot algorithm, the architecture we are proposing allows the inclusion of alternative restructuring modules other than conrot; potential candidates that can be used to perform the adaptation include the ones mentioned earlier, among them splay algorithms.

Experimental results. We now illustrate the capabilities of our method. The experiments reported in the present work are primarily focused on lower-dimensional feature spaces, so as to help the reader geometrically visualize the results obtained. However, it is important to remark that the algorithm is also capable of solving problems in higher dimensions, although a graphical representation of the results would not be as illustrative. We know from prior results that the TTOSOM is capable of inferring the distribution and structure of the data; in the present setting, we are interested in investigating the effects of applying neural rotations as part of the training process. To render the results comparable to the experiments of the earlier work, we use the same schedule for the learning rate and radius; no refinement of the parameters was done for any specific data set. Additionally, the decay parameters follow a rather slow decrement, allowing us to understand how the prototype vectors are moved as convergence takes place. For solving practical problems, we recommend a refinement of the parameters to increase the speed of the convergence process.

The structure-learning capabilities of ttoconrot. We shall describe the performance of ttoconrot on data sets in one, two, and three dimensions, as well as on a multidimensional domain; the specific advantages of the algorithm in the various scenarios are also highlighted.

One-dimensional objects. Since the entire learning paradigm assumes a tree-shaped data model, our first attempt was to see whether the philosophy is relevant for a unidimensional object, i.e., a curve, which really possesses a linear topology. Thus, as a prima facie case, we tested the strength of ttoconrot by making it infer the properties of data sets generated from linear functions in the plane. The corresponding figure shows different snapshots of how ttoconrot learns the data generated from such a curve. A random initialization was used, uniformly drawing points from the unit square; observe that none of these initial points lie on the curve — the aim is to show that the algorithm can learn the structure of the data starting from arbitrary initial values of the codebook vectors. The intermediate snapshots depict the middle phases of the training process (the edges connecting the neurons are omitted for simplicity). It is interesting to see that after a hundred training steps the original chaotic placement of the neurons has been rearranged so that they fall within the line described by the data points. The final configuration shows that, at convergence, the neurons are placed almost equidistantly along the curve. Even though the codebooks are sorted in increasing numerical order in the hidden tree, whose root is denoted by two concentric squares, the tree has been configured in such a way that the nodes queried more frequently tend to be closer to the root. In this sense, the algorithm captures the essence of the topological properties of the data set while, at the same time, rearranging the internal order of the neurons according to their importance in terms of the probabilities of access.

Two-dimensional data points. To demonstrate the power of including an ADS in SOMs, we shall first consider data sets studied previously. Consider the data generated from the triangular distribution shown in the corresponding figures. In this case, the initial tree topology is a unidirectional list — although this is realistically quite inadvisable considering that the true (unknown) topology of the distribution is not a list, it represents the situation in which the user has no a priori information about the data distribution. Thus, in the initialization phase, a list-shaped tree structure is employed and the respective keys are assigned in increasing order; observe that in this way we provide only minimal information to the algorithm. The root of the tree is marked with two concentric squares and each neuron is labeled with its index. With regard to the feature space, the prototype vectors are initially randomly placed. After the first iterations, the linear topology is lost, which is attributable to the randomness of the data points, and the prototypes migrate and are

Figure: a tree with list topology learns the curve; snapshots after increasing numbers of iterations (for the sake of simplicity, the edges are omitted).

reallocated as training proceeds. From the subsequent snapshots one sees how the tree is modified as a consequence of the rotations — a transformation that is completely novel to the field of SOMs. The final snapshot depicts the case after convergence has taken place: the tree nodes are uniformly distributed over the entire triangular domain, the BST property is still preserved, and rotations are still possible if the training process continues. This experiment serves as an excellent example to show the differences between the current method and the original TTOSOM algorithm, for which a similar data set and settings were utilized. In the case of ttoconrot, the points effectively represent the entire data set; however, the reader must observe that we did not provide the algorithm with any particular a priori information about the structure of the data distribution — it was learned during the training process. Thus, the specification of an initial tree topology representing the user's perspective of the data space, required by the TTOSOM, is no longer mandatory; the alternative specification requires only the number of nodes of the initial tree, which is sufficient.

The second experiment involves a Gaussian distribution. The way a Gaussian ellipsoid is learned by the ttoconrot algorithm, over the entire training execution phase up to convergence, is displayed in the corresponding figure. This experiment considers a complete BST of fixed depth; for simplicity, the labels of the nodes are removed. The tree structure generated by the neurons suggests the ellipsoidal structure of the data distribution. The experiment
is a good example of how the nodes close to the root represent the dense areas of the ellipsoid while, at the same time, the nodes far from the root in the tree occupy the regions of low density at the extremes of the ellipse. Again, ttoconrot infers this structure without receiving any a priori information about the distribution or its structure.

The next experiment, shown in the corresponding figures, considers data generated from an irregular shape with a concave surface. As in the experiments described earlier, the original tree includes neurons arranged unidirectionally, i.e., in a list, and as a result of training the distribution is learned.

Figure: a tree with list topology learns the triangular distribution; snapshots after increasing numbers of iterations. Nodes accessed more frequently are conditionally moved closer to the root, and the BST property is preserved.

Figure: the tree learns a Gaussian distribution; neurons accessed more frequently are promoted closer to the root and the tree is adapted accordingly.

As illustrated in the figure, the random initialization is performed by randomly selecting points from the unit square; these points thus do not necessarily fall within the concave boundaries of the shape. Although the initialization scheme is responsible for placing codebook vectors outside the irregular shape, the reader can observe that after a number of training steps they are repositioned inside its contour. It is important to indicate that, even at the convergence of the algorithm, a line connecting two points may pass outside the overall (unknown) shape. One must take into account that the ttoconrot tree attempts to mimic the stochastic properties of the data in terms of the access probabilities; if the user desires a topological mimicry in terms of the skeletal structure, we recommend the use of the TTOSOM instead. The final distribution of the points is, in our opinion, quite amazing.

Figure: a tree with list topology learns different distributions — here a concave object — using the ttoconrot algorithm with the same set of parameters as in the previous examples; snapshots after increasing numbers of iterations.

Three-dimensional data points. We now explain the results obtained when applying the algorithm with and without conrot, and to do so we consider 3D objects. The experiments utilize data generated from the contour of the unit sphere, and the tree again initially involves a chain of neurons. Additionally, in order to show the power of the algorithm in these cases too, we initialize the codebooks by randomly drawing points from the unit cube, which thus initially places points outside the sphere. The first set of figures presents the case in which the basic TTO algorithm learns the unit sphere without performing conditional rotations. The snapshot taken after the first iteration is completed shows the codebooks lying inside the unit cube, although the neurons are positioned outside the boundary of the respective circumscribed sphere that one wants to learn. Secondly, the intermediate snapshots depict steps of the learning phase: as the algorithm processes the information provided by the sample points, the neurons are repositioned, and the chain of neurons is constantly twisted so as to adequately represent the entire manifold. Finally, the last snapshot illustrates the case when convergence is reached: the list of neurons is evenly distributed over the sphere, preserving the original properties of the object while also presenting a shape that reminds the viewer of a Peano curve.

A complementary set of experiments, which involved learning the unit sphere with the TTO scheme augmented with conditional rotations (conrot), was also conducted. The first snapshot of this set shows the initialization of the codebooks, where the starting positions of the neurons again fall within the unit cube. The subsequent snapshots, taken after increasing numbers of iterations, show that the tree configurations obtained in the intermediate phases differ significantly from those obtained in the corresponding configurations that involved no rotations: in this case, the list rearranges itself as per conrot, modifying the original chain structure to yield

Figure: a tree with list topology learns the sphere distribution when the algorithm does not utilize conditional rotations; snapshots after increasing numbers of iterations.

a more balanced tree. Finally, the results obtained at convergence make it possible to compare the two scenarios. In both cases, one can see that the sphere is accurately learned; however, in the first case the structure of the nodes is maintained as a list throughout the learning phase, while in the case where conrot is applied the configuration of the tree is constantly revised, promoting the neurons that are queried more frequently. Additionally,
the experiments show that the dimensionality reduction property evidenced in the traditional SOM is also present in ttoconrot: the 3D object domain is successfully learned by the algorithm, and the properties of the original manifold are captured from the perspective of the tree.

Figure: a tree with list topology learns the sphere distribution when conditional rotations are employed; snapshots after increasing numbers of iterations.

Multidimensional data points. The well-known Iris data set was chosen for showing the power of the scheme in a scenario where the dimensionality is increased. This data set gives the measurements, in centimeters, of four variables — sepal length, sepal width, petal length, and petal width — for 150 flowers from the three species of the Iris family: Iris setosa, versicolor, and virginica. In this set of experiments, the Iris data set was learned with three different configurations, using a fixed schedule for the learning rate and radius and a distinct tree configuration in each case. The results of the experiments, depicted in the corresponding figure, involve complete binary trees of increasing depths. Taking into account that the data set possesses a high dimensionality, we present a projection onto a lower-dimensional space to facilitate visualization, and we have also removed the labels of the nodes to improve understandability.

Figure: three different experiments in which ttoconrot effectively captures the fundamental structure of the Iris data set; a projection of the data is shown, and each experiment utilizes as the underlying tree topology a complete binary tree with a different depth (and hence a different number of nodes).

These examples show that exactly the same parameters of ttoconrot can be utilized to learn the structure of data belonging to higher-dimensional spaces as well. After executing the algorithm, the main branches of the tree migrated towards the centers of mass of the clouds of points belonging to the three categories of flowers, respectively. Since ttoconrot is an unsupervised learning algorithm, it performs the learning without knowing the true labels of the samples. However, as the labels are available, one can use them to evaluate the quality of the tree: each sample is assigned to its closest neuron, and each neuron is then tagged with the class that is most frequent among the samples assigned to it. The corresponding table presents the evaluation of the trained tree obtained by this simple voting scheme (sketched below). From the table, it is possible to see how few of the instances are incorrectly classified and how many are correctly classified. Additionally, we observe that one subtree contains precisely the instances corresponding to the class Iris setosa, which is well known to be linearly separable from the other two classes — a property the algorithm was able to discover without being provided the labels, a result we find quite fascinating. These experimental results demonstrate the potential capabilities of ttoconrot for performing clustering, and also suggest the possibility of using it for pattern classification; for several reasons that favor performing pattern classification via an unsupervised approach, we are currently investigating such a classification strategy.
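A sketch of the voting evaluation just described; the helper names are illustrative, and the neurons' codebooks `n.w` and the samples are assumed to be numpy arrays:

```python
# Majority-vote evaluation of a trained tree: assign each sample to its
# closest neuron, tag each neuron with its most frequent class, and report
# the resulting classification accuracy.
from collections import Counter

def evaluate_clusters(neurons, samples, labels):
    votes = {id(n): Counter() for n in neurons}
    for x, y in zip(samples, labels):
        bmu = min(neurons, key=lambda n: ((n.w - x) ** 2).sum())
        votes[id(bmu)][y] += 1
    tags = {nid: c.most_common(1)[0][0] for nid, c in votes.items() if c}
    correct = sum(c[tags[nid]] for nid, c in votes.items() if c)
    return tags, correct / len(samples)
```

The evaluation is purely post hoc: the labels play no role during training, which is what makes the recovered separability of Iris setosa noteworthy.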
Skeletonization. In general, the main objective of skeletonization consists of generating a simpler representation of the shape of an object; some authors refer to skeletonization in the plane as the process by which a 2D shape is transformed into one similar to a "stick" figure. The applications of skeletonization are diverse and include the fields of computer vision and pattern recognition. As explained earlier, traditional methods for skeletonization assume connectivity of the data points; when this is not the case, more sophisticated methods are required, and previous efforts involving SOM variants to achieve skeletonization have been proposed. We remark that in the approach that uses the structure of the TTOSOM, the shape of the object is not assumed to be known a priori; rather, it is learned by accessing a single point of the entire shape at each time instant, and the results reported there confirm that this is actually possible. Here, our focus is on how the conditional rotations affect the skeletonization. The corresponding figure shows how ttoconrot learned the skeletons of different objects in a 2D domain; in all cases the same schedule of parameters was used, and the number of neurons employed was chosen proportionally to the number of data points contained in the respective data sets. It is important to remark that we do not invoke the edges of a minimum spanning tree: the skeleton observed is exactly the BST that the SOM has learned. The figures illustrate the shapes of the silhouette of a human, a rhinoceros, a representation of a head, and a representation of a woman; they also show the trees learned from the respective data sets, and additionally display the data points. In our opinion, the method is capable of representing the fundamental structure of all four objects in a way that does so quite effectively. As a final comment, we stress that the shapes employed in these experiments involve learning the external structure of the objects; in the case of solid objects, where internal data points are also provided, ttoconrot would be able to give an approximation of the representation of a skeleton built inside the solid object.

Figure: ttoconrot effectively captures the fundamental structure of the four objects — the silhouette of a human, a rhinoceros, a representation of a head, and a representation of a woman — together with the respective trees learned and the data points.

Theoretical analysis. According to Kiviluoto, there are three different criteria for evaluating the quality of a map. The first criterion indicates whether the mapping is continuous, implying that input signals that are close in the input space should be mapped to codebooks that are close in the output space as well. The second criterion involves the resolution of the mapping: maps with high resolution possess the additional property that input signals distant in the input space are represented by distant codebooks in the output space. A third criterion, imposed on the accuracy of the mapping, is aimed at reflecting the probability distribution of the input set. There exists a variety of measures for quantifying the quality of topology preservation; surveys of the relevant measures of map quality include the quantization error, the topographic product, the topographic error, trustworthiness, and neighborhood preservation. Although we are currently investigating how the quality of our SOM-based scheme can be quantified using such metrics, the following arguments are pertinent. The ordering of the weights with respect to the positions of the neurons of a SOM has been proved for unidimensional topologies; extending those results to higher-dimensional configurations and topologies leads to numerous unresolved problems. The first question is what one means by "ordering" in higher-dimensional spaces; indeed, the issue of the absorbing nature of the ordered state remains open, and Budinich explains intuitively the problems related to the ordering of neurons in higher-dimensional configurations. Huang et al. introduce a definition of ordering and show that, even though the positions of the codebook vectors of a SOM may be ordered, there is still the possibility that a sequence of stimuli causes their disarrangement. Statistical indexes and correlation measures between the weights (or distances) and the related positions have been introduced; with regard to the topographic product, the authors have shown the power of the metric by applying it to different artificial data sets, and have also compared different measures that quantify topology preservation. That study concentrates on the traditional SOM, implying that the topologies evaluated are of a linear nature, and a consequential extension to other means of representation beyond grids is not immediate. Haykin mentions that the topographic product may be employed to compare the quality of different maps, even when the maps possess different dimensionality; however, it is also noted that this measurement is only possible when the dimensionality of the topological structure equals the dimensionality of the feature space. For the topologies considered in our study, to be precise, no effort towards determining the concept of topology preservation in dimensions greater than unity — and specifically focused on SOMs that define a tree-like topology — has been reported: how it should be measured, and how one should define an order for such topologies, remain open questions. Thus, we believe that even the tools needed to analyze ttoconrot are not currently available. The experimental results obtained in this paper suggest that ttoconrot is able to train the map so as to preserve the stimuli; however, in order to quantify the quality of this topology preservation, the matter of defining the concept of ordering for such structures is yet to be resolved. Although this issue is of great interest, it is a rather ambitious task that lies beyond the scope of the present manuscript.

Conclusions and discussions. Concluding remarks: in this paper we have proposed a novel integration of the areas of adaptive data structures (ADSs) and self-organizing maps (SOMs). In particular, we have shown how a SOM can be adaptively transformed by the employment of an underlying binary search tree (BST) structure, and subsequently restructured using rotations that are performed conditionally. These rotations on the nodes of the tree are local, can be done in constant time, and are performed so as to decrease the weighted path length (WPL) of the entire tree. One of the main advantages of the algorithm is that the user need not have a priori knowledge of the topology of the input data set; instead, the proposed method — namely, the TTOSOM with conditional rotations (ttoconrot) — infers the topological properties of the stochastic
distribution while, at the same time, attempting to build the best BST that represents the data set. The manner in which the data structure constraints are incorporated, and the ways in which this is achieved, distinguish our method from the related approaches surveyed earlier. Under the premise that the regions accessed more often should be promoted to preferential spots in the tree representation, the scheme yields an improved stochastic representation. Our experimental results suggest that the ttoconrot tree is indeed able to absorb the stochastic properties of the input manifold, and that it is also possible to obtain a tree configuration that learns the stochastic properties in terms of the access probabilities while, at the same time, preserving the topological properties in terms of the skeletal structure.

Discussions and future work. As explained in the section associated with measuring the topology preservation of a SOM, prior work — including a proof of convergence for the unidimensional case — has been performed for the traditional SOM, and the corresponding questions remain unanswered for our setting: how the topology preservation should be measured, and how an "order" should be defined for tree topologies. Thus, we believe that even the tools for formally analyzing ttoconrot are not currently available; nevertheless, the experimental results obtained in this paper suggest that ttoconrot is able to train the neural network so as to preserve the stimuli, with the concept of the ordering of such structures yet to be resolved. Even though our principal goal was to obtain an accurate representation of the stochastic distribution, the results also suggest that the special configuration of the tree obtained by ttoconrot can be exploited to improve the time required for identifying the best matching unit (BMU). Related work includes different strategies for expanding trees by inserting nodes: in some of them a single neuron is added, essentially based on a quantization error measure; in other strategies, the error measure is based on the "hits", i.e., the number of times a neuron has been selected as the BMU. In principle, this type of counter is the same as the one utilized by the conditional rotations (conrot). Since our ttoconrot strategy asymptotically positions the frequently accessed nodes close to the root, one might incorporate a module that takes advantage of this optimal tree and of the BMU counters already present in ttoconrot by splitting the node at the root level; such a splitting operation could then occur without the necessity of searching for the node with the largest counter, under the assumption that a higher number of hits indicates a degree of granularity at which the particular neuron is lacking refinement. The concept of using the root of the tree in this manner for a growing SOM is, as far as we know, also pioneering, and the design and implementation details are currently being investigated.

References

Adelson-Velskii, G. M., and Landis, E. M. An algorithm for the organization of information. Soviet Mathematics Doklady.
Akram, M. U., Khalid, S., and Khan, S. A. Identification and classification of microaneurysms for early detection of diabetic retinopathy. Pattern Recognition.
Alahakoon, D., Halgamuge, S. K., and Srinivasan, B. Dynamic self-organizing maps with controlled growth for knowledge discovery. IEEE Transactions on Neural Networks.
Allen, B., and Munro, I. Self-organizing binary search trees. Journal of the ACM.
Arsuaga Uriarte, E. Topology preservation in SOM. International Journal of Applied Mathematics and Computer Sciences.
Astudillo, C. A. Self Organizing Maps Constrained by Data Structures. PhD thesis, Carleton University.
Astudillo, C. A., and Oommen, B. J. On using adaptive binary search trees to enhance self organizing maps. In Nicholson, A., et al. (eds.), Australasian Joint Conference on Artificial Intelligence.
Astudillo, C. A., and Oommen, B. J. Imposing tree-based topologies onto self organizing maps. Information Sciences.
Astudillo, C. A., and Oommen, B. J. On achieving pattern recognition by utilizing tree-based SOMs. Pattern Recognition.
Bauer, H.-U., Herrmann, M., and Villmann, T. Neural maps and topographic vector quantization. Neural Networks.
Bauer, H.-U., and Pawelzik, K. Quantifying the neighborhood preservation of self-organizing feature maps. IEEE Transactions on Neural Networks.
Bitner, J. R. Heuristics that dynamically organize data structures. SIAM Journal on Computing.
Blackmore, J. Visualizing structure with the incremental grid growing neural network. Master's thesis, University of Texas at Austin.
Budinich, M. On the ordering conditions for self-organizing maps. Neural Computation.
Carpenter, G. A., and Grossberg, S. The ART of adaptive pattern recognition by a self-organizing neural network. Computer.
Cheetham, R. P., and Oommen, B. J. Adaptive structuring of binary search trees using conditional rotations. IEEE Transactions on Knowledge and Data Engineering.
Conti, M., and Giovanni, L. A mathematical treatment of self-organization: an extension of some classical results. In Artificial Neural Networks — ICANN, International Conference.
Cormen, T. H., Leiserson, C. E., Rivest, R. L., and Stein, C. Introduction to Algorithms, second edition.
Datta, A., Parui, S. K., and Chaudhuri, B. B. Skeletal shape extraction from dot patterns by self-organization. In Proceedings of the International Conference on Pattern Recognition.
Deng, D. Image collection summarization and comparison using self-organizing maps. Pattern Recognition.
Dittenbach, M., Merkl, D., and Rauber, A. The growing hierarchical self-organizing map. In Neural Networks (IJCNN), Proceedings of the International Joint Conference.
Dopazo, J., and Carazo, J. M. Phylogenetic reconstruction using an unsupervised growing neural network that adopts the topology of a phylogenetic tree. Journal of Molecular Evolution.
Duda, R. O., Hart, P. E., and Stork, D. G. Pattern Classification, 2nd edition.
Fritzke, B. Growing cell structures — a self-organizing network for unsupervised and supervised learning. Neural Networks.
Fritzke, B. Growing grid — a self-organizing network with constant neighborhood range and adaptation strength. Neural Processing Letters.
Fritzke, B. A growing neural gas network learns topologies. In Tesauro, G., Touretzky, D. S., and Leen, T. K. (eds.), Advances in Neural Information Processing Systems. MIT Press, Cambridge.
Guan, L. Self-organizing trees and forests: a powerful tool in pattern clustering and recognition. In Image Analysis and Recognition, Third International Conference (ICIAR), Póvoa de Varzim, Portugal, Proceedings, Part I.
Haykin, S. Neural Networks and Learning Machines. Prentice Hall, 3rd edition.
Huang, G.-B., and Babri, H. A. Ordering of self-organizing maps in multi-dimensional cases. Neural Computation.
Kang, H.-G., and Kim, D. Multiple people tracking using competitive condensation. Pattern Recognition.
Kaplan, H. Persistent data structures. In Handbook of Data Structures and Applications. Chapman & Hall.
Kaski, S., Kangas, J., and Kohonen, T. Bibliography of self-organizing map (SOM) papers. Neural Computing Surveys.
Khosravi, M. H., and Safabakhsh, R. Human eye sclera detection and tracking using a modified time-adaptive self-organizing map. Pattern Recognition.
Kiviluoto, K. Topology preservation in self-organizing maps. In Proceedings of the International Conference on Neural Networks (ICNN), IEEE Neural Networks Council (ed.), New Jersey, USA.
Knuth, D. E. The Art of Computer Programming, Volume 3: Sorting and Searching. Addison-Wesley Longman Publishing, Redwood City, USA.
Kohonen, T. Self-Organizing Maps. Springer, New York/Secaucus, USA.
Koikkalainen, P., and Oja, E. Self-organizing hierarchical feature maps. In IJCNN International Joint Conference on Neural Networks.
Lai, T. W. Efficient Maintenance of Binary Search Trees. PhD thesis, University of Waterloo, Waterloo, Canada.
Liang, Y., Fairhurst, M. C., and Guest, R. M. A synthesised word approach to word retrieval in handwritten documents. Pattern Recognition.
Martinetz, T., and Schulten, K. A "neural-gas" network learns topologies. In Proceedings of the International Conference on Artificial Neural Networks, Amsterdam.
Mehlhorn, K. Dynamic binary search. SIAM Journal on Computing.
Merkl, D., Dittenbach, M., and Rauber, A. Adaptive hierarchical incremental grid growing: an architecture for data visualization. In Proceedings of the Workshop on Self-Organizing Maps, Advances in Self-Organizing Maps.
Miikkulainen, R. Script recognition with hierarchical feature maps. Connection Science.
Ogniewicz, R. L., and Kübler, O. Hierarchic Voronoi skeletons. Pattern Recognition.
Oja, M., Kaski, S., and Kohonen, T. Bibliography of self-organizing map (SOM) papers: addendum. Neural Computing Surveys.
Pakkanen, J., Iivarinen, J., and Oja, E. The evolving tree — a novel self-organizing network for data analysis. Neural Processing Letters.
Peano, G. Sur une courbe, qui remplit toute une aire plane. Mathematische Annalen.
Pöllä, M., Honkela, T., and Kohonen, T. Bibliography of self-organizing map (SOM) papers: addendum. Technical report, Helsinki University of Technology, Department of Information and Computer Science, Espoo, Finland.
Pölzlbauer, G. Survey and comparison of quality measures for self-organizing maps. In Proceedings of the Fifth Workshop on Data Analysis (WDA), Sliezsky dom, Tatry, Slovakia. Elfa Academic Press.
Rauber, A., Merkl, D., and Dittenbach, M. The growing hierarchical self-organizing map: exploratory analysis of high-dimensional data. IEEE Transactions on Neural Networks.
Rojas, R. Neural Networks: A Systematic Introduction. Springer, New York, USA.
Samsonova, E. V., Kok, J. N., and IJzerman, A. P. TreeSOM: cluster analysis with self-organizing maps. Neural Networks (Advances in Self Organising Maps — WSOM).
Singh, R., Cherkassky, V., and Papanikolopoulos, N. Self-organizing maps for the skeletonization of sparse shapes. IEEE Transactions on Neural Networks.
Sleator, D. D., and Tarjan, R. E. Self-adjusting binary search trees. Journal of the ACM.
Venna, J., and Kaski, S. Neighborhood preservation in nonlinear projection methods: an experimental study. In Dorffner, G., Bischof, H., and Hornik, K. (eds.), ICANN, Lecture Notes in Computer Science. Springer.
Yao, K.-C., Mignotte, M., Collet, C., Galerne, P., and Burel, G. Unsupervised segmentation using a self-organizing map and a noise model estimation in sonar imagery. Pattern Recognition.
Robust Satisfaction of Temporal Logic Specifications via Reinforcement Learning

Austin Jones, Derya Aksaray, Zhaodan Kong, Mac Schwager, and Calin Belta

(The authors are with the Georgia Institute of Technology, Atlanta; Boston University; the University of California, Davis; and Stanford University. This work was partially supported by Boston University, ONR, and NSF grants.)

Abstract — We consider the problem of steering a system with unknown, stochastic dynamics to satisfy a rich, temporally-layered task given as a signal temporal logic formula. We represent the system as a Markov decision process whose states are built from a partition of the state space and whose transition probabilities are unknown. We present provably convergent reinforcement learning algorithms to maximize the probability of satisfying a given formula and to maximize the average expected robustness, i.e., a measure of how strongly the formula is satisfied. We demonstrate, via a pair of robot navigation simulation case studies, that reinforcement learning with robustness maximization performs better than probability maximization in terms of both the probability of satisfaction and the expected robustness.

I. Introduction

We consider the problem of controlling a system with unknown, stochastic dynamics, i.e., a "black box", to achieve a complex, time-sensitive task — for example, controlling a noisy aerial vehicle with partially known dynamics to visit a pre-specified set of regions in some desired order while avoiding hazardous areas. We consider tasks given as temporal logic formulae, an extension of first-order Boolean logic that can be used to reason about how the state of a system evolves over time. If a stochastic dynamical model of the system is known, there exist algorithms to find control policies maximizing the probability of achieving a given temporal logic specification, e.g., via probabilistic model checking or planning over stochastic abstractions. However, only a handful of papers have considered the problem of enforcing such specifications for a system with unknown dynamics; in these, passive or active reinforcement learning has been used to find a policy that maximizes the probability of satisfying a given linear temporal logic (LTL) formula. In this paper, in contrast with those works on reinforcement learning that use propositional temporal logic, we use signal temporal logic (STL), a rich predicate logic that can be used to describe tasks involving bounds on physical parameters and time intervals. An example of such a property is: "within the first t1 seconds, a region in which the signal is less than a given threshold is reached, and regions in which it is larger than another threshold are avoided."

STL admits a continuous measure called the robustness degree that quantifies how strongly a given sample path exhibits an STL property, as a real number rather than just a yes/no answer. This measure enables the use of continuous optimization methods to solve inference and formal synthesis problems involving STL.

One of the difficulties in solving problems with STL formulae is that their satisfaction is history-dependent: for instance, if the specification requires visiting region A before region B, whether the system should steer towards region B depends on whether it has previously visited region A. For LTL formulae, whose semantics are time-abstract, this history-dependence can be broken by translating the formula into a deterministic Rabin automaton (DRA), a model that automatically takes care of such cases. For STL, this construction is difficult due to the time-bounded semantics; we circumvent the problem by defining a fragment of STL such that progress towards satisfaction can be checked with a finite number of state measurements. We thus define an MDP, called the τ-MDP, whose states correspond to the recent history of the system's inputs, over a finite collection of control actions. We use the reinforcement learning strategy called Q-learning, in which a policy is constructed by taking actions, observing outcomes, and reinforcing actions that improve a given reward. The resulting algorithms either maximize the probability of satisfying a given STL formula or maximize the expected robustness with respect to the given STL formula, and both procedures provably converge to the optimal policy in their respective case. Furthermore, we propose that maximizing the expected robustness is typically more effective than maximizing the probability of satisfaction. We prove that, in certain cases, the policy that maximizes the expected robustness also
maximizes the probability of satisfaction. Moreover, given a specification that is not satisfiable, probability maximization will return an arbitrary policy, while robustness maximization will return a policy that gets as close to a satisfying policy as possible. Finally, we demonstrate via simulation case studies that the policy that maximizes the expected robustness in some cases gives better performance, in terms of both the probability of satisfaction and the expected robustness, when fewer training episodes are available.

II. Signal Temporal Logic (STL)

STL is defined with respect to continuously valued signals. Let F(A, B) denote the set of mappings from A to B, and define a signal x as a member of F(N, R^n). We denote the value of a signal x at time t as x[t], its sequence of values as x[0]x[1]..., and, moreover, its suffix from time t as (x, t). In this paper, the desired mission specification is described by an STL fragment with the following syntax:

phi ::= F_[a,b) psi | G_[a,b) psi,
psi ::= mu | not psi | psi1 and psi2 | psi1 or psi2,

where a, b are finite time bounds, phi and psi are STL formulae, and mu is a predicate of the form f(x[t]) < d for a function f of the signal and a constant d. The Boolean operators "not" and "and" are negation and conjunction, respectively; the other Boolean operators are defined as usual. The temporal operators F and G stand for "finally" (eventually) and "globally" (always), respectively. Note that in this paper we use a discrete-time version of STL rather than the typical continuous-time formulation. The semantics of STL are recursively defined as:

(x, t) |= mu                 iff  f(x[t]) < d
(x, t) |= not psi            iff  not ((x, t) |= psi)
(x, t) |= psi1 and psi2      iff  (x, t) |= psi1 and (x, t) |= psi2
(x, t) |= psi1 or psi2       iff  (x, t) |= psi1 or (x, t) |= psi2
(x, t) |= F_[a,b) psi        iff  there exists t' in [t+a, t+b) such that (x, t') |= psi
(x, t) |= G_[a,b) psi        iff  for all t' in [t+a, t+b), (x, t') |= psi

In plain English, G_[a,b) psi means "at all times between a and b time units in the future, psi is true", and F_[a,b) psi means "there exists a time between a and b time units in the future at which psi is true."

STL is equipped with a robustness degree (also called the degree of satisfaction) that quantifies how well a given signal satisfies a given formula. The robustness is calculated recursively according to the quantitative semantics

r(x, mu, t)              = d - f(x[t])
r(x, not psi, t)         = -r(x, psi, t)
r(x, psi1 and psi2, t)   = min( r(x, psi1, t), r(x, psi2, t) )
r(x, psi1 or psi2, t)    = max( r(x, psi1, t), r(x, psi2, t) )
r(x, F_[a,b) psi, t)     = sup over t' in [t+a, t+b) of r(x, psi, t')
r(x, G_[a,b) psi, t)     = inf over t' in [t+a, t+b) of r(x, psi, t')

If r(x, phi, t) is large and positive, then x would have to change by a large deviation in order to violate phi; similarly, if it is negative with a large absolute value, then x strongly violates phi. Let hrz(phi) denote the horizon length of an STL formula phi, i.e., the required number of samples needed to resolve any (future or past) requirements of the formula. The horizon length can be computed recursively as

hrz(mu)              = 0
hrz(not psi)         = hrz(psi)
hrz(psi1 and psi2)   = max( hrz(psi1), hrz(psi2) )
hrz(psi1 or psi2)    = max( hrz(psi1), hrz(psi2) )
hrz(F_[a,b) psi)     = b + hrz(psi)
hrz(G_[a,b) psi)     = b + hrz(psi)

Example 1. Consider the robot navigation problem illustrated in the corresponding figure: the specification is to visit one of the goal regions, while avoiding the unsafe regions, at every time within the mission horizon. Letting the components of the signal be the robot's coordinates, the task can be formulated as an STL formula of the form phi = G_[0,T) psi, where the inner formula psi encodes reaching a goal region within the required time bound while avoiding the unsafe regions. The figure shows two trajectories of the system, each beginning at the initial location and ending in a goal region, both of which satisfy the inner specification psi: one barely satisfies it, as it slightly penetrates an unsafe region's margin, while the other appears to satisfy it strongly, as it passes through the center of the goal region — and their robustness degrees confirm this. The horizon length of the inner specification, hrz(psi), is computed with the max rules above.
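A minimal Python robustness evaluator for this fragment over a sampled signal, following the quantitative semantics reconstructed above. Formulas are represented as nested tuples, e.g. `('F', a, b, ('mu', f, d))` for F_[a,b) (f(x) < d); the encoding is illustrative, not the authors' implementation:

```python
# Recursive robustness degree r(x, phi, t) over a discrete-time signal x,
# where x is indexable by integer time (e.g., a numpy array or list).
def rho(phi, x, t=0):
    op = phi[0]
    if op == 'mu':                       # predicate f(x[t]) < d
        _, f, d = phi
        return d - f(x[t])
    if op == 'not':
        return -rho(phi[1], x, t)
    if op == 'and':
        return min(rho(phi[1], x, t), rho(phi[2], x, t))
    if op == 'or':
        return max(rho(phi[1], x, t), rho(phi[2], x, t))
    if op == 'F':                        # eventually within [t+a, t+b)
        _, a, b, sub = phi
        return max(rho(sub, x, tp) for tp in range(t + a, t + b))
    if op == 'G':                        # always within [t+a, t+b)
        _, a, b, sub = phi
        return min(rho(sub, x, tp) for tp in range(t + a, t + b))
    raise ValueError(op)
```

Since the sup/inf of the continuous semantics become max/min over finitely many samples, evaluating a formula requires exactly hrz(phi) samples beyond time t, which is the property the τ-MDP construction below exploits.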
III. Models for Reinforcement Learning

For a system with unknown stochastic dynamics, a critical problem is to synthesize control to achieve a desired behavior. A typical approach is to discretize the state and action spaces of the system and to use a reinforcement learning strategy, i.e., learning to take actions through trial-and-error interactions with the unknown environment. In this section, we present models of systems amenable to reinforcement learning for enforcing temporal logic specifications; we start with a discussion of the widely used LTL before introducing the particular model we use for reinforcement learning with STL.

Reinforcement learning and LTL. One approach to the problem of enforcing LTL satisfaction for a stochastic system is to partition the state space and design control primitives that nominally drive the system from one region to another. Given these controllers and a stochastic dynamical model of the system, the quotient obtained from the partition can be used to construct a Markov decision process (MDP) called a bounded-parameter MDP (BMDP), whose transition probabilities are intervals. The BMDP is composed with the DRA constructed from the given LTL formula to form a product BMDP, and dynamic programming is applied to the product MDP to generate a policy that maximizes the probability of satisfaction. Other approaches to this problem include aggregating the states of a given quotient into an MDP whose transition probabilities are considered constant with some bounded error; an optimal policy can then be computed on the resulting MDP using approximate methods. Thus, even when the stochastic dynamics of the system are known and the logic encodes constraints with time-abstract semantics, the problem of constructing an abstraction of the system amenable to control policy synthesis is difficult and computationally intensive.

Reinforcement learning methods for enforcing LTL constraints assume that the underlying model of the control system is an MDP. Implicitly, these procedures compute a frequentist approximation of the transition probabilities that asymptotically approaches the true, unknown values as the number of observed sample paths increases. Since such an algorithm does not explicitly rely on a priori knowledge of the transition probabilities, it can be applied to an abstraction of the system built from a proposition-preserving partition. In this case, the uncertainty in the motion, which would be described by intervals in a BMDP and reduced via computation, is instead described by complete ignorance and reduced via learning. The resulting policy maps regions of the state space to discrete actions that optimally drive the state of the system to satisfy the given LTL specification; different partitions result in different policies. In the next section, we extend this observation to derive a discrete model amenable to reinforcement learning with STL formulae.

Reinforcement learning and STL. In order to reduce the search space of the problem, we partition the state space of the system to form a quotient graph G = (Q, E), where the set Q of discrete states corresponds to the regions of the partitioned state space and E is the set of edges: an edge exists between two states iff the corresponding regions are neighbors, i.e., share a boundary in the partition. Since STL does not have time-abstract semantics, we cannot use an automaton with an acceptance condition (a DRA) to check satisfaction; in general, whether a given trajectory satisfies an STL formula is determined directly using the qualitative semantics. For the STL fragment considered here, with horizon length hrz(phi) and an outermost time-bounded temporal operator, this means that, in order to update at time t whether a given formula is satisfied or violated, we may use the τ previous state values. For this reason, we choose to learn policies over an MDP with finite memory, called a τ-MDP, whose states correspond to sequences of length τ of the regions defined by the partition.

Example 1 (continued). Let the robot evolve according to the noisy Dubins dynamics

x[k+1] = x[k] + v Δt cos(θ[k]),   y[k+1] = y[k] + v Δt sin(θ[k]),

where (x, y) are the coordinates of the robot at time k, v is its constant forward speed, Δt is the time interval, and θ is the robot's orientation, determined by the applied control primitive. The control primitives in this case are given by Act = {up, down, left, right}, which correspond to the four directions of the grid. The primitives are noisy: each induces a distribution over θ whose support surrounds the orientation facing the desired cell. When a motion primitive is enacted, the robot rotates to an angle drawn from that distribution and moves along this direction for Δt time units. The partition of the state space and the induced quotient are shown in the corresponding figures: each state of the quotient represents a square region of the partition, identified, e.g., by the point at its lower left-hand corner.

Definition (τ-MDP). Given the quotient G = (Q, E) of a system and a finite set of actions Act, a τ-MDP is a tuple (S_τ, Act, P) in which S_τ is a finite set of states, each corresponding to a sequence σ of τ states of Q — with the empty string prepended up to τ - 1 times for shorter paths, representing the case in which the system has not yet evolved for τ time steps — and P is a probabilistic transition relation: P(σ, a, σ') can be positive only if the first τ - 1 states of σ' equal the last τ - 1 states of σ and there exists an edge in E between the final state of σ and the final state of σ'. We denote the state of the τ-MDP at time t as σ[t].

Definition (induced trace). Given a trajectory x of the original system, its induced trace is the sequence whose element at time t corresponds to the τ previous regions of the state space in which the state resided up to time t.

The construction of the τ-MDP from a given quotient and set of actions is straightforward, and the details are omitted due to length constraints. We make the following key assumptions about the quotient and the resulting τ-MDP: the control actions in Act drive the system either to a point in the current region or to a point in a neighboring region of the partition, so regions are not skipped; the transition relation is Markovian; and for every state σ there exists a continuous set of sample paths whose traces could be σ. The dynamics of the underlying system produce an unknown distribution over sample paths; since the robustness degree is a function of sample paths whose length is determined by the STL formula, this also defines an unknown distribution over robustness values.

Figure: the example robot navigation problem over the partitioned space, and the quotient described in this subsection.

Example 1 (continued). The corresponding figure shows a portion of the τ-MDP constructed from the quotient, with states labeled by the corresponding sample paths of length τ; green and blue states correspond to the green and blue regions of the environment.

IV. Problem Formulation

In this paper we address the following two problems.

Problem 1 (maximizing the probability of satisfaction): let the τ-MDP be as described in the previous section. Given an STL formula phi from the syntax above, find a policy π*: S_τ → Act such that π* = arg max over π of Pr[x |= phi under π].

Problem 2 (maximizing the
average robustness): let the τ-MDP and phi be as defined in Problem 1. Find a policy π*: S_τ → Act such that π* = arg max over π of E[ r(x, phi) under π ].

Furthermore, regarding the probability of satisfaction: if phi is not satisfiable, probability maximization could return an arbitrary policy, since all policies result in satisfaction probability zero for an unsatisfiable formula, whereas Problem 2 yields a solution that attempts to get as close as possible to satisfying the formula — the optimal solution is the one whose average robustness value is the least negative. The forms of the objective functions differ for the two types of formulae in the fragment. In the case phi = F_[0,T) psi, the objective function of Problem 1 can be rewritten in terms of the probability that some suffix satisfies psi, and the objective function of Problem 2 can be rewritten as the expectation of a max over time of the robustness of psi. In the case phi = G_[0,T) psi, the objective function of Problem 1 is rewritten in terms of all suffixes satisfying psi, and the objective function of Problem 2 is rewritten with a min over time.

Figure: part of the τ-MDP constructed for the robot navigation MDP shown in the earlier figure.

Problems 1 and 2 are two alternate solutions for enforcing a given STL specification: a policy found by solving Problem 1 maximizes the chance that phi will be satisfied, while a policy found by solving Problem 2 drives the system to satisfy phi as strongly as possible on average. Problems similar to Problem 1 have already been considered in the literature; Problem 2, however, is a novel formulation, and it provides advantages over Problem 1, which we show in the next section for special systems.

V. Maximizing Expected Robustness vs. Maximizing the Probability of Satisfaction

We demonstrate that the solution to Problem 2 subsumes the solution to Problem 1 for a certain class of systems. Due to space limitations, we consider formulae of the type phi = F_[0,T) psi. For simplicity, we make the following assumption. Assumption 1: for every state σ of the τ-MDP, either every trajectory whose trace includes σ satisfies psi (denoted σ |= psi), or every trajectory that passes through the sequence of regions associated with σ does not satisfy psi (denoted σ |≠ psi); this assumption can be enforced in practice through the choice of partitioning. We define the set S_psi of satisfying states and a signed graph distance to S_psi based on the length of the shortest path in the quotient. We also make two further assumptions: Assumption 2, that the robustness of the signal is bounded, r in [r_min, r_max] with r_min < 0 < r_max; and Assumption 3, that for any two states the robustness values are ordered consistently with the signed graph distance. Define the policies

π*_P = arg max over π of Pr[x |= phi under π],
π*_R = arg max over π of E[ max over t of r(x, psi, t) under π ].

Proposition 1. If Assumptions 1-3 hold, then the policy π*_R also maximizes the expected probability of satisfaction.

Proof (sketch). Given a policy π, its associated reachability probability p_π = Pr[x |= phi under π] can be related to the expected robustness by splitting the expectation over satisfying and violating trajectories, using the indicator function of satisfaction. With the bounds r_min < 0 < r_max, the expected robustness under π is sandwiched between expressions of the form p_π · r_max + (1 - p_π) · r_min and their counterparts, each monotonically increasing in p_π. Since r_min and r_max are constants, maximizing the expected robustness is therefore equivalent to maximizing an objective that increases whenever p_π increases; hence a policy that maximizes the expected robustness also achieves the maximum satisfaction probability.

VI. Control Synthesis to Maximize Robustness

Policy generation. Since we do not know the dynamics of the system or its control a priori, we cannot predict how a given control action will affect the evolution of the system and hence its progress towards a given specification. We thus use the paradigm of reinforcement learning to learn policies that solve Problems 1 and 2. In reinforcement learning, the system takes actions and records the rewards associated with each state-action pair; these rewards are used to update a feedback policy that maximizes the expected gathered reward. In our case, the rewards we collect are related to whether phi is satisfied (Problem 1) or how robustly it is satisfied (Problem 2). Our solutions to both problems rely on the Q-learning formulation: letting R(σ, a) be the reward collected when action a in Act was taken in state σ, the function Q(σ, a) captures the expected cumulative objective when taking action a in σ and acting optimally thereafter, and the optimal policy is found as π(σ) = arg max over a of Q(σ, a) by applying an update rule until convergence.

Batch Q-learning. For a formula of the form G_[0,T) psi with the objective of maximizing expected robustness (Problem 2), one can show that applying Q-learning converges to the optimal solution; the other three cases discussed in the previous section can be proven similarly, with the analysis based on the optimal Q-function. Because the max/min over time in the objective cannot, in general, be decomposed into a sum of per-step rewards, we cannot directly reformulate all of the problems in the standard Q-learning form. We therefore propose an alternate formulation, called batch Q-learning, to solve these problems: instead of updating the Q-function after each action is taken, we wait until an entire episode has been completed before updating. The batch procedure is summarized in the BatchQLearn algorithm.
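A sketch of the batched update just described; the helpers `simulate`, `robustness`, and the action set `ACTIONS` are illustrative placeholders, and the exact reward shaping of the paper's UpdateQFunction is simplified here to a generic terminal-reward form:

```python
# Batch Q-learning sketch: simulate a whole episode with an eps-greedy
# version of the current Q-function, then update Q from the recorded
# (state, action) pairs, using the episode's robustness as terminal reward.
from collections import defaultdict
import random

def batch_q_learn(sys, phi, n_ep, alpha, gamma, eps):
    Q = defaultdict(random.random)           # random initial Q-values
    for _ in range(n_ep):
        episode = simulate(sys, Q, eps)      # list of (state, action) pairs
        traj = [s for s, _ in episode]       # observed trace of tau-MDP states
        for t, (s, a) in enumerate(episode):
            if t + 1 < len(episode):
                nxt = episode[t + 1][0]
                target = gamma * max(Q[(nxt, b)] for b in ACTIONS)
            else:
                target = robustness(phi, traj)   # reward only at episode end
            Q[(s, a)] += alpha * (target - Q[(s, a)])
    return Q
```

Deferring the update to the end of the episode is what allows the non-additive max/min-over-time objectives to be handled, at the cost of delaying credit assignment until the robustness of the whole trajectory is known.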
The BatchQLearn procedure first initializes the Q-function with random values and initializes the policy from them. For each of N_ep episodes, the system is simulated using the current policy with randomization (epsilon-greedy action selection) to encourage exploration of the policy space; the observed trajectory is then used to update the Q-function according to the UpdateQFunction procedure, and the new Q-function is used to update the policy. For compactness, the algorithm is written to cover the case phi = F_[0,T) psi, using a max-based update; the case phi = G_[0,T) psi is addressed similarly with the corresponding min-based form. This gives the following convergence result.

Proposition 2. The batch Q-learning update rule converges to the optimal Q-function as the number of episodes goes to infinity.

Proof (sketch). The proof relies primarily on the proposition below; once it is established, the rest of the proof varies only slightly from the classical presentation of the convergence of Q-learning. Note that, in our case, the update index ranges over both the number of episodes and the time coordinate of the signal.

Proposition 3. The optimal Q-function is the fixed point of a contraction mapping.

Proof (sketch). Consider the mapping H defined by the max-based batch update. For two Q-functions Q1 and Q2, one bounds |HQ1(σ, a) - HQ2(σ, a)| by a case analysis on which terms attain the maxima: without loss of generality, in each of the possible cases the difference is bounded by γ ||Q1 - Q2|| in the supremum norm. Hence H is a contraction mapping, and the optimal Q-function is its fixed point.

VII. Case Study

We implemented the batch Q-learning algorithm and applied it to two case studies, adapting the robot navigation model from Example 1. In each case study, we solved Problems 1 and 2 and compared the performance of the resulting policies. Our simulations were implemented in Matlab and performed on a standard desktop PC.

Case study 1: reachability. We first consider a simple reachability-style problem, given as an STL specification of the form F_[0,T)(·) over the blue region: in plain English, it states "within the stated time units, reach the blue region, and then revisit the blue region within the stated time."

Figure: comparison of the policies from case study 1 — histograms of the robustness values (counts vs. robustness) from the trained policies obtained by solving Problem 1 and Problem 2, and example trajectories generated by each policy.

These statistics were generated by simulating the system using the trained policies after learning was completed, without the randomization used in the learning phase. Note that both trained policies satisfied the specification with probability one, and the performance of the two algorithms is similar in mean robustness and standard deviation for probability maximization and robustness maximization alike; the trajectories simulated from the trained policies likewise show the similarity of the two solutions. We used N_ep training episodes with a fixed probability per iteration of selecting an action at random; we also recorded the time required to construct the τ-MDP and to solve each of the two problems. Although some conditions technically required to prove convergence were relaxed in practice, this had no adverse effect on learning performance. The similarity of the solutions in case study 1 is not surprising: if the state of the system is deep within the goal region, the probability that it will remain inside the region for the next time steps — and hence satisfy the specification — is higher than at the edge of the region, and trajectories that remain deeper in the interior also have a high robustness value. Thus, for this particular problem, there is an inherent coupling between policies that satisfy the formula with high probability and policies that satisfy the formula as robustly as possible on average.

Case study 2: repeated satisfaction. The second case study looks at a problem involving repeatedly satisfying a condition finitely many times. In plain English, the specification of interest is "ensure that, every few time units within the mission horizon, the green region and the blue region are visited as required." The results of this case study are shown in the corresponding figure; we used the parameters listed above, except as noted, and again recorded the time taken to construct the τ-MDP and to solve Problem 1 and Problem 2. In the first row of the figure, one can see that the solution to Problem 1 satisfies the formula with a lower probability than the solution to Problem 2. At first this seems counterintuitive,
as Proposition 1 indicates that the policy that maximizes the probability should achieve a probability of satisfaction at least as high as the policy that maximizes the expected robustness. However, this is only guaranteed after an infinite number of learning trials. The performance in terms of robustness is obviously better for robustness maximization, as seen by comparing the means and standard deviations of the two objectives. In the second row of the figure, one can see that the maximum-robustness policy enforces convergence to a cycle between the two regions, while the maximum-probability policy deviates from this cycle. The discrepancy between the two solutions can be explained by what happens when trajectories that almost satisfy phi occur. If a trajectory almost oscillates between the blue and green regions every four seconds, then when it is encountered while solving Problem 1 it collects no reward. On the other hand, when solving Problem 2, the policy that produces the almost-oscillatory trajectory is reinforced much more strongly, as the resulting robustness is less negative. Since the robustness degree gives partial credit to trajectories that are close to satisfying the specification, the reinforcement learning algorithm for Problem 2 performs a directed search to find policies that satisfy the formula. Since probability maximization gives no such partial credit, its reinforcement learning algorithm essentially performs a random search until it encounters a trajectory that satisfies the given formula. Therefore, if the family of policies that satisfy the formula with positive probability is small, on average it will take the algorithm solving Problem 1 a longer time to converge to a solution that enforces the satisfaction of the formula.

VIII. Conclusions and Future Work

In this paper we presented a new reinforcement learning paradigm for enforcing temporal logic specifications when the dynamics of the system are a priori unknown. In contrast with existing works on this topic, we use the logic signal temporal logic, whose formulation is directly related to the system's state space. We presented a novel, convergent Q-learning algorithm that uses the robustness degree, a continuous measure of how well a trajectory satisfies a formula, to enforce the given specification. In certain cases, robustness maximization subsumes the established paradigm of probability maximization, and in certain cases robustness maximization performs better in terms of both probability and robustness under partial training. Future research includes formally connecting our approach with abstractions of linear stochastic systems.

References

Abate, A., D'Innocenzo, A., and Di Benedetto, M. D. Approximate abstractions of stochastic hybrid systems. IEEE Transactions on Automatic Control.
Baier, C., and Katoen, J.-P. Principles of Model Checking. MIT Press, Cambridge.
Brazdil, T., Chatterjee, K., Chmelik, M., Forejt, V., Kretinsky, J., Kwiatkowska, M., Parker, D., and Ujma, M. Verification of Markov decision processes using learning algorithms. In Cassez, F., and Raskin, J.-F. (eds.), Automated Technology for Verification and Analysis, Lecture Notes in Computer Science. Springer International Publishing.
Ding, X. C., Smith, S. L., Belta, C., and Rus, D. Optimal control of Markov decision processes with linear temporal logic constraints. IEEE Transactions on Automatic Control.
Ding, X. C., Wang, J., Lahijanian, M., Paschalidis, I. C., and Belta, C. Temporal logic motion control using actor-critic methods. In Robotics and Automation (ICRA), IEEE International Conference.
Dokhanchi, A., Hoxha, B., and Fainekos, G. On-line monitoring for temporal logic robustness. In Runtime Verification. Springer.
Donzé, A., and Maler, O. Robust satisfaction of temporal logic over real-valued signals. In Formal Modeling and Analysis of Timed Systems.
Fainekos, G. E., and Pappas, G. J. Robustness of temporal logic specifications for continuous-time signals. Theoretical Computer Science.
Fu, J., and Topcu, U. Probably approximately correct MDP learning and control with temporal logic constraints. CoRR.
Jin, X., Donzé, A., Deshmukh, J. V., and Seshia, S. A. Mining requirements from closed-loop control models. In Proceedings of the International Conference on Hybrid Systems: Computation and Control.
Jones, A., Kong, Z., and Belta, C. Anomaly detection in cyber-physical systems: a formal methods approach. In IEEE Conference on Decision and Control (CDC).
Julius, A. A., and Pappas, G. J. Approximations of stochastic hybrid systems. IEEE Transactions on Automatic Control.
Kamgarpour, M., Ding, J., Summers, S., Abate, A., Lygeros, J., and Tomlin, C. Discrete time stochastic hybrid dynamic games: verification and controller
synthesis. In Proceedings of the IEEE Conference on Decision and Control and European Control Conference.
Kong, Z., Jones, A., Medina Ayala, A., Aydin Gol, E., and Belta, C. Temporal logic inference for classification and prediction from data. In Proceedings of the International Conference on Hybrid Systems: Computation and Control. ACM.
Lahijanian, M., Andersson, S. B., and Belta, C. Temporal logic motion planning and control with probabilistic satisfaction guarantees. IEEE Transactions on Robotics.
Lahijanian, M., Andersson, S. B., and Belta, C. Approximate Markovian abstractions for linear stochastic systems. In Proceedings of the IEEE Conference on Decision and Control, Maui, USA.
Lahijanian, M., Andersson, S. B., and Belta, C. Formal verification and synthesis for stochastic systems. IEEE Transactions on Automatic Control.
Luna, R., Lahijanian, M., Moll, M., and Kavraki, L. E. Asymptotically optimal stochastic motion planning with temporal goals. In Workshop on the Algorithmic Foundations of Robotics, Istanbul, Turkey.
Melo, F. S. Convergence of Q-learning: a simple proof. Available online.
Raman, V., Donzé, A., Maasoumy, M., Murray, R. M., Sangiovanni-Vincentelli, A., and Seshia, S. A. Model predictive control with signal temporal logic specifications. In Proceedings of the IEEE Conference on Decision and Control (CDC).
Sadigh, D., Kim, E. S., Coogan, S., Sastry, S. S., and Seshia, S. A. A learning based approach to control synthesis of Markov decision processes for linear temporal logic specifications. CoRR.
Sutton, R. S., and Barto, A. G. Reinforcement Learning: An Introduction. MIT Press, Cambridge.
Svorenova, M., Chmelik, M., Chatterjee, K., and Belta, C. Temporal logic control of stochastic linear systems using abstraction refinement of probabilistic games. In Hybrid Systems: Computation and Control (HSCC), to appear.
Tsitsiklis, J. N. Asynchronous stochastic approximation and Q-learning. Machine Learning.
Batched SVD Algorithms on GPUs with Applications in Hierarchical Matrix Compression

Wajih Halim Boukaram, George Turkiyyah, Hatem Ltaief, and David Keyes

(Extreme Computing Research Center (ECRC), King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia; Department of Computer Science, American University of Beirut (AUB), Beirut, Lebanon.)

Abstract. We present high performance implementations of the singular value decomposition of a batch of small matrices hosted on the GPU, with applications in the compression of hierarchical matrices. The one-sided Jacobi algorithm is used for its simplicity and inherent parallelism as a building block for the SVD of low rank blocks using randomized methods. We implement multiple kernels based on the level of the GPU memory hierarchy in which the matrices can reside, and show substantial speedups against streamed cuSOLVER SVDs. The resulting batched routine is a key component of hierarchical matrix compression, opening up opportunities to perform H-matrix arithmetic efficiently on GPUs.

Introduction. The singular value decomposition (SVD) of a matrix A is a factorization of the general form A = U Σ V^T, where U is an orthonormal matrix whose columns are called the left singular vectors, Σ is a diagonal matrix whose diagonal entries are called the singular values and are sorted in decreasing order, and V is an orthonormal matrix whose columns are called the right singular vectors. For a tall matrix, we can compute the reduced form of the SVD, in which Σ is a small square diagonal matrix; one can easily obtain the full form from the reduced one by extending U with orthogonal vectors and appending a zero block row to Σ. Without loss of generality, we focus on the reduced SVD of real matrices in the discussions below. The SVD of a matrix is a crucial component in many applications in signal processing and statistics, as well as in matrix compression, where truncating the singular values that are smaller than some threshold gives a low-rank approximation of the matrix; this matrix is the unique minimizer of the approximation error over matrices of that rank. In the context of hierarchical matrix operations, effective compression relies on the ability to perform this computation on large batches of independent SVDs of small matrices of low numerical rank. Randomized methods are well suited to computing the truncated SVD of these types of matrices and can be built from three computational kernels: batched QR factorizations, matrix-matrix multiplications, and SVDs of smaller matrices. Motivated by this task, we discuss the implementation of high performance batched QR and SVD kernels on the GPU, focusing on the more challenging SVD tasks.

The remainder of this paper is organized as follows. The next section presents the different algorithms used to compute the QR factorization and the SVD, as well as considerations for optimizing them on GPUs. The subsequent section discusses the batched QR factorization and compares its performance against existing libraries. The following sections then discuss various implementations of the batched SVD based on the level of the memory hierarchy in which the matrices reside: first an implementation for small matrix sizes that fit in registers, then an implementation for matrices that can reside in shared memory, and then a block Jacobi implementation for larger matrix sizes that must reside in global memory. We then detail the implementation of the batched randomized SVD routine and discuss the details of the application to hierarchical matrix compression, before concluding and discussing future work.

Background. In this section, we give a review of the common algorithms used to compute the QR factorization and the SVD of a matrix, and discuss considerations for optimizing them on the GPU.

QR factorization. The QR factorization decomposes a matrix A into the product A = QR of an orthogonal matrix Q and an upper triangular matrix R; we can also compute the reduced form of the decomposition, with a rectangular orthonormal Q and a small square upper triangular R. The most common algorithm is based on transforming the matrix into upper triangular form using a series of orthogonal transformations generated using Householder reflectors. Gram-Schmidt-type algorithms can instead produce the factorization by orthogonalizing each column against the previous columns; however, these methods are less stable than Householder orthogonalization, as the orthogonality of the resulting Q factor suffers with the condition number of the matrix. Another method, based on Givens rotations, zeroes out the entries of the subdiagonal part of the matrix to form the triangular factor while the rotations are accumulated to form the orthogonal factor; this method is more stable and has greater parallelism than the Householder method, but it is, however, more expensive in terms of work, and it is challenging to extract the parallelism efficiently on the GPU. Our implementation will rely on the Householder method, due to its numerical stability and simplicity; the method is described in Algorithm 1 (the Householder procedure, house).
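A compact numpy sketch of Householder QR in the style of Algorithm 1 — a reference version of the algorithm, not the GPU kernel: each column is reduced by a reflector I - 2vv^T that is immediately applied to the trailing matrix.

```python
import numpy as np

def householder_qr(A):
    """Unblocked Householder QR: returns Q (m x m) and R (m x n)."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(min(m, n)):
        v = R[j:, j].copy()                       # column to annihilate
        v[0] += np.sign(v[0] if v[0] != 0 else 1.0) * np.linalg.norm(v)
        nv = np.linalg.norm(v)
        if nv == 0.0:                             # column already zero
            continue
        v /= nv                                   # normalized reflector
        R[j:, j:] -= 2.0 * np.outer(v, v @ R[j:, j:])   # update trailing matrix
        Q[:, j:]  -= 2.0 * np.outer(Q[:, j:] @ v, v)    # accumulate Q
    return Q, R
```

The trailing update line is the one that, in the batched GPU kernels described later, is reorganized so that reflectors stay in registers and the transpose products are computed with shared-memory reductions and warp shuffles.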
SVD algorithms. Most implementations of the SVD are based on the two-phase approach popularized by Trefethen: the matrix A first undergoes bidiagonalization of the form A = Q_U B Q_V^T, where Q_U and Q_V are orthonormal matrices and B is a bidiagonal matrix. The matrix B is then diagonalized using a variant of the QR algorithm, the divide and conquer method, or a combination of both, to produce a decomposition of B from which the complete SVD of A is determined by a backward transformation. These methods require significant algorithmic and programming effort to become robust and efficient, while still suffering from a loss of relative accuracy. An alternative is the one-sided Jacobi method, in which pairs of columns are repeatedly orthogonalized in sweeps using plane rotations until all columns are mutually orthogonal; the process converges when all columns are mutually orthogonal to machine precision. The left singular vectors are the normalized columns of the modified matrix, with the singular values as the norms of those columns; the right singular vectors can be computed either by accumulating the rotations or by solving a system of equations. As our application does not need the right vectors, we omit the details of computing them. Algorithm 2 describes the Jacobi method. Since each pair of columns can be orthogonalized independently, the method is also easily parallelized; the simplicity and inherent parallelism of the method make it an attractive first choice for an implementation on the GPU.

GPU optimization considerations. GPU kernels are launched by specifying a grid configuration, which lets us organize threads into blocks and blocks into a grid. Launching a GPU kernel causes a short stall — as much as several microseconds — as the kernel is prepared for execution; this kernel launch overhead prevents kernels that complete their work faster than the overhead from executing in parallel, essentially serializing them. To overcome this limitation when processing small workloads, the work is batched into a single kernel call where possible, so that operations are executed in parallel without incurring repeated kernel launch overheads, with the grid configuration used to determine the thread-to-work assignment. A warp is a group of 32 threads (on current generation NVIDIA GPUs) within a block that executes a single instruction in lockstep, without requiring explicit synchronization. The occupancy of a kernel tells us the ratio of active warps to the maximum number of warps that a multiprocessor can host; this metric depends on the amount of resources that the kernel uses, such as its register and shared memory usage, the kernel launch configuration, and the compute capability of the card. While high occupancy is not a strict requirement for good performance, it is generally a good idea to aim for it.

The memory of a GPU is organized into a hierarchy of memory spaces, as shown in the corresponding figure. At the bottom lies the global memory, which is accessible by all threads and is the most plentiful, but also the slowest, memory. The next space of interest is the shared memory, accessible only by the threads within a block and configurable with the L1 cache per thread block; on current generation GPUs, shared memory is fast and acts as a programmer-controllable cache. Finally, we have the registers, which are local to the threads; registers are the fastest memory, but the total number of registers usable by a thread without performance implications is limited. If a kernel needs more registers than this limit, registers are spilled to "local" memory, which is slow, cached global memory. Making good use of the faster memories while avoiding excessive register usage is key to good performance on the GPU.

Algorithm 2 (one-sided Jacobi SVD): while not converged, for each pair of columns (A_i, A_j), compute the Gram entries A_i^T A_i, A_j^T A_j, and A_i^T A_j, and, if the pair is not yet orthogonal, apply the plane rotation [A_i, A_j] <- [A_i, A_j] R(c, s) that zeroes their inner product.
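A numpy sketch of the one-sided Jacobi SVD in the style of Algorithm 2; as in the text, the right singular vectors are omitted:

```python
import numpy as np

def jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """One-sided Jacobi: returns left singular vectors U and singular values."""
    U = A.astype(float).copy()
    m, n = U.shape
    for _ in range(max_sweeps):
        off = 0.0                                  # largest normalized off-diagonal
        for i in range(n - 1):
            for j in range(i + 1, n):
                a = U[:, i] @ U[:, i]
                b = U[:, j] @ U[:, j]
                g = U[:, i] @ U[:, j]
                if a * b == 0.0:
                    continue
                off = max(off, abs(g) / np.sqrt(a * b))
                if abs(g) > tol * np.sqrt(a * b):  # orthogonalize the pair
                    zeta = (b - a) / (2.0 * g)
                    t = np.sign(zeta) / (abs(zeta) + np.sqrt(1 + zeta * zeta))
                    c = 1.0 / np.sqrt(1 + t * t)
                    s = c * t
                    U[:, [i, j]] = U[:, [i, j]] @ np.array([[c, s], [-s, c]])
        if off < tol:                              # all pairs orthogonal: converged
            break
    sigma = np.linalg.norm(U, axis=0)              # singular values = column norms
    return U / sigma, sigma
```

Each (i, j) rotation touches only two columns, which is exactly the independence the GPU kernels exploit: disjoint pairs can be orthogonalized concurrently within a sweep.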
method solver phase bidiagonalization handled cpu work employs power method construct rank approximation filters convolutional neural networks routines handle svd many matrices gpus presented thread within warp computes svd single matrix batched decomposition section discuss implementation details batched kernel compare implementations magma cublas libraries implementation one benefit householder algorithm application reflectors trailing matrix line algorithm blocked together expressed multiplication level blas instead multiple multiplications level blas increased arithmetic intensity typically allows performance improve trailing matrix large however small matrix blocks overhead generating blocked reflectors vector form well lower performance multiplication small matrices hinder performance obtain better performance applying multiple reflectors vector form performing transpose multiplication efficiently within thread block first perform regular factorization column block called panel entire panel stored registers thread storing one row panel transpose product computed using series reductions using shared memory warp shuffles batched svd algorithms registers shu exor lane lane lane lane lane lane lane lane lane lane lane lane lane lane lane lane warp figure left matrix rows allocated thread registers warp right parallel warp reduction using shuffles within registers allow threads within warp read registers figure shows data layout theoretical warp size columns registers warp reduction using shuffles factor panel apply reflectors trailing separate kernel optimized performing core product update second kernel load factored panel panel trailing registers apply reflectors one time updating trailing panel registers let take example trailing panel reflector compute product mit flattening product reduction vector shared memory padded avoid bank conflicts reduction serialized reaches size partial reduction vector size take place steps final vector product mit quickly applied registers storing process repeated trailing panel within kernel maximize use reflectors stored registers figure shows one step panel factorization application reflectors trailing submatrix since threads limited per block current architectures use approach developed factorize larger matrices first factorize panels thread block limit single kernel call panels first factorized first loading triangular factor shared memory proceeding panel factorization taking triangular portion consideration computing reflectors updates keep occupancy small matrices devices resident block limit could reached thread limit assign multiple operations single thread block batch matrices dimensions kernels launched using thread blocks size thread block handles operations performance figures show performance batched square rectangular matrices panel width tuned gpu compare vendor implementation cublas well high performance library magma see proposed version performs well rectangular matrices column size starts losing ground magma larger square matrix sizes blocked algorithm starts batched svd algorithms figure one step factorization panel factored produce triangular factor reflectors used update trailing submatrix magma cublas magma cublas magma cublas magma cublas matrix size batched kernel performance square matrices matrix rows batched kernel performance rectangular matrices fixed column size figure comparing batched kernels matrices varying size gpu single double precision show performance benefits nested implementation kernel used factor relatively 
large panels blocked algorithm likely show additional performance improvements large square matrices leave future work register memory jacobi section discuss first batched svd kernel matrix data hosted registers analyze performance resulting kernel implementation implementation avoid repeated global memory accesses attempt fit matrix register memory using layout panel factorization one row per batched svd algorithms performance performance occupancy occupancy occupancy matrix size kernel performance achieved occupancy matrix size effect increasing matrix size occupancy register kernel figure performance batched register memory svd gpu matrices varying size single double precision arithmetics thread however number registers thread uses impact occupancy potentially lead lower performance addition register count exceeds limit set gpu compute capability registers spill local memory resides cached slow global memory since store entire matrix row registers one thread use serial jacobi algorithm compute svd column pairs processed threads one time bulk work lies computation gram matrix atij aij line algorithm update columns line since gram matrix symmetric boils three dot products executed parallel reductions within warp using warp shuffles computation rotation matrix well convergence test performed redundantly thread finally column update done parallel thread register data kernel keep occupancy smaller matrix sizes assigning multiple svd operations single block threads operation assigned warp avoid unnecessary synchronizations performance generate batches test matrices varying condition numbers using latms lapack routine calculate performance based total number rotations needed convergence figures show performance gpu batched svd kernel effect increased register usage occupancy profiling kernel see gram matrix computation takes cycles column rotations take cycles redundantly computed convergence test rotation matrices dominate cycles fact redundant portion computation dominates means preferable assign threads possible processing column pairs due low occupancy larger matrix sizes register spills local memory matrices larger obvious register approach suffice larger matrix sizes leads next implementation based slower shared memory warp warp step warp batched svd algorithms warp step step step step step step figure distribution column pairs warps step sweep shared memory jacobi register based svd performs well small matrix sizes need kernel handle larger sizes maintain reasonably high occupancy leads building kernel based shared memory next level gpu memory hierarchy section discusses implementation details kernel analyze performance compared register kernel implementation version matrix stored entirely shared memory limited per thread block current generation gpus using thread assignment register based kernel would lead poor occupancy due high shared memory consumption potentially warps active multiprocessor instead exploit inherent parallelism jacobi assign warp pair columns warps processing matrix stored shared memory total pairs columns must generate pairings steps step processing pairs parallel many ways generating pairs including round robin ring ordering implement round robin ordering using shared memory keep track column indexes pairs first warp block responsible updating index list step figure shows ordering matrix columns number matrix rows exceeds size warp assignment longer allows use fast warp reductions would force use even resources reductions would done shared memory instead assign 
multiple rows thread serializing portion reduction rows warp reductions used follows observation section assign threads possible process column pairs frees valuable resources increases overall performance reduction row padding used keep rows multiples warp size column padding used keep number columns even kernels launched using threads process matrix figures show examples thread allocation reductions matrix using theoretical warp size batched svd algorithms shared memory serial reduction shufflexor lane lane lane lane lane lane lane lane lane lane lane lane lane lane lane warp warp warp warp matrix columns assigned pairs multiple warps stored shared memory lane parallel reduction column data shared memory using register shuffles initial serial reduction step figure shared memory kernel implementation details performance figures show performance parallel shared svd kernel compared serial register svd kernel gpu see improved growth performance shared memory kernel due greater occupancy well absence local memory transactions looking double precision occupancy notice two dips occupancy matrix sizes number resident blocks become limited limits device dropping resident blocks performance increases steadily increase number threads assigned operation reach matrix size reach block limit threads handle larger sizes must use blocked version algorithm randomized svd see sections respectively global memory block jacobi longer store entire matrix shared memory operate matrix slower global memory instead repeatedly reading updating columns one time block algorithms facilitate cache reuse developed main benefit block jacobi algorithm high degree parallelism however since implement batched routine independent operations use serial block jacobi algorithm individual matrices rely parallelism batch processing parallel version multiple blocks processed simultaneously still used batch size small focus serial version section discuss implementation details two global memory block jacobi algorithms differ way block columns orthogonalized compare performance parallel streamed calls cusolver library routines gram matrix block jacobi svd block jacobi algorithm similar vector algorithm orthogonalizing pairs blocks columns instead vectors first method orthogonalizing pairs block columns based svd gram matrix sweep pair block columns batched svd algorithms reg occupancy reg occupancy register kernel smem kernel register kernel smem kernel occupancy smem occupancy smem occupancy matrix size shared memory kernel performance compared register kernel matrix size comparison occupancy achieved register shared memory kernels figure performance batched shared memory svd gpu matrices varying size single double precision arithmetics aij singular vectors gij updating apij uij orthogonalized forming gram matrix gij aij generating block rotation matrix uij computed left equivalently eigenvectors since symmetric positive definite orthogonalizes block columns since uij apij apij uij uij gij uij diagonal matrix singular values gij orthogonalizing pairs block columns entire matrix orthogonal give left singular vectors normalized columns singular values corresponding column norms right singular vectors needed accumulate action block rotation matrices identity matrix batched implementation use highly optimized batched syrk gemm routines magma compute apply block rotations svd computed shared memory batched kernel since different matrices converge different numbers sweeps keep track convergence operation computing norm entries scaled 
diagonal entries term inexact approximation terms full matrix sweep still good indication convergence cost extra cheap sweep since final sweep actually perform rotations within svd entire batched operation converge max convergence tolerance gives gram matrix path batched block jacobi algorithm compute svd batch matrices global memory worth noting computation gram matrix optimized taking advantage special structure since bulk computation svd result significant performance gains direct block jacobi svd gram matrix method indirect way orthogonalizing block columns may fail converge matrix matrices handled directly batched svd algorithms algorithm batched block jacobi svd pair block columns aij method gram batchsyrk aij else aij batchqr aij max scaledoffdiag batchsvd aij batchgemm aij max orthogonalizing columns using svd since block columns rectangular first compute decomposition followed svd triangular factor overwriting block column apij orthogonal factor multiplying left singular vectors scaled singular values give new block column apij qpij rij qpij uij vijp vij right singular vectors needed accumulate action vijp identity matrix batched implementation use batch routine developed section gemm routines magma multiply orthogonal factor left singular vectors svd computed shared memory batched kernel convergence test used gram matrix method used triangular factor since triangular factor close diagonal matrix pair block columns orthogonal gives direct path batched block jacobi algorithm compute svd batch matrices global memory performance figures show profiling different computational kernels involved batched block algorithms block width specifically percentages total execution time determining convergence memory operations matrix multiplications decompositions svd gram matrix gram matrix approach svd costly phase even larger operations svd decompositions take almost time larger matrices direct approach figure shows performance batched block jacobi svd matrices using methods figure compares performance batched svd routine batched routine uses cusolver svd routine using concurrent streams gpu increasing number streams cusolver showed little performance benefits highlighting performance limitations routines bound kernel launch overhead matrices generated randomly using latms lapack routine condition number gram matrix approach fails converge single precision types matrices whereas direct approach always converges however gram matrix approach performs better applicable larger matrices due strong performance multiplcations performance block algorithm improved preprocessing matrix using decompositions decrease number sweeps required convergence well adaptively selecting pairs block columns based batched svd algorithms misc gemm svd misc gemm svd total time total time computed offdiagonal norms gram matrices changes beyond scope paper focus future work matrix size matrix size gram matrix batched block jacobi svd profile direct batched block jacobi svd profile figure profile different phases block jacobi svd matrices varying size gpu double precision single precision exhibits similar behavior randomized svd mentioned section often interested approximation matrix compute approximation first determining singular value decomposition full matrix truncating smallest singular values corresponding singular vectors however matrix low numerical rank obtain approximation using fast randomization methods section discuss details gram direct direct time streamed cusolver streamed cusolver batched direct batched 
direct batched gram matrix size batched block jacobi svd performance matrix size comparison streamed cusolver batched block jacobi figure batched block jacobi performance matrices varying size gpu single double precision arithmetics batched svd algorithms algorithm batched randomized svd procedure rsvd size rand batchgemm batchqr batchgemm batchqr batchsvd batchgemm batchgemm algorithm compare performance full svd using block jacobi kernel implementation singular values matrix decay rapidly compute approximate svd using simple two phase randomization method first phase determines approximate orthogonal basis columns ensuring qqt numerical rank low sure small number columns well see drawing sample vectors random input vectors obtain reliable approximate basis orthogonalized boils computing matrix random gaussian sampling matrix computing decomposition qry desired approximate orthogonal basis second phase uses fact qqt compute matrix forming svd finalize approximation qub wide matrix first compute decomposition transpose followed svd upper triangular factor algorithm shows core computations randomized method multiplications decompositions singular value decompositions small matrices using batched routines previous sections straightforward form required randomized batched svd robust randomized svd algorithms would employ randomized subspace iteration methods obtain better basis columns rely core kernels discussed performance figure shows profiling different kernels used randomized batched routine determining top singular values vectors randomly generated low rank matrices using latms lapack routine miscellaneous portion includes random number generation using curand library default random number generator gaussian distribution batched transpose operations memory operations see performance kernels play almost equally important roles performance randomized routine matrix size grows keeping computed rank figure shows performance batched batched svd algorithms randomized svd operations figure compares runtimes direct block onesided jacobi routine randomized svd gpu set matrices showing significant time savings achieved even relatively small blocks total time misc gemm svd matrix size figure profile different phases batched randomized svd matrices varying size gpu double precision single precision exhibits similar behavior application hierarchical matrix compression application batched kernels presented consider problem hierarchical matrices problem significant importance building hierarchical matrix algorithms fact primary motivation development batched kernels hierarchical matrices received substantial attention recent years ability store perform algebraic operations near linear complexity rather regular dense matrices require effectiveness hierarchical matrices comes randomized svd randomized svd time randomized svd direct block svd randomized svd direct block svd matrix size batched randomized svd performance matrix size comparison batched block jacobi batched randomized svd figure batched randomized svd performance matrices varying size gpu single double precision first singular values vectors batched svd algorithms basis tree leaf nodes stored explicitly whereas inner nodes represented implicitly using transfer matrices leaves matrix tree simple hierarchical matrix red blocks represent dense leaves green blocks low rank leaves figure basis tree matrix tree leaves simple fact approximate matrix quad blocks many blocks regions rapidly decaying spectrum therefore numerically low rank 
representations low rank representations different levels hierarchical tree reduce memory footprint operations complexity associated matrix algorithms hackbush shows many large dense matrices appear scientific computing discretization integral operators schur complements discretized pde operators covariance matrices well approximated hierarchical representations reviewing analyzing hierarchical matrix algorithms beyond scope paper focus narrow task compressing hierarchical matrices compression task may viewed generalization compression low rank approximation large dense matrices case hierarchical matrices large dense matrices one way perform compression generate single exact approximate svd truncate spectrum desired tolerance produce truncated compressed representation hierarchical matrices equivalent operations involve batched svds small blocks one batched kernel call per level tree hierarchical representation size batch every call number nodes corresponding level tree compression algorithms controllable accuracy important practically often case hierarchical matrices generated analytical methods compressed significant loss accuracy even importantly performing matrix operations additiona multiplication apparent ranks blocks often grow recompressed regularly operations prevent superlinear growth memory requirements representation application use memory efficient variant hierarchical matrices exhibit linear complexity time space many core operations format hierarchical matrix actually represented three trees batched svd algorithms row column basis column trees organize row column indices matrix hierarchically node represents set basis vectors row column spaces blocks nodes leaves tree store vectors explicitly inner nodes store transfer matrices allow implicitly represent basis vectors terms children basis tree relationship nodes called nested basis example binary row basis tree transfer matrices explicitly compute basis vectors node children level figure shows example binary basis tree matrix tree hierarchical blocking formed dual traversal nodes two basis trees leaf determined block either small enough stored dense matrix low rank approximation block meets specified accuracy tolerance latter case node stored coupling matrix level tree rank level block ats matrix index set node row basis tree index set node column basis approximated ats sts vst figure shows leaves matrix quadtree simple hierarchical matrix case symmetric matrices trees identical numerical results symmetric covariance matrix compression compression symmetric represented two trees transfer transfer matrices matrices involves generating new optimal basis tree truncation phase new expresses contents matrix blocks new basis projection phase present version truncation algorithm generates memory efficient basis representation matrix given basis sophisticated algebraic compression algorithms involve use truncation phase order generate efficient basis subject future work truncation phase computes svd nodes basis tree level level explicit nodes level processed parallel produce new basis representation basis vectors leaves compute svd leaf nodes parallel batched kernels truncate singular vectors whose singular values lower relative compression threshold truncating node relative threshold using svd give approximation leaf new leaf nodes leaf level compute projection matrices tree node tid sweeping tree process inner nodes preserving nested basis property using relationship node children level forming matrices using batched 
multiplication compute svd qsw using batched svd kernel truncate leaves form batched svd algorithms truncated matrices sei block rows new transfer matrices level compressed nested basis projection matrices level key computations involved truncation phase consist one batched svd involving leaves tree followed sequence batched svds one per level tree involving transfer matrices data lower levels projection phase consists transforming coupling matrices matrix tree using generated projection matrices truncation phase coupling matrix sts compute new coupling matrix sets sts tst using batched multiplications phase operation consumes much less time truncation phase gpus substantial efficiencies executing regular arithmetically intensive operations results illustration effectiveness algebraic compression procedure generate covariance matrices various sizes spatial gaussian process observation points placed random perturbation regular discretization unit square isotropic exponential kernel correlation length hierarchical representations formally dense covariance matrices formed analytically first clustering points using mean split giving hierarchical index sets basis tree basis vectors transfer nodes generated using chebyshev interpolation matrix tree constructed using dual traversal basis tree coupling matrices generated evaluating kernel interpolation points approximation error constructed matrix controlled varying number interpolation points varying leaf admissibility condition dual tree traversal approximation error used following tests used maintain accuracy relative truncation error compressed matrices figure shows memory consumption compression hierachical covariance matrices leaf size initial rank corresponding chebyshev grid dense part remains untouched low rank part representation sees substantial decrease memory consumption compression minimal loss accuracy figure shows expected asymptotic linear growth time compression algorithm shows effect using randomized svd samples instead full svd computed shared memory kernel figure shows another example admissibility condition weakened generate coarser matrix tree increased rank corresponding chebyshev grid randomized svd samples also reduces compression time compared full svd using direct block jacobi kernels conclusions future work paper described implementation efficient batched kernels decomposition randomized singular value decomposition low rank matrices hosted gpu batched kernel provides significant performance improvements small matrices existing state art libraries batched svd routines first kind gpu performance exceeding batch matrices size batched svd algorithms dense portion original low rank compressed low rank dense portion original low rank compressed low rank full svd full svd compression time memory consumption randomized svd randomized svd problem size memory savings problem size compression time using randomized svd samples full svd using shared memory kernel figure compression results sample covariance matrices generated spatial statistics gpu single double precision using relative frobenius norm threshold initial rank full svd full svd randomized svd randomized svd compression time problem size figure compression time coarser matrix tree initial rank comparing randomized svd samples full svd precision illustrated power kernels problem involving algebraic compression hierarchical matrices stored entirely gpu memory demonstrated compression algorithm yielding significant memory savings practical problems future plan 
investigate alternatives jacobi algorithm svd small blocks randomized algorithm improve performance blocked algorithms using preconditioning adaptive block column pair selection also plan develop suite hierarchical matrix operations suited execution modern gpu manycore architectures batched svd algorithms acknowledgments thank nvidia corporation providing access gpu used work references halko martinsson tropp finding structure randomness probabilistic algorithms constructing approximate matrix decompositions siam review vol golub van loan matrix computations johns hopkins university press trefethen bau numerical linear algebra society industrial applied mathematics demmel veselic jacobi method accurate siam journal matrix analysis applications vol haidar dong tomov luszczek dongarra framework batched factorization algorithms applied block householder isc ser lecture notes computer science kunkel ludwig vol springer haidar dong luszczek tomov dongarra optimization performance energy batched matrix computations gpus proceedings workshop general purpose processing using gpus ser new york usa acm wilt cuda handbook comprehensive guide gpu programming pearson education volkov better performance lower occupancy proceedings gpu technology conference gtc vol charara keyes ltaief batched triangular dense linear algebra kernels small matrix sizes gpus submitted acm transactions mathematical software online available http anderson ballard demmel keutzer decomposition gpus parallel distributed processing symposium ipdps ieee international may kotas barhen singular value decomposition utilizing parallel algorithms graphical processors oceans kona sept kang lee improving performance convolutional neural networks separable filters gpu berlin heidelberg springer berlin heidelberg badolato paula farias many svds gpu image mosaic assemble international symposium computer architecture high performance computing workshop oct tomov nath ltaief dongarra dense linear algebra solvers multicore gpu accelerators proc ieee ipdps atlanta ieee computer society april doi nvidia cublas library user guide http nvidia online available http cheng grossman mckercher professional cuda programming ser wiley kurzak ltaief dongarra badia scheduling dense linear algebra operations multicore processors concurrency computation practice experience vol online available http zhou brent parallel implementation jacobi algorithm singular value decompositions parallel distributed processing proceedings euromicro workshop jan zhou brent parallel ring ordering algorithm efficient jacobi svd computations journal parallel distributed computing vol svd algorithms distributed memory systems hypercubes rings parallel algorithms applications vol svd algorithms distributed memory systems meshes parallel algorithms applications vol new dynamic orderings parallel svd algorithm parallel processing letters vol nvidia cusolver library user guide http nvidia online available http batched svd algorithms efficient parallel svd algorithm parallel vol online available http hackbusch khoromskij sparse arithmetic part application problems computing vol hackbusch khoromskij sauter lectures applied mathematics bungartz hoppe zenger eds springer berlin heidelberg hackbusch sparse matrix arithmetic based part introduction computing vol hierarchical matrices algorithms analysis ser springer series computational mathematics berlin springer vol garcke approximating gaussian processes european conference machine learning springer grasedyck hackbusch construction 
arithmetics computing vol
8
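The batched-SVD text above hinges on two algorithms it describes only in prose: the one-sided Jacobi SVD (repeatedly orthogonalize column pairs with plane rotations; at convergence the singular values are the column norms and the left singular vectors the normalized columns) and the two-phase randomized truncated SVD of Halko et al., built from random sampling, QR, and a small SVD. The sketches below are minimal CPU/NumPy stand-ins for those batched GPU kernels, added purely as a reading aid: they are not the authors' implementation, they assume a full-column-rank input, and the names (`one_sided_jacobi_svd`, `randomized_svd`, `oversample`) are illustrative, not from the paper.

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """One-sided Jacobi: orthogonalize column pairs until the scaled
    off-diagonal Gram entries fall below tol (columns come out unsorted)."""
    A = np.array(A, dtype=float)
    n = A.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                app = A[:, p] @ A[:, p]          # 2x2 Gram matrix of the pair
                aqq = A[:, q] @ A[:, q]
                apq = A[:, p] @ A[:, q]
                off = max(off, abs(apq) / np.sqrt(app * aqq))
                if abs(apq) <= tol * np.sqrt(app * aqq):
                    continue
                tau = (aqq - app) / (2.0 * apq)  # Jacobi rotation parameter
                t = np.copysign(1.0, tau) / (abs(tau) + np.hypot(1.0, tau))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                R = np.array([[c, s], [-s, c]])
                A[:, [p, q]] = A[:, [p, q]] @ R  # rotate the column pair
                V[:, [p, q]] = V[:, [p, q]] @ R  # accumulate right vectors
        if off < tol:                            # sweep found nothing to rotate
            break
    sigma = np.linalg.norm(A, axis=0)            # singular values = column norms
    return A / sigma, sigma, V                   # A0 = U @ diag(sigma) @ V.T

def randomized_svd(A, k, oversample=8, seed=0):
    """Two-phase randomized truncated SVD: sample, orthogonalize, small SVD."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)               # approximate range basis
    B = Q.T @ A                                  # small (k+p) x n problem
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]
```

A quick check on random data, `U, s, V = one_sided_jacobi_svd(np.random.rand(16, 8))`, should reproduce the input as `U * s @ V.T` to machine precision; the batched GPU kernels described in the text apply the same per-matrix arithmetic, with the three Gram dot products mapped to warp-shuffle reductions.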
analytical simplified models dynamic analysis short skew bridges moving loads feb nguyena goicoleaa group computational mechanic school civil engineering upm spain abstract skew bridges common highways railway lines non perpendicular crossings encountered structural effect skewness additional torsion bridge deck may considerable effect making analysis design complex paper analytical model following beam theory firstly derived order evaluate dynamic response skew bridges moving loads following simplified model also considered includes vertical beam bending natural frequencies eigenmodes orthogonality relationships determined boundary conditions dynamic response determined time domain using exact integration models validated numerical examples comparing results obtained models parametric study performed simplified model order identify parameters significantly influence vertical dynamic response skew bridge traffic loads results show grade skewness important influence vertical displacement hardly vertical acceleration bridge torsional stiffness really effect vertical displacement skew angle large span length reduces skewness effect dynamic behavior skew bridge keywords skew bridge bridge modelling modal analysis moving load corresponding author email addresses khanh nguyen goicolea preprint submitted engineering structures february introduction skew bridges common highways railway lines non perpendicular crossings encountered structural effect skewness additional torsion bridge deck may considerable effect making analysis design complex large research effort using analytical numerical well experimental approaches made last decades order better understand behavior type bridge static dynamic loadings special attention given researches related highway skew bridge subjected earthquake loadings fact first work subject reported ghobarah tso solution based beam model capable capturing flexural torsional modes proposed study dynamic response skewed highway bridges intermediate supports maragakis jennings obtained earthquake response skew bridge modelling bridge deck rigid body using finite element models socalled stick model firstly introduced wakefield stick model consists beam element representing bridge deck rigid flexible beam elements array translational rotational springs substructure bridge type model successfully used later works despite simplicity stick model provide reasonably good approximations preliminary assessment sophisticated models using shell beam elements also proposed study subject regarding behavior skew bridges traffic loads work subject performed models using combination shell beam elements assisted experimental testing models give good approximation require end user effort introduce information modelling structure element types sizes dimension material properties connection types etc therefore use limited determined case studies challenged parametric study monte carlo simulations large number case studies possible alternative develop analytical solution able capture behavior skew bridge give sufficient accuracy advantage analytical solution data input much simpler general information structure mass span length flexural torsional stiffness therefore use easy end user course able parametric study context main objective work derive analytical solution based beam theory skew bridge moving loads simplified model proposed order assimilate effect skewness support vertical vibration bridge exact integration time domain used solve differential equations models validated numerical 
examples comparing results obtained models parametric study performed simplified model order identify parameters significantly influence vertical dynamic response skew bridge traffic loads formulation problem skew bridge shown fig considered study work line abutment support forms orthogonal line centreline angle defined angle skewness length bridge taken length bridge idealized using following assumptions bridge deck modelled beam supported ends linear elastic behavior bridge deck stiff horizontal plane flexural deflection direction neglected bending stiffness torsional stiffness mass per unit length constant length warping distortion effects torsion bridge deck small enough neglected longitudinal axis deck width abutment bridge figure skew bridge plane view bridge model sketch assumptions bending bridge plane twisting axis principal types deformation bridge deck governing equations motion transverse torsional vibration transverse torsional loads radius gyration transverse deflection torsional rotation bridge deck transverse torsional loads applied bridge distance time respectively external damping mechanism introduced familiar term assumed proportional mass natural frequencies mode shapes using modal superposition technique solution free vibrations bridge deck decoupled infinite set modal generalized coordinates mode shapes nth flexural torsional mode shape generalized flexural torsional coordinates nth mode shape assumed governing equations free vibrations rewritten mode vibration solutions equations found many textbooks dynamic expressed following form sin cos sinh cosh sin cos six constants determined boundary conditions boundary conditions problem shown fig bridge ends abutments therefore support lines vertical displacement rotation axis bending moment axis using change coordinates shown fig following relationships obtained figure coordinate systems cos sin sin cos sin cos hence boundary conditions problem written sin sin cos sin cos sin cos cos six conditions homogeneous system equations obtained vector six constants determined matrix expressed sin cos sinh cosh cos sin cosh sinh sin cos sinh cosh sin cos cos sin cos sin eigenvalues calculated solving det noted determinant matrix expressed function unique variable extraction eigenvalues performed using symbolic matical program maple matlab fact study symbolic calculation implemented matlab used extract values desired modes used dynamic calculation eigenvector corresponding nht mode obtained applying singular value decomposition matrix orthogonality relationship order apply modal superposition technique solving forced vibration problems skew bridges necessary determine orthogonality relationship mode shapes basis equations equations reformulated multiplying sides arbitrary mode respectively integrating respect length one obtains cos cos cos sin means using integration parts side equations twice applying boundary conditions derived problem gives tan tan interchanging indices equation subtracting original form gives following relations tan tan next subtracting equation equation gives rise due fact condition established fulfilled corresponds orthogonality relationship skew bridge vibration induced moving load convoy moving loads natural frequencies associated mode shapes found orthogonality relationship modes known possible apply modal superposition technique obtaining response skew bridge due moving load vertical load twisting moment apply bridge deck determined cot magnitude moving load dirac delta function load eccentricity respect 
mass centre bridge deck section first part right side due skewness bridge second part due load eccentricity using modal superposition technique applying orthogonality relationship differential equations generalized coordinates uncoupled cot cot order solve differential equations several techniques applied work solution obtained using integration method based interpolation excitation advantage gives exact solution highly efficient numerical procedure solution time determined awi cqi velocity given cot cot coefficients depend structure parameters time step detail formulations found appendix moving load convoy moving loads figure moving loads case bridge forced convoy moving loads shown fig uncoupled differential equations generalized coordinates mode vibration given cot number moving loads distance first load load magnitude load solution obtained similar way case moving load attention needs paid determination modal loads right side loads enter bridge leave bridge modal loads associated loads zero simplified model part work simplified model developed order assimilate effect skewness support vertical vibration skew bridges well known skewness supports causes torsional moment bridge even vertical centric loads torsional moments turn certain influence bending moment particular negative bending moment introduced supports shown fig making purpose vertical flexure skew beam behaves like beam words beam rotational support stiffness shown fig noted negative bending moments supports change load position bridge therefore stiffness rotational support also changed different different supports order simplify calculation stiffness rotational support considered supports assumption stiffness rotational support determined additions previously adopted assumptions following additional assumptions used simplified model vertical vibration taken account model load eccentricity considered bridge deck modelled beam theory figure diagram bending moment skew bridge static load simplified model adopted skew bridge natural frequencies mode shapes governing equation free vibration simplified model similar solution equation given determination frequencies correspondent mode shapes solving homogeneous system equations vector containing four mode shape coefficients characteristic matrix determined applying boundary conditions simplified model proposed study boundary conditions vertical displacement supports equilibrium moments supports therefore characteristic matrix obtained cos cos sin sinh cosh sin cos cosh sinh sinh cosh procedure obtain eigenvalues eigenvector similar previously described section orthogonality relationship similar analysis section equation rewritten using boundary conditions simplified model interchanging indices subtracting resulting equation original form gives orthogonality relationship mode shapes simplified model moving load convoy moving loads dynamic response bridge moving loads obtained using way described analytical model section difference torsional response eliminated calculation numerical validations two numerical examples used order validate proposed models results obtained proposed models compared obtained finite element simulations example model developed program feap built beam element stick model moving load convoy moving loads applied nodal forces along centreline axis using amplitude functions dynamic responses models obtained solving time domain using modal superposition technique time step examples first five modes vibration considered calculation constant damping ratio assumed 
considered modes attention paid select total number modes vibration considered models since first five modes vibration obtained model always corresponding first five modes obtained analytical simplified models figure cross sections example example example skew slab bridge moving load skew slab bridge considered example skew angle bridge bridge cross section bridge shown fig following geometric mechanical characteristics used calculation elastic modulus poisson coefficient properties cross section damping ratio bridge subjected action moving load constant speed frequencies first five modes considered calculation extracted listed table models noted good agreement natural frequency analytical simplified models fact maximum difference frequency models exceed similar agreement also observed dynamic responses terms vertical displacement acceleration three models shown fig result remarked proposed simplified model enable simulate vertical dynamic response skew bridge table frequencies first five modes vibration different models modes anal model simpl model model description mode model mode model mode model mode model mode model example skew bridge convoy moving loads example attempts simulate dynamic response railway bridge hslm train desired application proposed displacement anal model simpl model model times acceleration anal model simpl model model times figure dynamic responses moving load displacement acceleration analytical simplified methods presented paper studied bridge typical bridge designed cross section shown fig skew angle considered bridge geometric mechanical properties bridge cross section used calculation elastic modulus poisson coefficient damping ratio train consists intermediate coaches power coach end coach either sides train total train axles load dynamic analysis carried different train speeds ranging increment vertical displacement acceleration obtained compared models envelope maximum vertical displacement acceleration also depicted models order validate proposed analytical simplified model presented paper table gives natural frequencies first five modes vibration considered calculation known bridge train velocities resonance estimated using following formula fundamental frequency regular distance load axles train according first three resonance peaks occur train velocities almost dynamic response train speed shown fig observed train speed near second critical speed responses amplified axle passing bridge envelope curves maximum vertical displacement acceleration shown fig noted fig considered range train velocities two peaks response displacement acceleration occur speeds closed predicted critical trains therefore remarked estimation train velocities resonance proposed still valid skew bridge furthermore figs concluded results obtained using analytical simplified model agree well ones obtained using model noted time consumed calculation using analytical simplified model approximately times faster ones using model cpu time required completing analysis using analytical model time model standard equipped intel xeon processor ghz ram table frequencies first five modes vibration different models modes anal model simpl model model description mode model mode model mode model mode model mode model parametric study part paper three parametric studies performed using simplified model order identify parameters influence significantly vertical dynamic response skew bridge moving loads study value studied parameter changed dynamic responses train corresponding value parameter 
obtained depicted function studied parameter basic properties skew bridge example adopted section effect skew angle figure shows maximum dynamic responses vary skew angle bridge forced train observed fig skewness important influence maximum vertical displacement bridge general displacement decreases skew angle increases sharp change slope observed skew angle value skew angle displacement decreases quickly furthermore changing train velocity resonance also observed skewness changed fact train velocity anal model simpl model model displacement times anal model simpl model model acceleration times figure dynamic responses train velocity displacement acceleration resonance increases skewness increases regarding maximum acceleration skew angle pronounced influence acceleration hardly increases skew angle grows effect torsional flexural stiffness ratio study torsional stiffness changed respect flexural stiffness ratio varies range figure shows variation maximum dynamic responses function torsional flexural stiffness ratio observed maximum vertical displacement increases slightly displacement anal model simpl model model velocity acceleration anal model simpl model model velocity figure envelope maximum response train displacement acceleration ratio increases maximum acceleration barely changed noted skew angle used study constant skew angle range skewness small influence dynamic response bridge mentioned preceding section shown fig result torsional stiffness pronounced influence vertical deflection small skew angles larger skew angle example torsional stiffness noticeable effect maximum vertical displacement shown fig maximum acceleration almost completely unaffected torsional stiffness acceleration displacement ocit skew angle ocit skew angle figure effect skewness dynamic responses displacement acceleration skew angles selected see fig displacement acceleration ocit ocit figure effect torsional flexural stiffness ratio dynamic responses skew angle displacement acceleration effect span length part paper influence span length dynamic response skew bridge carried span length changed increment order obtain consistent comparison results obtained parametric study cross section bridge redesigned span length using design criteria ratio depth cross section span length acceleration displacement ocit ocit figure effect torsional flexural stiffness ratio maximum dynamic responses skew angle displacement acceleration constant ratio usually applied railway bridge design depth cross section changed bridge length dimensions cross section considered unmodified basic properties cross section needed parametric study listed table table principal properties bridge parametric study first natural frequency corresponding span length obtained depicted fig different skew angles varying variation magnitude first natural frequency skew angle span length also obtained shown fig observed variation frequency span length generated skewness effect variation greater span length shorter decreases almost linearly span length therefore remarked span length decreases skewness effect bridge term natural frequency variation first natural frequency span length span length figure influence span length natural frequency skew bridge first natural frequency variation frequency well known dynamic response bridge traffic loads depends properties vehicle traveling bridge proper characteristics bridge parametric study traffic loads unmodified characteristics bridge changed span length therefore comparison dynamic responses term displacement 
acceleration determined train velocity consistent consistent comparison peak corresponding second train velocity resonance span length compared particular dynamic amplification factor daf vertical displacement maximum vertical acceleration used compare depicted fig observed daf decreases span length increases reduction variation magnitude daf displacement different skew angles span length increases however reduction variation magnitude maximum acceleration observed different skew angles remarked span length reduces skewness effect dynamic response bridge term vertical acceleration acceleration daf span length span length figure maximum dynamic responses skew bridge peak corresponding second velocity resonance different skew angles dynamic amplification factor displacement acceleration conclusions paper analytical model determining dynamic response skew bridge moving loads presented simplified model also proposed modal superposition technique used models decompose differential equation motions natural frequencies mode shapes orthogonality relationship determined boundary conditions modal equations solved exact integration therefore models highly accurate robust computationally efficient proposed models validated results obtained models using modal superposition method furthermore results obtained paper following conclusions made estimation train velocities resonance proposed still valid skew bridge grade skewness bridge plays important role dynamic behavior bridge term vertical displacement maximum vertical displacement decreases skew angle increases vibration bridge term vertical acceleration hardly affected skewness critical skew angle effect skewness noticeable cross section used parametric study critical skew angle torsional stiffness really important influence vibration bridge term vertical displacement skew angle larger critical skew angle vertical acceleration unaffected torsional stiffness span length reduces skewness effect dynamic behavior skew bridge term natural frequency acceleration appendix parameters exact integration sin cos sin cos sin sin cos sin sin cos sin cos sin cos acknowledgement authors grateful support mineco spanish government project edinpf ref support provided technical university madrid spain references kollbrunner basler torsion strucutres engineering approach berlin manterola bridges design calculation construction spanish colegio ingenieros caminos canales puertos madrid spain ghobarah tso seismic analysis skewed highway bridges intermediate supports earthquake engineering structural dynamics maragakis jennings analytical models rigid body motions earthquake engineering structural dynamics january wakefield nazmy billington analysis seismic failure skew bridge journal structural engineering meng lui seismic analysis assessment skew highway bridge engineering structures meng lui liu dynamic response skew highway bridges journal earthquake engineering nielson desroches analytical seismic fragility curves typical bridges central southeastern united states earthquake spectra pekcan seismic response skewed bridges earthquake engineering engineering vibration kaviani zareian taciroglu seismic behavior reinforced concrete bridges abutments engineering structures yang werner desroches seismic fragility analysis skewed bridges central southeastern united states engineering structures meng lui refined stick model dynamic analysis skew highway bridges journal bridge engineering nouri ahmadi influence skew angle continuous composite girder bridge journal bridge 
engineering deng phares greimann shryack hoffman behavior curved skewed bridges integral abutments journal constructional steel research mallick raychowdhury seismic analysis highway skew bridges nonlinear interaction transportation geotechnics bishara liu skew composite bridges journal structural engineering helba kennedy skew composite bridges ultimate load canadian journal civil engineering khaloo mirzabozorg load distribution factors simply supported skew bridges journal bridge engineering menassa mabsout tarhini frederick influence skew angle reinforced concrete slab bridges journal bridge engineering ashebo chan evaluation dynamic loads skew box girder continuous bridge part field test modal analysis engineering structures sheng scanlon linzell skewed concrete box girder bridge static dynamic testing analysis engineering structures chopra dynamics structures theory applications earthquake engineering edition prentice hall taylor element analysis program url http cen actions structures part traffic loads bridges rue stassart brussels
5
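The skew-bridge text above quotes its central formulas only as extraction residue ("sin cos sinh cosh sin cos six constants", "first three resonance peaks occur train velocities"). Under standard Euler-Bernoulli beam theory with St. Venant torsion they can be reconstructed as follows, offered as a hedged reading of that residue rather than the authors' exact notation:

```latex
% Mode-shape ansatz with six constants fixed by the skew boundary conditions:
w_n(x)      = C_1 \sin\beta_n x + C_2 \cos\beta_n x
            + C_3 \sinh\beta_n x + C_4 \cosh\beta_n x, \qquad
\theta_n(x) = C_5 \sin\lambda_n x + C_6 \cos\lambda_n x .
% Resonance speeds for regular axle spacing d and first natural frequency f_1:
v_i = \frac{d\, f_1}{i}, \qquad i = 1, 2, 3, \ldots
```

A short sketch of the resonance-speed rule, assuming a simply supported span (the zero-skew limit); `EI`, `m`, `L`, `d` and the numbers in the example are illustrative, not taken from the paper:

```python
import numpy as np

def first_bending_frequency(EI, m, L):
    """f1 [Hz] of a simply supported Euler-Bernoulli beam with bending
    stiffness EI, mass per unit length m, span L: f1 = pi/(2 L^2) sqrt(EI/m)."""
    return (np.pi / (2.0 * L**2)) * np.sqrt(EI / m)

def resonance_speeds(d, f1, n_peaks=3):
    """Speeds v_i = d * f1 / i at which regularly spaced axles
    (spacing d) excite the first bending mode in resonance."""
    return [d * f1 / i for i in range(1, n_peaks + 1)]

# Illustrative numbers only:
f1 = first_bending_frequency(EI=5.0e10, m=1.5e4, L=25.0)   # about 4.6 Hz
print([round(3.6 * v) for v in resonance_speeds(d=26.0, f1=f1)])  # in km/h
```

The text reports that the envelope peaks in its HSLM example occur near the speeds this rule predicts, which is why it concludes the estimate remains valid for skew bridges.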
"efficient pac learning crowd apr pranjal avrim nika yishay abstract recent years crowdsourcing beco(...TRUNCATED)
8
"automated identification trampoline skills using computer vision extracted pose estimation paul con(...TRUNCATED)
1
"parsing methods streamlined sep luca breveglieri stefano crespi reghizzi angelo morzenti dipartimen(...TRUNCATED)
6
"similarity rasmus pagh ninh pham francesco morten mar university copenhagen denmark abstract presen(...TRUNCATED)
8
"jan completion derived double centralizer marco porta liran shaul amnon yekutieli abstract let comm(...TRUNCATED)
0